| input (string, 286 to 19k chars) | output (string, 1 to 15.8k chars) | metadata (dict) | _instance_id (string, 15 to 62 chars) |
|---|---|---|---|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Generative Adversarial Networks (GANs) have achieved remarkable results in the task of generating realistic natural images.
In most applications, GAN models share two aspects in common.
On the one hand, GANs training involves solving a challenging saddle point optimization problem, interpreted as an adversarial game between a generator and a discriminator functions.
On the other hand, the generator and the discriminator are parametrized in terms of deep convolutional neural networks.
The goal of this paper is to disentangle the contribution of these two factors to the success of GANs.
In particular, we introduce Generative Latent Optimization (GLO), a framework to train deep convolutional generators without using discriminators, thus avoiding the instability of adversarial optimization problems.
Throughout a variety of experiments, we show that GLO enjoys many of the desirable properties of GANs: learning from large data, synthesizing visually-appealing samples, interpolating meaningfully between samples, and performing linear arithmetic with noise vectors.
Generative Adversarial Networks (GANs) BID15 are a powerful framework to learn generative models of natural images.
GANs learn these generative models by setting up an adversarial game between two learning machines.
On the one hand, a generator plays to transform noise vectors into fake samples, which resemble real samples drawn from a distribution of natural images.
On the other hand, a discriminator plays to distinguish between real and fake samples.
During training, the generator and the discriminator learn in turns.
First, the discriminator learns to assign high scores to real samples, and low scores to fake samples.
Then, the generator learns to increase the scores of fake samples, so as to fool the discriminator.
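For illustration only, the alternating protocol just described can be sketched as a single training step. This is a generic non-saturating GAN update, not the exact procedure of any work cited here; the generator `G`, discriminator `D` (assumed to return one logit per sample), their optimizers, and the noise dimension are placeholders.

```python
import torch
import torch.nn.functional as F

def gan_training_step(G, D, opt_g, opt_d, real, z_dim=100):
    """One alternating update: discriminator first, then generator."""
    batch, device = real.size(0), real.device
    ones = torch.ones(batch, 1, device=device)
    zeros = torch.zeros(batch, 1, device=device)

    # Discriminator step: assign high scores to real samples, low scores to fakes.
    fake = G(torch.randn(batch, z_dim, device=device)).detach()  # no gradient into G here
    d_loss = (F.binary_cross_entropy_with_logits(D(real), ones) +
              F.binary_cross_entropy_with_logits(D(fake), zeros))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: increase the discriminator's score on fresh fakes (fool it).
    g_loss = F.binary_cross_entropy_with_logits(
        D(G(torch.randn(batch, z_dim, device=device))), ones)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```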
After proper training, the generator is able to produce realistic natural images from noise vectors. Recently, GANs have been used to produce high-quality images resembling handwritten digits, human faces, and house interiors BID36.
Furthermore, GANs exhibit three strong signs of generalization.
First, the generator translates linear interpolations in the noise space into semantic interpolations in the image space.
In other words, a linear interpolation in the noise space will generate a smooth interpolation of visually-appealing images.
Second, the generator allows linear arithmetic in the noise space.
Similarly to word embeddings BID31 , linear arithmetic indicates that the generator organizes the noise space to disentangle the nonlinear factors of variation of natural images into linear statistics.
Third, the generator is able to synthesize new images that resemble those of the data distribution. This allows for applications such as image in-painting BID18 and super-resolution BID26.
Despite their success, training and evaluating GANs is notoriously difficult. The adversarial optimization problem implemented by GANs is sensitive to random initialization, architectural choices, and hyper-parameter settings. In many cases, a fair amount of human care is necessary to find the correct configuration to train a GAN on a particular dataset. It is common to observe generators with similar architectures and hyper-parameters exhibiting dramatically different behaviors. Even when properly trained, the resulting generator may synthesize samples that resemble only a few localized regions (or modes) of the data distribution BID14. While several advances have been made to stabilize the training of GANs BID37, this task remains more art than science. The difficulty of training GANs is aggravated by the challenges in their evaluation: since evaluating the likelihood of a GAN with respect to the data is an intractable problem, the current gold standard to evaluate the quality of GANs is to eyeball the samples produced by the generator. The evaluation of discriminators is also difficult, since their visual features do not always transfer well to supervised tasks BID12 BID13. Finally, the application of GANs to non-image data has been relatively limited.
Research question: To model natural images with GANs, the generator and discriminator are commonly parametrized as deep Convolutional Networks (convnets) BID24. Therefore, it is reasonable to hypothesize that the reasons for the success of GANs in modeling natural images come from two complementary sources: (A1) leveraging the powerful inductive bias of deep convnets, and (A2) the adversarial training protocol. This work attempts to disentangle the factors of success (A1) and (A2) in GAN models. Specifically, we propose and study one algorithm that relies on (A1) and avoids (A2), but still obtains competitive results when compared to a GAN.
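As a rough sketch of such an algorithm, the following GLO-style loop jointly optimizes a convolutional generator and one learnable latent code per training image under a simple reconstruction loss, with no discriminator. The choice of MSE as the reconstruction loss, the unit-ball projection of the codes, and the batching details are assumptions for illustration, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def train_glo(generator, images, z_dim=128, epochs=50, lr=1e-3):
    """Jointly optimize a generator and one latent code per image (no discriminator)."""
    n = images.size(0)
    z = torch.randn(n, z_dim, requires_grad=True)                # learnable codes, one per image
    opt = torch.optim.Adam(list(generator.parameters()) + [z], lr=lr)
    for _ in range(epochs):
        for idx in torch.randperm(n).split(64):                  # mini-batches of image indices
            recon = generator(z[idx])
            loss = F.mse_loss(recon, images[idx])                 # simple reconstruction loss
            opt.zero_grad(); loss.backward(); opt.step()
            with torch.no_grad():                                 # keep codes inside the unit ball
                norms = z[idx].norm(dim=1, keepdim=True).clamp(min=1.0)
                z[idx] = z[idx] / norms
    return z
```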
The experimental results presented in this work suggest that, in the image domain, we can recover many of the properties of GAN models by using convnets trained with simple reconstruction losses.
While this does not invalidate the promise of GANs as generic models of uncertainty or as methods for building generative models, our results suggest that, in order to more fully test the adversarial construction, research needs to move beyond images and convnets.
On the other hand, practitioners who care only about generating images for a particular application, and who find that the parameterized discriminator does improve their results, can use reconstruction losses in their model searches, alleviating some of the instability of GAN training. While the visual quality of the results is promising, especially on the CelebA dataset, it is not yet at the level of the results obtained by GANs on the LSUN bedrooms.
This suggests several research directions: one possibility, suggested by Figure 3, is that being able to cover the entire dataset is too onerous a task if all that is required is to generate a few nice samples.
In that figure we see that GANs have trouble reconstructing randomly chosen images at the same level of fidelity as their generations.
However, GANs can produce good images after a single pass through the data with SGD.
In future work we hope to better understand the tension between these two observations.
There are many possibilities for improving the quality of GLO samples beyond understanding the effects of coverage.
For example, other loss functions (e.g., a VGG metric, as in BID32), model architectures (here we stayed close to DCGAN for ease of comparison), and more sophisticated sampling methods after training the model may all improve the visual quality of the samples. There is also much work to be done in adding structure to the Z space.
Because the methods here keep track of the correspondence between samples and their representatives, and because the Z space is free, we hope to be able to organize the Z space in interesting ways as we train. | Are GANs successful because of adversarial training or the use of ConvNets? We show that a ConvNet generator trained with a simple reconstruction loss and learnable noise vectors leads to many of the desirable properties of a GAN. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:8 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We perform an in-depth investigation of the suitability of self-attention models for character-level neural machine translation.
We test the standard transformer model, as well as a novel variant in which the encoder block combines information from nearby characters using convolution.
We perform extensive experiments on WMT and UN datasets, testing both bilingual and multilingual translation to English using up to three input languages (French, Spanish, and Chinese).
Our transformer variant consistently outperforms the standard transformer at the character-level and converges faster while learning more robust character-level alignments.
Most existing Neural Machine Translation (NMT) models operate on the word or subword-level.
Often, these models are memory inefficient because of large vocabulary size.
Character-level models (Lee et al., 2017; Cherry et al., 2018) instead work directly on raw characters, resulting in a more compact language representation, while mitigating out-of-vocabulary (OOV) problems (Luong and Manning, 2016) .
They are especially suitable for multilingual translation, where multiple languages can be modelled using the same character vocabulary.
Multilingual training can lead to improvements in the overall performance without any increase in model complexity (Lee et al., 2017) .
It also circumvents the need to train separate models for each language pair.
Models based on self-attention have achieved excellent performance on a number of tasks including machine translation (Vaswani et al., 2017) and representation learning (Devlin et al., 2019; Yang et al., 2019) .
Despite the success of these models, no previous work has considered their suitability for character-level translation. In this work, we perform an in-depth investigation of the suitability of self-attention models for character-level translation.
We consider two models: the standard transformer from (Vaswani et al., 2017) ; as well as a novel variant, which we call the convtransformer (Figure 1 , Section 3).
The latter uses convolution to facilitate interactions among nearby character representations.
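The exact convtransformer block is defined in Section 3 of the paper; purely as an illustration of the general idea, the sketch below inserts a depth-wise 1D convolution over neighboring character embeddings in front of a standard self-attention encoder layer. The kernel size, the residual combination, and the placement of the convolution are assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class ConvAugmentedEncoderLayer(nn.Module):
    """Standard transformer encoder layer preceded by a depth-wise 1D convolution
    that lets each character position mix information from its neighbours."""
    def __init__(self, d_model=256, nhead=4, kernel_size=5):
        super().__init__()
        self.conv = nn.Conv1d(d_model, d_model, kernel_size,
                              padding=kernel_size // 2, groups=d_model)
        self.norm = nn.LayerNorm(d_model)
        self.attn_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)

    def forward(self, x):                                        # x: (batch, seq_len, d_model)
        local = self.conv(x.transpose(1, 2)).transpose(1, 2)     # local character context
        x = self.norm(x + local)                                 # residual merge with the input
        return self.attn_layer(x)                                # global interactions via attention
```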
We evaluate these models on both bilingual and multilingual translation to English, using up to three input languages: French (FR), Spanish (ES), and Chinese (ZH).
We compare their translation performance on close (e.g., FR and ES) and on distant (e.g., FR and ZH) input languages (Section 5.1), and we analyze their learned character alignments (Section 5.2).
We find that self-attention models work surprisingly well for character-level translation, performing competitively with equivalent subword-level models while requiring up to 60% fewer parameters.
At the character-level, the convtransformer performs better than the standard transformer, converging faster and producing more robust alignments.
We performed a detailed investigation of the utility of self-attention models for character-level translation, testing the standard transformer architecture, as well as a novel variant augmented by convolution in the encoder to facilitate information propagation across characters.
Our experiments show that self-attention performs very well on characterlevel translation, performing competitively with subword-level models, while requiring fewer parameters.
Training on multiple input languages is also effective and leads to improvements across all languages when the source and target languages are similar.
When the languages are different, we observe a drop in performance, in particular for the distant language.
In future work, we will extend our analysis to include additional source and target languages from different language families, such as more Asian languages.
We will also work towards improving the training efficiency of character-level models, which is one of their main bottlenecks.
Appendix A (Example model outputs): Tables 3, 4 and 5 contain example translations produced by our different bilingual and multilingual models trained on the UN datasets. | We perform an in-depth investigation of the suitability of self-attention models for character-level neural machine translation. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:80 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We propose a context-adaptive entropy model for use in end-to-end optimized image compression.
Our model exploits two types of contexts, bit-consuming contexts and bit-free contexts, distinguished based upon whether additional bit allocation is required.
Based on these contexts, we allow the model to more accurately estimate the distribution of each latent representation with a more generalized form of the approximation models, which accordingly leads to an enhanced compression performance.
Based on the experimental results, the proposed method outperforms the traditional image codecs, such as BPG and JPEG2000, as well as other previous artificial-neural-network (ANN) based approaches, in terms of the peak signal-to-noise ratio (PSNR) and multi-scale structural similarity (MS-SSIM) index.
The test code is publicly available at https://github.com/JooyoungLeeETRI/CA_Entropy_Model.
Recently, artificial neural networks (ANNs) have been applied in various areas and have achieved a number of breakthroughs resulting from their superior optimization and representation learning performance.
In particular, for various problems that are sufficiently straightforward that they can be solved within a short period of time by hand, a number of ANN-based studies have been conducted and significant progress has been made.
With regard to image compression, however, relatively slow progress has been made owing to its complicated target problems.
A number of works, focusing on the quality enhancement of reconstructed images, were proposed.
For instance, certain approaches BID4 BID17 BID24 have been proposed to reduce artifacts caused by image compression, relying on the superior image restoration capability of an ANN.
Although it is indisputable that artifact reduction is one of the most promising areas exploiting the advantages of ANNs, such approaches can be viewed as a type of post-processing, rather than image compression itself. Regarding ANN-based image compression, the previous methods can be divided into two types.
First, as a consequence of the recent success of generative models, some image compression approaches targeting the superior perceptual quality BID0 BID16 BID15 have been proposed.
The basic idea here is that learning the distribution of natural images enables a very high compression level without severe perceptual loss by allowing the generation of image components, such as textures, which do not highly affect the structure or the perceptual quality of the reconstructed images.
Although the generated images are very realistic, the acceptability of the machine-created image components eventually becomes somewhat application-dependent.
Meanwhile, a few end-to-end optimized ANN-based approaches (Toderici et al., 2017; BID1 BID18 , without generative models, have been proposed.
In these approaches, unlike traditional codecs comprising separate tools, such as prediction, transform, and quantization, a comprehensive solution covering all functions has been sought after using end-to-end optimization.
Toderici et al. (2017)'s approach exploits a small number of latent binary representations to contain the compressed information in every step, and each step increasingly stacks additional latent representations to achieve a progressive improvement in the quality of the reconstructed images.
A follow-up approach improved the compression performance by enhancing the operation of the networks developed by Toderici et al. (2017).
Although these approaches provided novel frameworks suitable for quality control using a single trained network, the increasing number of iteration steps needed to obtain higher image quality can be a burden for certain applications.
In contrast to the approaches developed by Toderici et al. (2017) and the follow-up approach, which extract binary representations with as high an entropy as possible, BID1, BID18, and subsequent work regard the image compression problem as how to retrieve discrete latent representations having as low an entropy as possible.
In other words, the target problem of the former methods can be viewed as how to include as much information as possible in a fixed number of representations, whereas the latter is simply how to reduce the expected bit-rate when a sufficient number of representations is given, assuming that low entropy corresponds to a small number of bits from the entropy coder.
To solve the second target problem, BID1, BID18, and subsequent work adopt their own entropy models to approximate the actual distributions of the discrete latent representations.
More specifically, BID1 and BID18 proposed novel frameworks that exploit the entropy models, and proved their performance capabilities by comparing the results with those of conventional codecs such as JPEG2000.
Whereas BID1 and BID18 assume that each representation has a fixed distribution, a subsequent approach introduced an input-adaptive entropy model that estimates the scale of the distribution for each representation.
This idea is based on the characteristics of natural images, in which the scales of the representations vary together in adjacent areas.
They provided test results that outperform all previous ANN-based approaches and come very close to those of BPG BID3, which is known as a subset of HEVC (ISO/IEC 23008-2, ITU-T H.265) used for image compression. One of the principal elements in end-to-end optimized image compression is the trainable entropy model used for the latent representations.
Because the actual distributions of latent representations are unknown, the entropy models provide the means to estimate the required bits for encoding the latent representations by approximating their distributions.
When an input image x is transformed into a latent representation y and then uniformly quantized into ŷ, the simple entropy model can be represented by p_ŷ(ŷ), as described in prior work.
When the actual marginal distribution of ŷ is denoted as m(ŷ), the rate estimation, calculated through cross entropy using the entropy model p_ŷ(ŷ), can be represented as shown in equation (1), and can be decomposed into the actual entropy of ŷ and the additional bits owing to a mismatch between the actual distribution and its approximation.
Therefore, decreasing the rate term R during the training process allows the entropy model p_ŷ(ŷ) to approximate m(ŷ) as closely as possible, and lets the other parameters transform x into y properly such that the actual entropy of ŷ becomes small.

$$R \;=\; \mathbb{E}_{\hat{y}\sim m}\big[-\log_2 p_{\hat{y}}(\hat{y})\big] \;=\; \underbrace{H(m)}_{\text{actual entropy}} \;+\; \underbrace{D_{\mathrm{KL}}\!\left(m \,\|\, p_{\hat{y}}\right)}_{\text{mismatch}} \qquad (1)$$

In terms of KL-divergence, R is minimized when p_ŷ(ŷ) becomes perfectly matched with the actual distribution m(ŷ).
This means that the compression performance of the methods essentially depends on the capacity of the entropy model.
To enhance the capacity, we propose a new entropy model that exploits two types of contexts, bit-consuming and bit-free contexts, distinguished according to whether additional bit allocation is required.
Utilizing these two contexts, we allow the model to more accurately estimate the distribution of each latent representation through the use of a more generalized form of the entropy models, and thus more effectively reduce the spatial dependencies among the adjacent latent representations.
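To make the role of such an entropy model concrete, the following sketch estimates the expected bits of uniformly quantized latents under a per-element Gaussian whose mean and scale are predicted from context, which is the generalized form referred to above. The helper `param_net` and the two context tensors are stand-ins; the paper's actual networks and context construction differ.

```python
import torch

def gaussian_rate_bits(y_hat, mu, sigma, eps=1e-9):
    """Estimated bits for uniformly quantized latents y_hat under N(mu, sigma^2).
    The probability of an integer bin is CDF(y+0.5) - CDF(y-0.5)."""
    dist = torch.distributions.Normal(mu, sigma)
    p = dist.cdf(y_hat + 0.5) - dist.cdf(y_hat - 0.5)
    return (-torch.log2(p.clamp(min=eps))).sum()

# Schematic use: contexts -> distribution parameters -> rate term of the R-D loss.
# `ctx_free` stands for previously decoded neighbouring latents (no extra bits),
# `ctx_side` stands for decoded side information (costs extra bits); `param_net`
# is a placeholder for the networks mapping both contexts to (mu, log_sigma).
def rate_term(y, param_net, ctx_free, ctx_side):
    y_hat = torch.round(y)                       # hard quantization for illustration
    mu, log_sigma = param_net(ctx_free, ctx_side).chunk(2, dim=1)
    return gaussian_rate_bits(y_hat, mu, log_sigma.exp())
```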
Figure 1 demonstrates a comparison of the compression results of our method to those of other previous approaches.
The contributions of our work are as follows:
• We propose a new context-adaptive entropy model framework that incorporates the two different types of contexts.
• We provide test results that outperform the widely used conventional image codec BPG in terms of PSNR and MS-SSIM.
• We discuss directions of improvement for the proposed methods in terms of the model capacity and the level of the contexts.
Note that we follow a number of notations from prior work because our approach can be viewed as an extension of it, in that we exploit the same rate-distortion (R-D) optimization framework. The rest of this paper is organized as follows. In Section 2, we introduce the key approaches of end-to-end optimized image compression and propose the context-adaptive entropy model. Section 3 demonstrates the structure of the encoder and decoder models used, and the experimental setup and results are then given in Section 4. Finally, in Section 5, we discuss the current state of our work and directions for improvement.
Based on previous ANN-based image compression approaches utilizing entropy models BID1 BID18 , we extended the entropy model to exploit two different types of contexts.
These contexts allow the entropy models to more accurately estimate the distribution of the representations with a more generalized form having both mean and standard deviation parameters.
Based on the evaluation results, we showed the superiority of the proposed method.
The contexts we utilized are divided into two types.
One is a sort of free context, containing the part of the latent variables known to both the encoder and the decoder, whereas the other is the context, which requires additional bit allocation.
Because the former is a generally used context in a variety of codecs, and the latter was already verified to help compression in prior work, our contributions are not the contexts themselves, but can be viewed as providing a framework of entropy models utilizing these contexts. Although the experiments showed the best results in the ANN-based image compression domain, we still have various studies to conduct to further improve the performance.
One possible way is generalizing the distribution models underlying the entropy model.
Although we enhanced the performance by generalizing the previous entropy models, and have achieved quite acceptable results, the Gaussian-based entropy models apparently have a limited expression power.
If more elaborate models, such as the non-parametric models of BID11 and related work, are combined with the context-adaptivity proposed in this paper, they would provide better results by reducing the mismatch between the actual distributions and the approximation models.
Another possible way is improving the level of the contexts.
Currently, our methods only use low-level representations within very limited adjacent areas.
However, if the sufficient capacity of the networks and higher-level contexts are given, a much more accurate estimation could be possible.
For instance, if an entropy model understands the structures of human faces, in that they usually have two eyes, between which a symmetry exists, the entropy model could approximate the distributions more accurately when encoding the second eye of a human face by referencing the shape and position of the first given eye.
As is widely known, various generative models BID5 BID13 BID25 learn the distribution p(x) of the images within a specific domain, such as human faces or bedrooms.
In addition, various in-painting methods BID12 BID22 BID23 learn the conditional distribution p(x | context) when the viewed areas are given as context.
Although these methods have not been developed for image compression, hopefully such high-level understandings can be utilized sooner or later.
Furthermore, the contexts carried using side information can also be extended to some high-level information such as segmentation maps or any other information that helps with compression.
Segmentation maps, for instance, may be able to help the entropy models estimate the distribution of a representation discriminatively, according to the segment class the representation belongs to. Traditional codecs have a long development history, and a vast number of hand-crafted heuristics have been stacked thus far, not only for enhancing compression performance but also for keeping computational complexity manageable.
Therefore, ANN-based image compression approaches may not provide satisfactory solutions as of yet, when taking their high complexity into account.
However, considering its much shorter history, we believe that ANN-based image compression has much more potential and possibility in terms of future extension.
Although we remain a long way from completion, we hope the proposed context-adaptive entropy model will provide an useful contribution to this area.
Figure caption (appendix): The structure of the hybrid network for higher bit-rate environments; the same notations as in Figure 4 are used. The representation y is divided into two parts and quantized. One of the resulting parts, ŷ₁, is encoded using the proposed model, whereas the other, ŷ₂, is encoded using a simpler model in which only the standard deviations are estimated using side information. The detailed structure of the proposed model is illustrated in FIG3. All concatenation and split operations are performed in a channel-wise manner. | Context-adaptive entropy model for use in end-to-end optimized image compression, which significantly improves compression performance | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:800 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Deep networks often perform well on the data distribution on which they are trained, yet give incorrect (and often very confident) answers when evaluated on points from off of the training distribution.
This is exemplified by the adversarial examples phenomenon but can also be seen in terms of model generalization and domain shift.
Ideally, a model would assign lower confidence to points unlike those from the training distribution.
We propose a regularizer which addresses this issue by training with interpolated hidden states and encouraging the classifier to be less confident at these points.
Because the hidden states are learned, this has an important effect of encouraging the hidden states for a class to be concentrated in such a way so that interpolations within the same class or between two different classes do not intersect with the real data points from other classes.
This has a major advantage in that it avoids the underfitting which can result from interpolating in the input space.
We prove that the exact condition for this problem of underfitting to be avoided by Manifold Mixup is that the dimensionality of the hidden states exceeds the number of classes, which is often the case in practice.
Additionally, this concentration can be seen as making the features in earlier layers more discriminative.
We show that despite requiring no significant additional computation, Manifold Mixup achieves large improvements over strong baselines in supervised learning, robustness to single-step adversarial attacks, semi-supervised learning, and Negative Log-Likelihood on held out samples.
Machine learning systems have been enormously successful in domains such as vision, speech, and language and are now widely used both in research and industry.
Modern machine learning systems typically only perform well when evaluated on the same distribution that they were trained on.
However machine learning systems are increasingly being deployed in settings where the environment is noisy, subject to domain shifts, or even adversarial attacks.
In many cases, deep neural networks which perform extremely well when evaluated on points on the data manifold give incorrect answers when evaluated on points off the training distribution, and with strikingly high confidence. This manifests itself in several failure cases for deep learning.
One is the problem of adversarial examples (Szegedy et al., 2014) , in which deep neural networks with nearly perfect test accuracy can produce incorrect classifications with very high confidence when evaluated on data points with small (imperceptible to human vision) adversarial perturbations.
These adversarial examples could present serious security risks for machine learning systems.
Another failure case involves the training and testing distributions differing significantly.
With deep neural networks, this can often result in dramatically reduced performance. To address these problems, our Manifold Mixup approach builds on the following assumptions and motivations: (1) we adopt the manifold hypothesis, that is, data is concentrated near a lower-dimensional non-linear manifold (this is the only required assumption on the data-generating distribution for Manifold Mixup to work); (2) a neural net can learn to transform the data non-linearly so that the transformed data distribution lies on a nearly flat manifold; (3) as a consequence, linear interpolations between examples in the hidden space also correspond to valid data points, thus providing novel training examples.
The figure's top row (a,b,c) shows the decision boundary on the 2D spirals dataset trained with a baseline model (a fully connected neural network with nine layers, where the middle layer is a 2D bottleneck layer), Input Mixup with α = 1.0, and Manifold Mixup applied only to the 2D bottleneck layer. As seen in (b), Input Mixup can suffer from underfitting, since the interpolations between two samples may intersect with a real sample, whereas Manifold Mixup (c) fits the training data perfectly (a more intuitive example of how Manifold Mixup avoids underfitting is given in Appendix H). The bottom row (d,e,f) shows the hidden states for the baseline, Input Mixup, and Manifold Mixup, respectively. Manifold Mixup concentrates the labeled points from each class into a very tight region, as predicted by our theory (Section 3), and assigns lower-confidence classifications to broad regions of the hidden space. The black points in the bottom row are the hidden states of points sampled uniformly in x-space; Manifold Mixup does a better job of assigning low confidence to these points. Additional results in Figure 6 of Appendix B show that the way Manifold Mixup changes the representations is not accomplished by other well-studied regularizers (weight decay, dropout, batch normalization, and adding noise to the hidden states).
Manifold Mixup performs training on convex combinations of the hidden state representations of data samples; a short sketch of this procedure appears after the list of contributions below. Previous work, including the study of analogies through word embeddings (e.g., king − man + woman ≈ queen), has shown that such linear interpolation between hidden states is an effective way of combining factors (Mikolov et al., 2013). Combining such factors in the higher-level representations has the advantage that the space is typically lower dimensional, so a simple procedure like linear interpolation between pairs of data points explores more of the space, with more of the points having meaningful semantics. When we combine the hidden representations of training examples, we also perform the same linear interpolation in the labels (seen as one-hot vectors or categorical distributions), producing new soft targets for the mixed examples.
In practice, deep networks often learn representations with few strong constraints on how the states can be distributed in the hidden space, so the states can be widely distributed through the space (as seen in the figure above). Moreover, nearly all points in hidden space correspond to high-confidence classifications even if they correspond to off-the-training-distribution samples (seen as black points in the figure above). In contrast, the consequence of our Manifold Mixup approach is that the hidden states from real examples of a particular class are concentrated in local regions and the majority of the hidden space corresponds to lower-confidence classifications. This concentration of the hidden states of the examples of a particular class into local regions enables learning more discriminative features. A low-dimensional example of this can be seen in the figure above, and a more detailed analytical discussion of what "concentrating into local regions" means is in Section 3.
Our method provides the following contributions:
• The introduction of a novel regularizer which outperforms competitive alternatives such as Cutout BID4, Mixup (Zhang et al., 2018), AdaMix BID10, and Dropout (Hinton et al., 2012). On CIFAR-10, this includes a 50% reduction in test Negative Log-Likelihood (NLL) from 0.1945 to 0.0957.
• Manifold Mixup achieves significant robustness to single-step adversarial attacks.
• A new method for semi-supervised learning which uses a Manifold Mixup-based consistency loss. This method reduces error relative to Virtual Adversarial Training (VAT) (Miyato et al., 2018a) by 21.86% on CIFAR-10 and, unlike VAT, does not involve significant additional computation.
• An analysis of Manifold Mixup and exact sufficient conditions for Manifold Mixup to achieve consistent interpolations. Unlike Input Mixup, this does not require strong assumptions about the data distribution (see the failure case of Input Mixup in the figure above): only that the number of hidden units exceeds the number of classes, which is easily satisfied in many applications.
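The sketch below illustrates the mixing procedure referenced before the list above: hidden states and soft targets of a shuffled pair of examples are interpolated at a randomly chosen layer. The `forward_to`/`forward_from` helpers, the Beta(α, α) sampling of the mixing coefficient, and the one-hot targets are illustrative assumptions rather than the authors' released code.

```python
import random
import torch
import torch.nn.functional as F
from torch.distributions import Beta

def manifold_mixup_loss(net, x, y_onehot, alpha=2.0, mixable_layers=(0, 1, 2)):
    """Mix hidden states (and soft targets) of a shuffled pair of examples
    at a randomly chosen layer, then train on the interpolated pair."""
    k = random.choice(mixable_layers)          # layer at which to interpolate
    lam = Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))

    h = net.forward_to(k, x)                   # hidden states at layer k (assumed helper)
    h_mix = lam * h + (1 - lam) * h[perm]      # convex combination of hidden states
    y_mix = lam * y_onehot + (1 - lam) * y_onehot[perm]

    logits = net.forward_from(k, h_mix)        # finish the forward pass from layer k
    return -(y_mix * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
```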
Deep neural networks often give incorrect yet extremely confident predictions on data points which are unlike those seen during training.
This problem is one of the most central challenges in deep learning both in theory and in practice.
We have investigated this from the perspective of the representations learned by deep networks.
In general, deep neural networks can learn representations such that real data points are widely distributed through the space and most of the area corresponds to high confidence classifications.
This has major downsides in that it may be too easy for the network to provide high confidence classification on points which are off of the data manifold and also that it may not provide enough incentive for the network to learn highly discriminative representations.
We have presented Manifold Mixup, a new algorithm which aims to improve the representations learned by deep networks by encouraging most of the hidden space to correspond to low confidence classifications while concentrating the hidden states for real examples onto a lower dimensional subspace.
We applied Manifold Mixup to several tasks and demonstrated improved test accuracy and dramatically improved test likelihood on classification, better robustness to adversarial examples from FGSM attack, and improved semi-supervised learning.
Manifold Mixup incurs virtually no additional computational cost, making it appealing for practitioners. | A method for learning better representations that acts as a regularizer and, despite adding no significant computational cost, achieves improvements over strong baselines on supervised and semi-supervised learning tasks. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:801 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Sometimes SRS (Stereotactic Radio Surgery) requires using sphere packing on a Region of Interest (ROI) such as cancer to determine a treatment plan.
We have developed a sphere packing algorithm which packs non-intersecting spheres inside the ROI.
The region of interest in our case consists of those voxels which are identified as cancerous tissue.
In this paper, we analyze the rotational invariant properties of our sphere-packing algorithm which is based on distance transformations.
Epsilon-rotation invariance means the ability to arbitrarily rotate the 3D ROI while the volume properties remain (almost) the same, within some limit epsilon.
The applied rotations produce sphere packings which remain highly correlated, as we analyze the geometric properties of the sphere packing before and after rotation of the volume data for the ROI.
Our novel sphere-packing algorithm has a high degree of rotation invariance within the range of +/- epsilon.
Our method uses a shape descriptor derived from the values of the disjoint set of spheres produced by the distance-based sphere-packing algorithm to extract an invariant descriptor from the ROI.
We demonstrated these ideas by implementing them on the Slicer3D platform available for our research.
The data is based on stereotactic MRI images.
We present several performance results on different benchmark datasets of over 30 patients in the Slicer3D platform.
In several applications such as inspection of tumor or interacting with portion of a 3D volume data, the ROI could be rotated at arbitrary angles.
If a sphere packing algorithm is used before and after such rotation, then rotational invariance suggests that there might be high correlation between spheres found by our sphere packing algorithm before and after the rotation.
Defining correspondences between the original and rotated ROIs is an important task that could be solved by spheres' descriptors.
If these descriptors are highly correlated, then we can anticipate that the ROIs might be similar as well.
Li et al. (Li & Simske, 2002) stated that translation and scaling are easy compared to rotation.
Rotation of a 3D volume data or 3D image involves simultaneous manipulation of three coordinates to maintain invariance.
In the case of sphere packing, as we capture the ROI with non-intersecting spheres, rotation invariance means that the set of spheres will remain identical in size, although their placement is expected to change under an arbitrary rotation.
There are three major techniques to prove the rotation invariance: landmarking, rotation invariant features/shape extraction descriptor, and brute force rotation alignment.
Landmarking is normally carried out by one of two methods: domain-specific landmarking and generic landmarking (Szeptycki, Ardabilian, & Chen, 2009).
Domain-specific landmarking accepts some fixed point in the image and performs the rotation with respect to that point about an arbitrary axis.
The generic landmarking method, on the other hand, finds the major axes of the 3D/2D image and can rotate the volume or image as a whole when carrying out the rotation.
Because volume data is typically large, both of these approaches require large memory storage, as the complete voxel information is needed, and they are usually time consuming.
The brute-force alignment method divides the object into a large number of smaller parts and works with them for rotation.
This method is time consuming and complex because the parts have to be organized.
The code developed for a particular shape in this method may only apply to the data at hand and may not be generalizable.
Finally, Invariant feature/shape descriptor involves identification of certain invariant features (measurable quantities) that remains unaltered under rotations of the 3D image or volume data.
The invariant features are indexed with a feature vector also known as shape signatures.
Then, the optimal rotation can be defined by measuring the models' similarities in terms of distance, such that the rotation-invariant property means these distance measures are as close to each other as possible, within a certain limit, before and after the rotation.
Many rotation-invariant features have been used in the past, including the ratio of perimeter to area, fractal measures, circularity, min/max/mean curvature, and shape histograms.
Lin et al. (Lin, Khade, & Li, 2012) and Yankov et al. (Yankov, Keogh, Wei, Xi, & Hodges, 2008) use time series representation as a feature vector to match the 3D shapes to prove the rotation invariance.
Based on our research, most studies have used the spherical harmonics method to map the features of objects onto a unit sphere to prove invariance under rotation (Kazhdan, Funkhouser, & Rusinkiewicz, 2003; Nina-Paravecino & Manian, 2010; Vranic, 2003).
The spherical harmonics method does not always give accurate results for distinguishing between models, since the internal parts of the 3D shapes may not fit in the same sphere.
Other researchers combined the spherical harmonic with spatial geometric moments (El Mallahi, Zouhri, El-Mekkaoui, & Qjidaa, 2017; Kakarala & Mao, 2010) .
The most common graph method used is skeletons.
The skeletons are based on medial axis.
The medial axis of 3D objects has been used as a shape descriptor in a number of studies (Iyer, Jayanti, Lou, Kalyanaraman, & Ramani, 2004; Liu, 2009; Lou et al., 2003; Sánchez-Cruz & Bribiesca, 2003; Sundar, Silver, Gagvani, & Dickinson, 2003).
However, this method is sensitive to noise and has a heavy computational cost.
In this paper, we considered the set of spheres as shape-descriptors and analyzed the sphere packing before and after the rotations and looked for the similarity measure.
We aimed to show that the set of spheres is invariant, such that even if we rotate the image, the sizes of the spheres and the distances between their centers are highly correlated.
We used our sphere packing algorithm to pack non-intersecting spheres into the ROIs before and after rotations.
As mentioned earlier, those spheres could provide an invariant shape descriptor.
After rotation the voxels will be populated with the new voxel orientation.
Our shape descriptor provides a novel featureless method that does not depend on any specific feature or texture, but instead relies on the sphere packing generated by our sphere-packing algorithm.
Our method characterizes 3D object similarity by the geometry of the sphere packing, the spheres' correspondence with one another, and their spatial relationships.
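As an illustration of the kind of distance-transform-based packing referred to above, the sketch below greedily places the largest sphere that fits inside the uncovered part of a binary ROI mask and repeats until a coverage target is met. The stopping rule, the coverage threshold, and the use of SciPy's Euclidean distance transform are assumptions, not the exact Slicer3D implementation.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def pack_spheres(roi_mask, target_coverage=0.9, min_radius=1.0):
    """Greedily pack non-intersecting spheres inside a 3D boolean ROI mask.
    Returns a list of (center_zyx, radius) pairs."""
    remaining = roi_mask.astype(bool).copy()
    total = remaining.sum()
    spheres = []
    grid = np.indices(roi_mask.shape)                      # voxel coordinates, shape (3, Z, Y, X)

    while remaining.sum() > (1.0 - target_coverage) * total:
        dist = distance_transform_edt(remaining)           # distance to the nearest uncovered boundary
        radius = dist.max()
        if radius < min_radius:
            break
        center = np.unravel_index(np.argmax(dist), dist.shape)
        spheres.append((center, float(radius)))
        # Remove the voxels covered by the new sphere so later spheres cannot intersect it.
        d2 = sum((grid[i] - center[i]) ** 2 for i in range(3))
        remaining &= d2 > radius ** 2
    return spheres
```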
In this paper, we show that our previous work for sphere packing (Anonymous, 2019) can be used to show the invariance under rotation since our algorithm can describe volumetric shapes more succinctly than voxel representation.
In this work, the sphere packing, together with the radius and center functions, provides a shape descriptor: a novel approach for characterization and compression of shape information for 3D volume and voxel data.
The sphere radii work best in our study for finding the similarity after rotation.
Even though there are differences between the total number of calculated distances before and after rotation, our algorithm's accuracy is reasonably high because it is able to calculate nearly identical radii each time, within epsilon.
The consistency of the sphere radii arises because our algorithm at each iteration picks the maximum-radius distance first, so increasing the number of packed spheres to cover the required voxels based on the desired packing density does not affect the epsilon value.
Topology changes due to equal-sized spheres are the main reason for the increase in the epsilon value of the distance ratios.
The algorithm's decision of which equally sized sphere to place first is the main issue here.
Therefore, increasing the number of packed spheres will significantly increase the changes in topology, which results in an increased epsilon value.
When the radii are equal, the descriptor graph before and after rotation might change considerably based on which sphere our algorithm suggests.
In this case, we will need to aggregate all those spheres which are equal and replace them in the shape descriptor by the average of their centers, so that the epsilon (e) value will be similar in the shape description before and after the rotation.
Thus, a set of spheres whose radii are equal is replaced with one sphere.
That is expected to reduce the epsilon (e) value further.
Moreover, the topology changing in our study affect our accuracy results.
We believe that eliminating equal spheres by using the enclosing sphere in our implementation will decrease the distance ratio results comparing the shape descriptors before and after the rotation of the ROI.
Our novel medical visualization techniques promise to improve the efficiency, diagnostic quality and the treatment.
The field of 3D shape approximation and similarity has been a focus in the area of geometry for several hundred years now.
Shape analysis for feature extraction is the key problem in shape approximation and similarity.
The best way for similarity matching is to identify certain shape signatures (prominent features in the image).
These signatures are then compared between the transformed images through similarity assessment, distance computation or any other appropriate methods.
This paper presented a method for defining a possible invariant shape descriptor from 3D-image or 3D-volume data to be used to match the objects from different rotations/viewpoints.
Our method can be applied to a wide variety of data types such as 2D images and even polygonal meshes.
Our heuristic is e-invariant and achieves an impressive 96% invariance under rotation.
The experimental results prove the effectiveness of our novel idea.
The proposed system was fully implemented in software in Slicer3D and has been tested on a database of 30 patients.
For future work, we will apply other measures, such as 3D spatial sorting based on the spheres found, or identifying a minimal-volume enclosing sphere surrounding all spheres of equal radius (as mentioned earlier), to further improve the epsilon (e) value.
Moreover, as Slicer3D is experimental, not FDA approved, yet used worldwide, our plan is to release our implementation under a BSD license so that communities worldwide can try the system and provide more feedback using their 3D volume data and report the e-value for their data. | Packing a region of interest (ROI) such as cancerous regions identified in 3D volume data, packing spheres inside the ROI, rotating the ROI, and measuring differences in the sphere packing before and after the rotation. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:802 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We show implicit filter level sparsity manifests in convolutional neural networks (CNNs) which employ Batch Normalization and ReLU activation, and are trained using adaptive gradient descent techniques with L2 regularization or weight decay.
Through an extensive empirical study (Anonymous, 2019), we hypothesize the mechanism behind the sparsification process.
We find that the interplay of various phenomena influences the strength of the L2 and weight decay regularizers, leading these supposedly non-sparsity-inducing regularizers to induce filter sparsity.
In this workshop article we summarize some of our key findings and experiments, and present additional results on modern network architectures such as ResNet-50.
In this article we discuss the findings from BID7 regarding filter level sparsity which emerges in certain types of feedforward convolutional neural networks.
Filter refers to the weights and the nonlinearity associated with a particular feature, acting together as a unit.
We use filter and feature interchangeably throughout the document.
We particularly focus on presenting evidence for the implicit sparsity, our experimentally backed hypotheses regarding the cause of the sparsity, and discuss the possible role such implicit sparsification plays in the adaptive vs vanilla (m)SGD generalization debate.
For implications on neural network speed-up, refer to the original paper BID7. In networks which employ Batch Normalization and ReLU activation, after training, certain filters are observed to not activate for any input. Importantly, the sparsity emerges in the presence of regularizers such as L2 and weight decay (WD), which are in general understood to be non-sparsity-inducing, and the sparsity vanishes when regularization is removed.
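As an illustration of how such filter-level sparsity can be quantified after training, the sketch below counts a filter as inactive when its post-BatchNorm, post-ReLU response stays (numerically) at zero over an entire held-out loader of (image, label) batches. The threshold and the hook-based measurement are illustrative choices, not the paper's exact protocol.

```python
import torch

@torch.no_grad()
def count_inactive_filters(model, data_loader, device="cpu", eps=1e-12):
    """Fraction of BN+ReLU filters whose response is ~0 on the whole loader."""
    max_resp = {}                                              # layer name -> per-filter max activation
    hooks = []

    def make_hook(name):
        def hook(module, inp, out):
            act = torch.relu(out)                              # response after the ReLU nonlinearity
            per_filter = act.amax(dim=(0, 2, 3))               # max over batch and spatial dims
            prev = max_resp.get(name)
            max_resp[name] = per_filter if prev is None else torch.maximum(prev, per_filter)
        return hook

    for name, m in model.named_modules():
        if isinstance(m, torch.nn.BatchNorm2d):                # measure after BN, before the ReLU we apply here
            hooks.append(m.register_forward_hook(make_hook(name)))

    model.eval().to(device)
    for x, _ in data_loader:
        model(x.to(device))
    for h in hooks:
        h.remove()

    inactive = sum(int((v <= eps).sum()) for v in max_resp.values())
    total = sum(v.numel() for v in max_resp.values())
    return inactive / max(total, 1)
```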
Our findings would help practitioners and theoreticians be aware that seemingly unrelated hyperparameters can inadvertently affect the underlying network capacity, which interplays with both the test accuracy and generalization gap, and could partially explain the practical performance gap between Adam and SGD.
Our work opens up future avenues of theoretical and practical exploration to further validate our hypotheses, and to attempt to understand the emergence of feature selectivity in Adam and other adaptive SGD methods. As for network speed-up due to sparsification, the penalization of selective features can be seen as a greedy local search heuristic for filter pruning.
While the extent of implicit filter sparsity is significant, it obviously does not match up with some of the more recent explicit sparsification approaches BID1 BID3 which utilize more expensive model search and advanced heuristics such as filter redundancy.
Future work should reconsider the selective-feature pruning criterion itself, and examine non-selective features as well, which putatively carry comparably low discriminative information and could also be pruned.
These non-selective features are however not captured by greedy local search heuristics because pruning them can have a significant impact on the accuracy.
Though the accuracy can presumably be recouped after fine-tuning. | Filter level sparsity emerges implicitly in CNNs trained with adaptive gradient descent approaches due to various phenomena, and the extent of sparsity can be inadvertently affected by different seemingly unrelated hyperparameters. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:803 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Humans can learn a variety of concepts and skills incrementally over the course of their lives while exhibiting an array of desirable properties, such as non-forgetting, concept rehearsal, forward transfer and backward transfer of knowledge, few-shot learning, and selective forgetting.
Previous approaches to lifelong machine learning can only demonstrate subsets of these properties, often by combining multiple complex mechanisms.
In this Perspective, we propose a powerful unified framework that can demonstrate all of the properties by utilizing a small number of weight consolidation parameters in deep neural networks.
In addition, we are able to draw many parallels between the behaviours and mechanisms of our proposed framework and those surrounding human learning, such as memory loss or sleep deprivation.
This Perspective serves as a conduit for two-way inspiration to further understand lifelong learning in machines and humans.
Humans have a sustained ability to acquire knowledge and skills, refine them on the basis of novel experiences, and transfer them across domains over a lifespan [1] [2] [3] .
It is no surprise that the learning abilities of humans have been inspiring machine learning approaches for decades.
Further extending the influence of human learning on machine learning, continual or lifelong learning (LLL) in particular [4] , is the goal of this work.
To mimic human continual concept learning scenarios, in this paper we propose a new learning setting in which one concept-learning task is presented at a time.
Each classification task and its corresponding labeled data are presented one at a time in sequence.
As an example, assume we are given the task sequence of learning to classify hand-written characters of "1", "2", "3", and so on, by providing training examples of "1", "2", "3" in sequence (such as those in Figure 1 ).
When learning "1", only data of "1" is available.
When learning of "2", data of "2" becomes available, and so on.
We assume that these concepts are mutually exclusive (i.e. any sample can be a positive sample for at most one task).
We also assume a set of "background negative" examples are given -they are presumably the previously learned concepts.
This setting is in stark contrast to the standard setup in multi-class machine learning, where it is assumed that all training data of all classes is readily available, as a "batch" mode.
For the example task sequence in this section, we can consider the four classes to be "1", "2", "3", and "I".
The negative samples used when learning to identify the three positive classes can come from a domain-appropriate non-overlapping "background" set (in this case, lower-case letters).
After a model is trained up to class i, it will not have access to those training samples unless absolutely necessary (see later).
The samples shown are from the EMNIST dataset [5] .
Under this continual learning setting, we hope that an LLL approach would exhibit the following properties:
Non-forgetting: This is the ability to avoid catastrophic forgetting [6] of old tasks upon learning new tasks.
For example, when learning the second task "2", with only training data on "2" (and without using the data of the task 1), "2" should be learned without forgetting how to classify task 1 learned earlier.
Due to the tendency towards catastrophic forgetting, non-lifelong learning approaches would require retraining on data for "1" and "2" together to avoid forgetting.
A skill opposite to non-forgetting is selective forgetting.
As we will describe further, learning new tasks may require expansion of the neural network, and when this is not possible, the model can perform selective forgetting to free up capacity for new tasks.
Forward transfer: This is the ability to learn new tasks easier and better following earlier learned tasks.
For example, after learning the task of classifying "1", it would be easier (requiring less training data for the same or higher predictive accuracy) to learn to classify "I".
Achieving sufficient forward transfer opens the door to few-shot learning of later tasks.
Non-confusion: Machine learning algorithms find discriminating features only as robust as they need to be to minimize a loss; thus, when more tasks emerge for learning, earlier learned features may not be sufficient, leading to confusion between classes.
For example, after learning "1" and "2" as the first two tasks, the learned model may say "with only straight stroke" is "1" and "with curved stroke" is "2".
But when learning "I" as a later new task, the model may rely only on the presence of a straight stroke again, leading to confusion between "1" and "I"
when the model is finally tested.
To resolve such confusion between "1" and "I", samples of both "1" and "I" are needed to be seen together during training so that discriminating features may be found.
In humans, this type of confusion may be seen when we start learning to recognize animals for example.
To distinguish between common distinct animals such as birds and dogs, only features such as size or presence of wings is sufficient, ignoring finer features such as facial shape.
However, when we next learn to identify cats, we must use the previous data on dogs and new data on cats to identify finer features (such as facial shape) to distinguish them.
Backward transfer: This is knowledge transfer in the opposite direction as forward transfer.
When new tasks are learned, they may, in turn, help to improve the performance of old tasks.
This is analogous to an "overall review" before a final exam, after materials of all chapters have been taught and learned.
Later materials can often help better understand earlier materials.
Past works on LLL have only focused on subsets of the aforementioned properties.
For example, an approach inspiring our own, Elastic Weight Consolidation (EWC) [7], focuses only on non-forgetting.
The approach of [8] considers non-forgetting as well as forward and backward transfer and confusion reduction, but does not allow for selective forgetting.
Figure 2 illustrates the scope of our framework compared with related previous approaches.
Section 4 contains a more detailed comparison.
In this paper, we provide a general framework of LLL in deep neural networks where all of these abilities can be demonstrated.
Deep neural networks, which have become popular in recent years, are an attractive type of machine learning model due to their ability to automatically learn abstract features from data.
Weights (strengths of links between neurons) of a network can be modified by the backpropagation algorithm to minimize the total error between the desired output and the actual output at the output layer.
In our study, we consider fully-connected neural networks with two hidden layers to illustrate our LLL approach.
The basic idea of our unified framework, similar to EWC [7], is to utilize "stiffness" parameters of weights during training phases to achieve the various LLL properties such as non-forgetting, forward transfer, etc.
For each lifelong learning property, a subset of the network's weights may be "frozen", another subset may be "free" to change, and yet another may be "easily changed", depending on the type of lifelong learning property we are aiming to facilitate at the time.
EWC and its conceptual successors [11] [12] [13] are lifelong learning approaches which estimate the importance of each weight in a network for maintaining task performance.
By preventing already important weights from changing to accommodate new tasks (i.e. consolidating weights), catastrophic forgetting can be reduced.
Generally speaking, each network weight, θ_i, is associated with a consolidation value, b_i, which can be set or tuned for each stage of learning as we will soon discuss.
(Figure caption fragment: BWT+: positive backward transfer, FWT: forward transfer. We expect our approach to outperform the related approaches of EWC [7] and PNN [10] on a majority of the metrics.)
When training a model with EWC, we combine the original loss L t with weight consolidation as follows:
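The combined objective itself is not reproduced in this excerpt; a minimal reconstruction, assuming the standard quadratic consolidation penalty used by EWC-style methods and the terms defined just below, is:

```latex
\mathcal{L}(\theta^{t}) = \mathcal{L}_{t}(\theta^{t}) + \lambda \sum_{i} b_{i} \left( \theta^{t}_{i} - \theta^{\mathrm{target}}_{i} \right)^{2}
```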
Here, θ_i^target is the consolidation target value for a weight, θ_i^t is the weight value being updated during training on task t, and λ is used to balance the importance of the two loss components.
Clearly, a large b value indicates that changing the weight is strongly penalized during training, whereas a value of 0 indicates that the weight is free to change.
In our approach, we use three values for b to control the flexibility of different sets of network weights:
• b_nf for non-forgetting (ideally a very large value),
• b_tr for forward transfer (ideally very small or zero),
• and b_free for freely tunable weights (ideally very small).
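As an illustration only (not the authors' released code), the sketch below shows how such group-specific consolidation strengths could enter the training loss; the group names, b values, and helper function are hypothetical:

```python
import torch

def consolidation_penalty(params, targets, strengths, lam=1.0):
    """Quadratic consolidation: lam * sum_g b_g * ||theta_g - theta_g_target||^2."""
    penalty = torch.zeros(())
    for name, param in params.items():
        b = strengths.get(name, 0.0)            # b = 0 means the weight is free to change
        if b > 0:
            penalty = penalty + b * ((param - targets[name]) ** 2).sum()
    return lam * penalty

# Hypothetical usage: an "old" block consolidated for non-forgetting (b_nf) and a
# "new" block left freely tunable (b_free); the b values are illustrative.
layer_old = torch.nn.Linear(4, 4)
layer_new = torch.nn.Linear(4, 4)
params = {"old.weight": layer_old.weight, "new.weight": layer_new.weight}
targets = {k: v.detach().clone() for k, v in params.items()}
strengths = {"old.weight": 1e3,   # b_nf: strongly penalize changes
             "new.weight": 0.0}   # b_free: no penalty
task_loss = torch.zeros(())       # stands in for the current task loss L_t
total_loss = task_loss + consolidation_penalty(params, targets, strengths, lam=1.0)
total_loss.backward()
```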
While the individual weights of the network are learned via back-propagation, these consolidation hyperparameters are set by several heuristic strategies.
We will illustrate how, by changing these hyperparameters to control the stiffness of weights in different parts of the deep neural network, our approach can achieve all of the LLL abilities mentioned above (Section 2).
As we mentioned, these hyperparameters are determined during LLL by heuristic strategies, but one might wonder if these heuristics can be learned.
Our comparison between lifelong learning of machines and humans suggests that our model hyperparameters are probably intrinsic to the physiology of the brain, a product of natural evolution.
A person can consciously perform meta-learning, such as during memory training and explicit rehearsal, in which case these heuristics may be explicitly learned or fine-tuned.
We leave this for future study.
In this work, we presented a unified approach for lifelong learning.
This approach tackles a difficult problem that captures many important aspects of human learning, namely non-forgetting, forward transfer, confusion reduction, backward transfer, few-shot learning, and selective forgetting.
Progress in this area is critical for the development of computationally efficient and flexible machine learning algorithms.
The success at this problem reduces the demand for training data while a single model learns to solve more and more tasks.
While previous works have focused on a subset of these lifelong learning skills, our proposed approach utilizes a single mechanism, controlling weight consolidation, to address all of the considered skills.
We define only a small number of consolidation hyperparameters which are dynamically applied to groups of weights.
In addition to describing the novel approach, we examine its parallels with human learning.
We note several similarities between the response of our model to hyperparameter settings and the effects of analogous changes in the brain on human learning. | Drawing parallels with human learning, we propose a unified framework to exhibit many lifelong learning abilities in neural networks by utilizing a small number of weight consolidation parameters. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:804 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Limited angle CT reconstruction is an under-determined linear inverse problem that requires appropriate regularization techniques to be solved.
In this work we study how pre-trained generative adversarial networks (GANs) can be used to clean noisy, highly artifact laden reconstructions from conventional techniques, by effectively projecting onto the inferred image manifold.
In particular, we use a robust version of the popularly used GAN prior for inverse problems, based on a recent technique called corruption mimicking, that significantly improves the reconstruction quality.
The proposed approach operates in the image space directly, as a result of which it does not need to be trained or require access to the measurement model, is scanner agnostic, and can work over a wide range of sensing scenarios.
Computed Tomography (CT) reconstruction is the process of recovering the structure and density of objects from a series of x-ray projections, called sinograms.
While traditional full-view CT is relatively easier to solve, the problem becomes under-determined in two crucial scenarios often encountered in practice: (a) few-view, when the number of available x-ray projections is very small, and (b) limited-angle, when the total angular range is less than 180 degrees, as a result of which most of the object of interest is invisible to the scanner.
These scenarios arise in applications which require the control of x-ray dosage to human subjects, limiting the cost by using fewer sensors, or handling structural limitations that restrict how an object can be scanned.
When such constraints are not extreme, suitable regularization schemes can help produce artifact-free reconstructions.
While the design of such regularization schemes is typically driven by priors from the application domain, they are found to be insufficient in practice under both few-view and limited-angle settings.
In the recent years, there is a surge in research interest to utilize deep learning approaches for challenging inverse problems, including CT reconstruction [1, 2, 3] .
These networks implicitly learn to model the manifold of CT images, hence resulting in higher fidelity reconstruction, when compared to traditional methods such as Filtered Backprojection (FBP), or Regularized Least Squares (RLS), for the same number of measurements.
While these continue to open new opportunities in CT reconstruction, they rely on directly inferring mappings between sinograms and the corresponding CT images, in lieu of regularized optimization strategies.
However, the statistics of sinogram data can vary significantly across different scanner types, thus rendering reconstruction networks trained on one scanner ineffective for others.
Furthermore, in practice, the access to the sinogram data for a scanner could be restricted in the first place.
This naturally calls for entirely image-domain methods that do not require access to the underlying measurements.
In this work, we focus on the limited-angle scenario, which is known to be very challenging due to missing information.
Instead of requiring sinograms or scanner-specific representations, we pursue an alternate solution that is able to directly work in the image domain, with no pairwise (sinogram-image) training necessary.
To this end, we advocate the use of generative adversarial networks (GANs) [4] as image manifold priors.
GANs have emerged as a powerful, unsupervised technique to parameterize high dimensional image distributions, allowing us to sample from these spaces to produce very realistic looking images.
We train the GAN to capture the space of all possible reconstructions using a training set of clean CT images.
Next, we obtain an initial seed reconstruction using an existing technique such as Filtered Back Projection (FBP) or Regularized Least Squares (RLS) and 'clean' it by projecting it onto the image manifold, which we refer to as the GAN prior following [6].
Since the final reconstruction is always forced to be from the manifold, it is expected to be artifact-free.
More specifically, this process involves sampling from the latent space of the GAN, in order to find an image that resembles the seed image.
Though this has been conventionally carried out using projected gradient descent (PGD) [5, 6 ], as we demonstrate in our results, this approach performs poorly when the initial estimate is too noisy or has too many artifacts, which is common under extremely limited angle scenarios.
Instead, our approach utilizes a recently proposed technique referred to as corruption mimicking, used in the design of MimicGAN [7] , that achieves robustness to the noisy seed reconstruction through the use of a randomly initialized shallow convolutional neural network (CNN), in addition to PGD.
By modeling the initial guess of this network as a random corruption for the unknown clean image, the process of corruption mimicking alternates between estimating the unknown corruption and finding the clean solution, and this alternating optimization is repeated until convergence, in terms of effectively matching the observed noisy data.
The resulting algorithm is test time only, and can operate in an artifact-agnostic manner, i.e. it can clean images that arise from a large class of distortions like those obtained from various limited-angle reconstructions.
Furthermore, it reduces to the well-known PGD style of projection, when the CNN is replaced by an identity function.
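A rough sketch of this alternating scheme is shown below, assuming a pre-trained generator G that maps latent codes to single-channel images and a noisy seed reconstruction y_seed; the surrogate network shape, step sizes, and iteration counts are illustrative and are not the authors' implementation:

```python
import torch

def robust_gan_project(G, y_seed, latent_dim=64, outer_steps=20, inner_steps=10, lr=1e-2):
    """Alternately fit a shallow 'corruption' CNN f and a latent code z so that
    f(G(z)) matches the artifact-laden seed; G(z) is returned as the cleaned image."""
    z = torch.randn(1, latent_dim, requires_grad=True)
    f = torch.nn.Sequential(                     # shallow surrogate for the unknown corruption
        torch.nn.Conv2d(1, 8, 3, padding=1), torch.nn.ReLU(),
        torch.nn.Conv2d(8, 1, 3, padding=1))
    opt_z = torch.optim.Adam([z], lr=lr)
    opt_f = torch.optim.Adam(f.parameters(), lr=lr)
    for _ in range(outer_steps):
        for _ in range(inner_steps):             # update the corruption estimate
            opt_f.zero_grad()
            loss = ((f(G(z).detach()) - y_seed) ** 2).mean()
            loss.backward(); opt_f.step()
        for _ in range(inner_steps):             # update the latent code (PGD-style projection)
            opt_z.zero_grad()
            loss = ((f(G(z)) - y_seed) ** 2).mean()
            loss.backward(); opt_z.step()
    return G(z).detach()                         # reconstruction restricted to the GAN manifold
```

With f replaced by the identity, the loop collapses to the vanilla PGD projection mentioned above.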
In Figures 1 and 2, we show qualitative and quantitative results obtained for the MNIST and Fashion-MNIST datasets, respectively.
In both cases, we demonstrate significant improvements in recovering the true reconstruction compared to the vanilla GAN prior.
It should be noted that a performance boost of nearly 4-5 dB on MNIST and 0.5-1dB on Fashion-MNIST are achieved with no additional information or data, but due to the inclusion of the robust GAN prior.
Additionally, PSNR and SSIM tend to be uncorrelated with perceptual metrics in many cases, as perceptually poor reconstructions can be deceptively close in PSNR or SSIM.
A potential fix in GAN-based reconstruction approaches is to compute error in the discriminator feature space as a proxy for perceptual quality.
[8]: Given the RLS reconstruction, we improve it by projecting onto the image manifold using corruption mimicking [7].
In all cases, we show the improvement obtained by using the robust GAN prior over a standard GAN projection. | We show that robust GAN priors work better than GAN priors for limited angle CT reconstruction which is a highly under-determined inverse problem. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:805 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We propose an effective multitask learning setup for reducing distant supervision noise by leveraging sentence-level supervision.
We show how sentence-level supervision can be used to improve the encoding of individual sentences, and to learn which input sentences are more likely to express the relationship between a pair of entities.
We also introduce a novel neural architecture for collecting signals from multiple input sentences, which combines the benefits of attention and maxpooling.
The proposed method increases AUC by 10% (from 0.261 to 0.284), and outperforms recently published results on the FB-NYT dataset.
Early work in relation extraction from text used fully supervised methods, e.g., BID2 , which motivated the development of relatively small datasets with sentence-level annotations such as ACE 2004 BID2 , BioInfer and SemEval 2010 .
Recognizing the difficulty of annotating text with relations, especially when the number of relation types of interest is large, BID16 pioneered the distant supervision approach to relation extraction, where a knowledge base (KB) and a text corpus are used to automatically generate a large dataset of labeled sentences which is then used to train a relation classifier.
Distant supervision provides a practical alternative to manual annotations, but introduces many noisy examples.
Although many methods have been proposed to reduce the noise in distantly supervised models for relation extraction (e.g., BID8 BID23 BID22 BID5 BID27 BID11), a rather obvious approach has been understudied: using sentence-level supervision to augment distant supervision.
Intuitively, supervision at the sentence level can help reduce the noise in distantly supervised models by identifying which of the input sentences for a given pair of entities are likely to express a relation. We experiment with a variety of model architectures to combine sentence- and bag-level supervision and find it most effective to use the sentence-level annotations to directly supervise the sentence encoder component of the model in a multi-task learning framework.
We also introduce a novel maxpooled attention architecture for combining the evidence provided by different sentences where the entity pair is mentioned, and use the sentence-level annotations to supervise attention weights.
The contributions of this paper are as follows:
• We propose an effective multitask learning setup for reducing distant supervision noise by leveraging existing datasets of relations annotated at the sentence level.
• We propose maxpooled attention, a neural architecture which combines the benefits of maxpooling and soft attention, and show that it helps the model combine information about a pair of entities from multiple sentences.
• We release our library for relation extraction as open source.
The following section defines the notation we use, describes the problem and provides an overview of our approach.
We propose two complementary methods to improve performance and reduce noise in distantly supervised relation extraction.
The first is incorporating sentence-level supervision and the second is maxpooled attention, a novel form of attention.
The sentence-level supervision improves sentence encoding and provides supervision for attention weights, while maxpooled attention effectively combines sentence encodings and their weights into a bag encoding.
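The excerpt does not give the exact formula for maxpooled attention; the snippet below is one plausible reading (a dimension-wise max over attention-weighted sentence encodings), shown purely as an illustrative sketch rather than the paper's definition:

```python
import torch

def maxpooled_attention(sent_enc, att_logits):
    """Combine per-sentence encodings into a bag encoding.

    sent_enc:   (num_sentences, dim) sentence encodings
    att_logits: (num_sentences,) unnormalized attention scores
    Returns a (dim,) bag encoding: elementwise max over weighted encodings.
    This is an assumed formulation, not necessarily the paper's exact definition.
    """
    weights = torch.softmax(att_logits, dim=0)     # soft attention over sentences
    weighted = weights.unsqueeze(1) * sent_enc     # (num_sentences, dim)
    bag_encoding, _ = weighted.max(dim=0)          # maxpooling across sentences
    return bag_encoding

bag = maxpooled_attention(torch.randn(5, 16), torch.randn(5))
```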
Our experiments show a 10% improvement in AUC (from 0.261 to 0.284) outperforming recently published results on the FB-NYT dataset . | A new form of attention that works well for the distant supervision setting, and a multitask learning approach to add sentence-level annotations. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:806 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
The attention layer in a neural network model provides insights into the model's reasoning behind its predictions; such models are otherwise usually criticized for being opaque.
Recently, seemingly contradictory viewpoints have emerged about the interpretability of attention weights (Jain & Wallace, 2019; Vig & Belinkov, 2019).
Amid such confusion arises the need to understand attention mechanism more systematically.
In this work, we attempt to fill this gap by giving a comprehensive explanation which justifies both kinds of observations (i.e., when is attention interpretable and when it is not).
Through a series of experiments on diverse NLP tasks, we validate our observations and reinforce our claim of interpretability of attention through manual evaluation.
Attention is a way of obtaining a weighted sum of the vector representations of a layer in a neural network model (Bahdanau et al., 2015) .
It is used in diverse tasks ranging from machine translation (Luong et al., 2015) , language modeling (Liu & Lapata, 2018) to image captioning (Xu et al., 2015) , and object recognition (Ba et al., 2014) .
Apart from substantial performance benefit (Vaswani et al., 2017) , attention also provides interpretability to neural models (Wang et al., 2016; Lin et al., 2017; Ghaeini et al., 2018) which are usually criticized for being black-box function approximators (Chakraborty et al., 2017) .
There has been substantial work on understanding attention in neural network models.
On the one hand, there is work showing that attention weights are not interpretable, and that altering them does not significantly affect the prediction (Jain & Wallace, 2019; Serrano & Smith, 2019).
While on the other hand, some studies have discovered how attention in neural models captures several linguistic notions of syntax and coreference (Vig & Belinkov, 2019; Clark et al., 2019; Tenney et al., 2019) .
Amid such contrasting views arises a need to understand the attention mechanism more systematically.
In this paper, we attempt to fill this gap by giving a comprehensive explanation which justifies both kinds of observations.
The conclusions of Jain & Wallace (2019) and Serrano & Smith (2019) have been based mostly on text classification experiments, which might not generalize to several other NLP tasks.
In Figure 1 , we report the performance on text classification, Natural Language Inference (NLI) and Neural Machine Translation (NMT) of two models: one trained with neural attention and the other trained with attention weights fixed to a uniform distribution.
The results show that the attention mechanism in text classification does not have an impact on the performance, thus, making inferences about interpretability of attention in these models might not be accurate.
However, on tasks such as NLI and NMT, uniform attention weights degrade the performance substantially, indicating that attention is a crucial component of the model for these tasks, and hence the analysis of attention's interpretability is more reasonable here.
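As a toy illustration of this diagnostic (not the authors' code), the same pooling layer can be run with learned attention scores or with weights frozen to a uniform distribution:

```python
import torch

def attention_pool(values, scores, uniform=False):
    """Pool a sequence of vectors with attention.

    values: (seq_len, dim) hidden states; scores: (seq_len,) learned attention logits.
    With uniform=True the learned scores are ignored and every position gets weight
    1/seq_len, mimicking the frozen-uniform-attention baseline described above.
    """
    if uniform:
        weights = torch.full((values.shape[0],), 1.0 / values.shape[0])
    else:
        weights = torch.softmax(scores, dim=0)
    return weights @ values    # (dim,) context vector

h = torch.randn(7, 32)
s = torch.randn(7)
ctx_learned = attention_pool(h, s)
ctx_uniform = attention_pool(h, s, uniform=True)
```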
In comparison to the existing work on interpretability, we analyze attention mechanism on a more diverse set of NLP tasks that include text classification, pairwise text classification (such as NLI), and text generation tasks like neural machine translation (NMT).
Moreover, we do not restrict ourselves to a single attention mechanism and also explore models with self-attention.
For examining the interpretability of attention weights, we perform manual evaluation.
Our key contributions are:
1. We extend the analysis of attention mechanism in prior work to diverse NLP tasks and provide a comprehensive picture which alleviates seemingly contradicting observations.
2. We identify the conditions when attention weights are interpretable and correlate with feature importance measures -when they are computed using two vectors which are both functions of the input (Figure 1b, c) .
We also explain why attention weights are not interpretable when the input has only a single sequence (Figure 1a), an observation made by Jain & Wallace (2019), by showing that they can be viewed as a gating unit.
3. We validate our hypothesis of interpretability of attention through manual evaluation. | Analysis of attention mechanism across diverse NLP tasks. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:807 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Generative models for source code are an interesting structured prediction problem, requiring reasoning about both hard syntactic and semantic constraints as well as about natural, likely programs.
We present a novel model for this problem that uses a graph to represent the intermediate state of the generated output.
Our model generates code by interleaving grammar-driven expansion steps with graph augmentation and neural message passing steps.
An experimental evaluation shows that our new model can generate semantically meaningful expressions, outperforming a range of strong baselines.
Learning to understand and generate programs is an important building block for procedural artificial intelligence and more intelligent software engineering tools.
It is also an interesting task in the research of structured prediction methods: while imbued with formal semantics and strict syntactic rules, natural source code carries aspects of natural languages, since it acts as a means of communicating intent among developers.
Early works in the area have shown that approaches from natural language processing can be applied successfully to source code BID11 , whereas the programming languages community has had successes in focusing exclusively on formal semantics.
More recently, methods handling both modalities (i.e., the formal and natural language aspects) have shown successes on important software engineering tasks BID22 BID4 BID1 and semantic parsing (Yin & Neubig, 2017; BID20).
However, current generative models of source code mostly focus on only one of these modalities at a time.
For example, program synthesis tools based on enumeration and deduction BID24 BID19 BID8 BID7 are successful at generating programs that satisfy some (usually incomplete) formal specification but are often obviously wrong on manual inspection, as they cannot distinguish unlikely from likely, "natural" programs.
On the other hand, learned code models have succeeded in generating realistic-looking programs (BID17 BID5 BID18 BID20 Yin & Neubig, 2017).
However, these programs often fail to be semantically relevant, for example because variables are not used consistently.
In this work, we try to overcome these challenges for generative code models and present a general method for generative models that can incorporate structured information that is deterministically available at generation time.
We focus our attention
on generating source code and follow the ideas of program graphs BID1 ) that have been shown to learn semantically meaningful representations of (pre-existing) programs. To achieve this, we lift
grammar-based tree decoder models into the graph setting, where the diverse relationships between various elements of the generated code can be modeled. For this, the syntax tree
under generation is augmented with additional edges denoting known relationships (e.g., last use of variables). We then interleave the steps
of the generative procedure with neural message passing BID9 to compute more precise representations of the intermediate states of the program generation. This is fundamentally different
from sequential generative models of graphs BID14 BID23, which aim to generate all edges and nodes, whereas our graphs are deterministic augmentations of generated trees.
To summarize, we present (a) a general graph-based generative procedure for highly structured objects, incorporating rich structural information, and (b) ExprGen, a new code generation task focused on generating expressions from the surrounding, possibly imprecise, code context.
We presented a generative code model that leverages known semantics of partially generated programs to direct the generative procedure.
The key idea is to augment partial programs to obtain a graph, and then use graph neural networks to compute a precise representation for the partial program.
This representation then helps to better guide the remainder of the generative procedure.
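A highly simplified, non-neural sketch of such an interleaved expand-and-augment loop is shown below; the toy grammar and the random rule choice stand in for the grammar-driven decoder and the message-passing-based rule scorer described above, and are not the paper's actual interfaces:

```python
import random

GRAMMAR = {"Expr": [["Expr", "+", "Expr"], ["Var"], ["Lit"]]}   # toy grammar

def generate_expression(max_nodes=20):
    """Grammar-driven expansion interleaved with graph augmentation steps."""
    nodes, edges, frontier = ["Expr"], [], [0]        # syntax tree as node/edge lists
    while frontier and len(nodes) < max_nodes:
        nid = frontier.pop(0)
        symbol = nodes[nid]
        if symbol not in GRAMMAR:                     # terminal: nothing to expand
            continue
        # In the real model, a message-passing pass over (nodes, edges) would produce
        # node state vectors here, and the next rule would be scored from them.
        rule = random.choice(GRAMMAR[symbol])
        for child_symbol in rule:
            child_id = len(nodes)
            nodes.append(child_symbol)
            edges.append((nid, child_id, "child"))    # tree edge
            edges.append((child_id, nid, "parent"))   # deterministic augmentation edge
            if child_symbol in GRAMMAR:
                frontier.append(child_id)
    return nodes, edges

nodes, edges = generate_expression()
```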
We have shown that this approach can be used to generate small but semantically interesting expressions from very imprecise context information.
The presented model could be useful in program repair scenarios (where repair proposals need to be scored, based on their context) or in the code review setting (where it could highlight very unlikely expressions).
We also believe that similar models could have applications in related domains, such as semantic parsing, neural program synthesis and text generation. | Representing programs as graphs including semantics helps when generating programs | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:808 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
The inference of models, prediction of future symbols, and entropy rate estimation of discrete-time, discrete-event processes is well-worn ground.
However, many time series are better conceptualized as continuous-time, discrete-event processes.
Here, we provide new methods for inferring models, predicting future symbols, and estimating the entropy rate of continuous-time, discrete-event processes.
The methods rely on an extension of Bayesian structural inference that takes advantage of neural network’s universal approximation power.
Based on experiments with simple synthetic data, these new methods seem to be competitive with state-of-the-art methods for prediction and entropy rate estimation as long as the correct model is inferred.
Much scientific data is dynamic, meaning that we see not a static image of a system but its time evolution.
The additional richness of dynamic data should allow us to better understand the system, but we may not know how to process the richer data in a way that will yield new insight into the system in question.
For example, we have records of when earthquakes have occurred, but still lack the ability to predict earthquakes well or estimate their intrinsic randomness (Geller, 1997); we know which neurons have spiked when, but lack an understanding of the neural code (Rieke et al., 1999) ; and finally, we can observe organisms, but have difficulty modeling their behavior (Berman et al., 2016; Cavagna et al., 2014) .
Such examples are not only continuous-time, but also discrete-event, meaning that the observations belong to a finite set (e.g., a neuron spikes or is silent) and are not better described as a collection of real numbers.
These disparate scientific problems are begging for a unified framework for inferring expressive continuous-time, discrete-event models and for using those models to make predictions and, potentially, estimate the intrinsic randomness of the system.
In this paper, we present a step towards such a unified framework that takes advantage of: the inferential and predictive advantages of unifilarity, meaning that the hidden Markov model's underlying state (the so-called "causal state" (Shalizi & Crutchfield, 2001) or "predictive state representation" (Littman & Sutton, 2002)) can be uniquely identified from the past data; and the universal approximation power of neural networks (Hornik, 1991).
Indeed, one could view the proposed algorithm for model inference as the continuous-time extension of Bayesian structural inference Strelioff & Crutchfield (2014) .
We focus on time series that are discrete-event and inherently stochastic.
In particular, we infer the most likely unifilar hidden semi-Markov model (uhsMm) given data using the Bayesian information criterion.
This class of models is slightly more powerful than semi-Markov models, in which the future symbol depends only on the prior symbol, but for which the dwell time of the next symbol is drawn from a non-exponential distribution.
With unifilar hidden semi-Markov models, the probability of a future symbol depends on arbitrarily long pasts of prior symbols, and the dwell time distribution for that symbol is non-exponential.
Beyond just model inference, we can use the inferred model and the closed-form expressions in Ref. (Marzen & Crutchfield, 2017) to estimate the process' entropy rate, and we can use the inferred states of the uhsMm to predict future input via a k-nearest neighbors approach.
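As a toy illustration only (the topology, emission probabilities, and gamma dwell-time parameters below are invented for the example), a unifilar hidden semi-Markov process can be simulated by emitting a symbol from the current state, drawing a non-exponential dwell time, and then transitioning deterministically given the emitted symbol:

```python
import random

EMIT = {"A": {"0": 0.7, "1": 0.3}, "B": {"0": 0.2, "1": 0.8}}      # P(symbol | state)
NEXT = {("A", "0"): "A", ("A", "1"): "B", ("B", "0"): "A", ("B", "1"): "B"}
DWELL = {"A": (2.0, 1.0), "B": (5.0, 0.5)}                          # gamma shape, scale

def simulate(n_events=10, state="A"):
    events = []                                     # (symbol, dwell_time) pairs
    for _ in range(n_events):
        dist = EMIT[state]
        symbol = random.choices(list(dist), weights=list(dist.values()))[0]
        shape, scale = DWELL[state]
        dwell = random.gammavariate(shape, scale)   # non-exponential dwell time
        events.append((symbol, dwell))
        state = NEXT[(state, symbol)]               # unifilar: next state is deterministic
    return events

print(simulate())
```

Because the next state is a deterministic function of the current state and the emitted symbol, the hidden state can be recovered from the observed past, which is what makes prediction from inferred causal states feasible.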
We compare the latter two algorithms to reasonable extensions of state-of-the-art algorithms.
Our new algorithms appear competitive as long as model inference is in-class, meaning that the true model producing the data is equivalent to one of the models in our search.
In Sec. 3, we introduce the reader to unifilar hidden semi-Markov models.
In Sec. 4, we describe our new algorithms for model inference, entropy rate estimation, and time series prediction and test our algorithms on synthetic data that is memoryful.
And in Sec. 5, we discuss potential extensions and applications of this research.
We have introduced a new algorithm for inferring causal states (Shalizi & Crutchfield, 2001 ) of a continuous-time, discrete-event process using the groundwork of Ref. (Marzen & Crutchfield, 2017) .
We have introduced a new estimator of entropy rate that uses the causal states.
And finally, we have shown that a predictor based on causal states is more accurate and less compute-heavy than other predictors.
The new inference, estimation, and prediction algorithms could be used to infer a predictive model of complex continuous-time, discrete-event processes, such as animal behavior, and calculate estimates of the intrinsic randomness of such complex processes.
Future research could delve into improving estimators of other time series information measures (James et al., 2011) , using something more accurate than BIC to calculate MAP models, or enumerating the topology of all possible uhsMm models for non-binary alphabets (Johnson et al.) . | A new method for inferring a model of, estimating the entropy rate of, and predicting continuous-time, discrete-event processes. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:809 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
The field of medical diagnostics contains a wealth of challenges which closely resemble classical machine learning problems; practical constraints, however, complicate the translation of these endpoints naively into classical architectures.
Many tasks in radiology, for example, are largely problems of multi-label classification wherein medical images are interpreted to indicate multiple present or suspected pathologies.
Clinical settings drive the necessity for high accuracy simultaneously across a multitude of pathological outcomes and greatly limit the utility of tools which consider only a subset.
This issue is exacerbated by a general scarcity of training data and maximizes the need to extract clinically relevant features from available samples -- ideally without the use of pre-trained models which may carry forward undesirable biases from tangentially related tasks.
We present and evaluate a partial solution to these constraints in using LSTMs to leverage interdependencies among target labels in predicting 14 pathologic patterns from chest x-rays and establish state of the art results on the largest publicly available chest x-ray dataset from the NIH without pre-training.
Furthermore, we propose and discuss alternative evaluation metrics and their relevance in clinical practice.
Medical diagnostics have increasingly become a more interesting and viable endpoint for machine learning.
A general scarcity of publicly available medical data, however, inhibits its rapid development.
Pre-training on tangentially related datasets such as ImageNet BID4 ) has been shown to help in circumstances where training data is limited, but may introduce unintended biases which are undesirable in a clinical setting.
Furthermore, most clinical settings will drive a need for models which can accurately predict a large number of diagnostic outcomes.
This essentially turns many medical problems into multi-label classification with a large number of targets, many of which may be subtle or poorly defined and are likely to be inconsistently labeled.
In addition, unlike the traditional multi-label setting, predicting the absence of each label is as important as predicting its presence in order to minimize the possibility of misdiagnosis.
Each of these challenges drives a need for architectures which consider clinical context to make the most of the data available.
Chest x-rays are the most common type of radiology exam in the world and a particularly challenging example of multi-label classification in medical diagnostics.
Making up nearly 45% of all radiological studies, the chest x-ray has achieved global ubiquity as a low-cost screening tool for a wealth of pathologies including lung cancer, tuberculosis, and pneumonia.
Each scan can contain dozens of patterns corresponding to hundreds of potential pathologies and can thus be difficult to interpret, suffering from high disagreement rates between radiologists and often resulting in unnecessary follow-up procedures.
Complex interactions between abnormal patterns frequently have significant clinical meaning that provides radiologists with additional context.
For example, a study labeled to indicate the presence of cardiomegaly (enlargement of the cardiac silhouette) is more likely to additionally have pulmonary edema (abnormal fluid in the extravascular tissue of the lung) as the former may suggest left ventricular failure which often causes the latter.
The presence of edema further predicates the possible presence of both consolidation (air space opacification) and a pleural effusion (abnormal fluid in the pleural space).
Training a model to recognize the potential for these interdependencies could enable better prediction of pathologic outcomes across all categories while maximizing data utilization and statistical efficiency.
Among the aforementioned challenges, this work first addresses the problem of predicting multiple labels simultaneously while taking into account their conditional dependencies during both training and inference.
Similar problems have been raised and analyzed in the work of BID30 BID1 with the application of image tagging, both outside the medical context.
The work of BID26 for chest x-ray annotations is closest to ours.
All of them utilize out-of-the-box decoders based on recurrent neural networks (RNNs) to sequentially predict the labels.
Such a naive adoption of RNNs is problematic and often fails to attend to the peculiarities of the medical problem in its design, which we elaborate on in Section 2.3 and Section 3.3.1. In addition, we hypothesize that the need for pre-training may be safely removed when there are sufficient medical data available.
To verify this, all our models are trained from scratch, without using any extra data from other domains.
We directly compare our results with those of prior models that are pre-trained on ImageNet.
Furthermore, to address the issue of clinical interpretability, we juxtapose a collection of alternative metrics along with those traditionally used in machine learning, all of which are reported in our benchmark.
To improve the quality of computer-assisted diagnosis of chest x-rays, we proposed a two-stage end-to-end neural network model that combines a densely connected image encoder with a recurrent neural network decoder.
The first stage was chosen to address the challenges to learning presented by high-resolution medical images and limited training set sizes.
The second stage was designed to allow the model to exploit statistical dependencies between labels in order to improve the accuracy of its predictions.
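A minimal sketch of this two-stage pattern is shown below, assuming a generic image embedding from the first-stage encoder and an LSTM that emits one binary decision per pathology; the layer sizes, decoding order, and feedback scheme are illustrative, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class LabelDecoder(nn.Module):
    """Predict 14 labels sequentially so each decision can condition on earlier ones."""
    def __init__(self, img_dim=256, hidden=128, num_labels=14):
        super().__init__()
        self.init_h = nn.Linear(img_dim, hidden)       # image embedding seeds the LSTM state
        self.cell = nn.LSTMCell(1, hidden)             # input: previous label decision
        self.out = nn.Linear(hidden, 1)
        self.num_labels = num_labels

    def forward(self, img_embedding):
        h = torch.tanh(self.init_h(img_embedding))
        c = torch.zeros_like(h)
        prev = torch.zeros(img_embedding.shape[0], 1)  # "no decision yet" start token
        probs = []
        for _ in range(self.num_labels):
            h, c = self.cell(prev, (h, c))
            p = torch.sigmoid(self.out(h))             # P(label_k present | image, labels < k)
            probs.append(p)
            prev = (p > 0.5).float()                   # feed back the hard decision
        return torch.cat(probs, dim=1)                 # (batch, 14)

decoder = LabelDecoder()
predictions = decoder(torch.randn(2, 256))             # embeddings from any CNN encoder
```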
Finally, the model was trained from scratch to ensure that the best application-specific features were captured.
Our experiments have demonstrated both the feasibility and effectiveness of this approach.
Indeed, our baseline model significantly outperformed the current state-of-the-art.
The proposed set of metrics provides a meaningful quantification of this performance and will facilitate comparisons with future work. While a limited exploration into the value of learning interdependencies among labels yields promising results, additional experimentation will be required to further explore the potential of this methodology, both as it applies specifically to chest x-rays and to medical diagnostics as a whole.
One potential concern with this approach is the risk of learning biased interdependencies from a limited training set which does not accurately represent a realistic distribution of pathologies -if every example of cardiomegaly is also one of cardiac failure, the model may learn to depend too much on the presence of other patterns such as edemas which do not always accompany enlargement of the cardiac silhouette.
This risk is heightened when dealing with data labeled with a scheme which mixes pathologies, such as pneumonia, with patterns symptomatic of those pathologies, such as consolidation.
The best approach to maximizing feature extraction and leveraging interdependencies among target labels likely entails training from data labeled with an ontology that inherently poses some consistent known relational structure.
This will be the endpoint of a future study. | we present the state-of-the-art results of using neural networks to diagnose chest x-rays | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:81 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Recent studies show that convolutional neural networks (CNNs) are vulnerable under various settings, including adversarial examples, backdoor attacks, and distribution shifting.
Motivated by the findings that human visual system pays more attention to global structure (e.g., shape) for recognition while CNNs are biased towards local texture features in images, we propose a unified framework EdgeGANRob based on robust edge features to improve the robustness of CNNs in general, which first explicitly extracts shape/structure features from a given image and then reconstructs a new image by refilling the texture information with a trained generative adversarial network (GAN).
In addition, to reduce the sensitivity of edge detection algorithm to adversarial perturbation, we propose a robust edge detection approach Robust Canny based on the vanilla Canny algorithm.
To gain more insights, we also compare EdgeGANRob with its simplified backbone procedure EdgeNetRob, which performs learning tasks directly on the extracted robust edge features.
We find that EdgeNetRob can help boost model robustness significantly but at the cost of the clean model accuracy.
EdgeGANRob, on the other hand, is able to improve clean model accuracy compared with EdgeNetRob and without losing the robustness benefits introduced by EdgeNetRob.
Extensive experiments show that EdgeGANRob is resilient in different learning tasks under diverse settings.
Convolutional neural networks (CNNs) have been studied extensively (Goodfellow et al., 2016) , and have achieved state-of-the-art performance in many learning tasks (He et al., 2016; .
However, recent works have shown that CNNs are vulnerable to adversarial examples (Carlini and Wagner, 2017; Goodfellow et al., 2014b; Szegedy et al., 2013), where imperceptible perturbations can be added to the test data to tamper with the predictions.
Different from adversarial examples, where test data is manipulated, an orthogonal setting is data poisoning or backdoor attacks, where training data is manipulated to reduce the model's generalization accuracy and to achieve targeted poisoning attacks (Chen et al., 2017b).
In addition, recent studies show that CNNs tend to learn surface statistical regularities instead of high-level abstractions, leading them to fail to generalize to superficial pattern transformations (radial kernel, random kernel) (Jo and Bengio, 2017a; Wang et al., 2019a).
We refer to this problem as model's robustness under distribution shifting.
How to improve the general robustness of DNNs under these settings remains unsolved.
To improve the robustness of CNNs, recent studies explore the underlying cause of their vulnerability.
For example, Ilyas et al. (2019) attributes the existence of adversarial examples to the existence of non-robust but highly-predictive features.
They suggest to train a classifier only on "robust features" which contain the necessary information for recognition and are insensitive to small perturbations.
In addition, it is shown that human recognition relies mainly on global object shapes rather than local patterns (e.t. textures), while CNNs are more biased towards the latter (Baker et al., 2018; Geirhos et al., 2019) .
For instance, Geirhos et al. (2019) create a texture-shape cue conflict, such as a cat shape with elephant texture, and feed it to an ImageNet-trained CNN and to humans, respectively.
While humans can still recognize it as a cat, the CNN wrongly predicts it as an elephant.
Therefore, the bias toward local features potentially contributes to CNN's vulnerability to adversarial examples, distribution shifting and patterns of backdoor attacks.
In particular, previous research also shows that the shape of objects is the most important cue for human object recognition (Landau et al., 1988).
(Figure 1 caption: Structure of the proposed pipeline. EdgeNetRob feeds the output of edge detection to the classifier to produce robust predictions, while EdgeGANRob refills the edge image with texture information to reconstruct a new instance for predictions.)
Given the above evidence, a natural question emerges: can we improve the robustness of CNNs by making them rely more on global shape structure?
To answer this question, we need to formalize the notion of global shape structure first.
We propose to consider a specific type of shape representation: edges (image points that have sharp change in brightness).
Using edges comes with two benefits:
1) it is an effective device for modelling shape;
2) edges are easy to capture in images, with many sophisticated algorithms (Canny, 1986; Xie and Tu, 2015; Liu et al., 2017) available.
More specifically, this paper explores a new approach, EdgeGANRob, to improve the robustness of CNNs to adversarial attacks, distribution shifting and backdoor attacks by leveraging structural information in images.
The unified framework is shown in Figure 1 .
As illustrated, a simplified version of EdgeGANRob is a two-stage procedure named EdgeNetRob, which extracts the structural information by detecting edges and then trains the classifier on the extracted edges.
As a consequence, EdgeNetRob forces the CNNs to make prediction solely based on shape information, rather than texture/color, thus eliminating the texture bias (Geirhos et al., 2019) .
Our results show that EdgeNetRob can improve CNNs' robustness.
However, there are still two challenges:
(i) the direct differentiable edge detection algorithms are also vulnerable to attacks, which may lead to low robustness against sophisticated adaptive attackers.
To handle this problem, we propose a robust edge detection algorithm, Robust Canny.
Using Robust Canny enables EdgeNetRob to dramatically improve the robustness of EdgeGANRob.
As a result, this combined method outperforms the adversarial-retraining-based defense method.
(ii) Although EdgeNetRob improves the CNNs' robustness, it decreases the clean accuracy of CNNs due to the missing texture/color information.
This motivates the development of EdgeGANRob, which embeds a generative model to refill the texture/colors based on the edge images before they are fed into the classifier.
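Schematically, the two variants differ only in whether the edge map is refilled before classification; in the sketch below, robust_canny, generator, and classifier are placeholders for the components described above, and the toy stand-ins only demonstrate the data flow, not real models:

```python
import torch

def edgenet_rob(x, robust_canny, classifier):
    """Classify directly on the extracted (robust) edge map: shape only, no texture."""
    edges = robust_canny(x)
    return classifier(edges)

def edgegan_rob(x, robust_canny, generator, classifier):
    """Refill texture on the edge map with a trained GAN before classifying,
    recovering clean accuracy while keeping the robustness of the edge features."""
    edges = robust_canny(x)
    reconstructed = generator(edges)
    return classifier(reconstructed)

# Toy usage with stand-in callables, just to show shapes and data flow.
x = torch.rand(1, 3, 32, 32)
fake_canny = lambda img: (img.mean(dim=1, keepdim=True) > 0.5).float()
fake_generator = lambda e: e.repeat(1, 3, 1, 1)
fake_classifier = lambda img: torch.softmax(
    img.flatten(1) @ torch.randn(img.flatten(1).shape[1], 10), dim=1)
logits = edgegan_rob(x, fake_canny, fake_generator, fake_classifier)
```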
Please find more visualization results on the anonymous website: https://sites.google.com/view/edgenetrob.
The main contributions of this paper include:
(i) We propose a unified framework, EdgeGANRob, to improve the robustness of CNNs on multiple tasks simultaneously; it explicitly extracts edge/structure information from input images and then reconstructs the original images by refilling the textural information with a GAN.
(ii) To remain robust against sophisticated adaptive evasion attacks, in which attackers have access to the defense algorithm, we propose a robust edge detection approach Robust Canny based on the vanilla Canny algorithm to reduce the sensitivity of edge detector to adversarial perturbation.
(iii) To further demonstrate the effectiveness of the inpainting GAN in EdgeGANRob, we also evaluate its simplified backbone procedure EdgeNetRob by performing learning tasks directly on the extracted robust edge features.
To justify the above contributions, we conduct thorough evaluation on EdgeNetRob and EdgeGANRob in three tasks: adversarial attacks, distribution shifting and backdoor attacks, where significant improvements are achieved.
We introduced a new method based on robust edge features for improving general model robustness.
By combining a robust edge feature extractor with the generative adversarial network, our method simultaneously achieves competitive results in terms of both adversarial robustness and generalization under distribution shifting.
Additionally, we show that it can also be used to improve robustness against backdoor attacks.
Our results highlight the importance of using shape information in improving model robustness and we believe it is a promising direction for future work. | A unified model to improve model robustness against multiple tasks | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:810 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Learning rich representation from data is an important task for deep generative models such as variational auto-encoder (VAE).
However, by extracting high-level abstractions in the bottom-up inference process, the goal of preserving all factors of variations for top-down generation is compromised.
Motivated by the concept of “starting small”, we present a strategy to progressively learn independent hierarchical representations from high- to low-levels of abstractions.
The model starts with learning the most abstract representation, and then progressively grow the network architecture to introduce new representations at different levels of abstraction.
We quantitatively demonstrate the ability of the presented model to improve disentanglement in comparison to existing works on two benchmark datasets using three disentanglement metrics, including a new metric we proposed to complement the previously-presented metric of mutual information gap.
We further present both qualitative and quantitative evidence on how the progression of learning improves disentangling of hierarchical representations.
By drawing on the respective advantage of hierarchical representation learning and progressive learning, this is to our knowledge the first attempt to improve disentanglement by progressively growing the capacity of VAE to learn hierarchical representations.
Variational auto-encoder (VAE), a popular deep generative model (DGM), has shown great promise in learning interpretable and semantically meaningful representations of data ; Chen et al. (2018) ; Kim & Mnih (2018) ).
However, VAE has not been able to fully utilize the depth of neural networks like its supervised counterparts, for which a fundamental cause lies in the inherent conflict between the bottom-up inference and top-down generation process (Zhao et al. (2017) ; Li et al. (2016) ): while the bottom-up abstraction is able to extract high-level representations helpful for discriminative tasks, the goal of generation requires the preservation of all generative factors that are likely at different abstraction levels.
This issue was addressed in recent works by allowing VAEs to generate from details added at different depths of the network, using either memory modules between top-down generation layers (Li et al. (2016) ), or hierarchical latent representations extracted at different depths via a variational ladder autoencoder (VLAE, Zhao et al. (2017) ).
However, it is difficult to learn to extract and disentangle all generative factors at once, especially at different abstraction levels.
Inspired by human cognition system, Elman (1993) suggested the importance of "starting small" in two aspects of the learning process of neural networks: incremental input in which a network is trained with data and tasks of increasing complexity, and incremental memory in which the network capacity undergoes developmental changes given fixed external data and tasks -both pointing to an incremental learning strategy for simplifying a complex final task.
Indeed, the former concept of incremental input has underpinned the success of curriculum learning (Bengio et al. (2015) ).
In the context of DGMs, various stacked versions of generative adversarial networks (GANs) have been proposed to decompose the final task of high-resolution image generation into progressive sub-tasks of generating small to large images (Denton et al. (2015) ; Zhang et al. (2018) ).
The latter aspect of "starting small" with incremental growth of network capacity is less explored, although recent works have demonstrated the advantage of progressively growing the depth of GANs for generating high-resolution images (Karras et al. (2018) ; ).
These works, so far, have focused on progressive learning as a strategy to improve image generation.
We are motivated to investigate the possibility to use progressive learning strategies to improve learning and disentangling of hierarchical representations.
At a high level, the idea of progressively or sequentially learning latent representations has been previously considered in VAE.
In Gregor et al. (2015) , the network learned to sequentially refine generated images through recurrent networks.
In Lezama (2019) , a teacher-student training strategy was used to progressively increase the number of latent dimensions in VAE to improve the generation of images while preserving the disentangling ability of the teacher model.
However, these works primarily focus on progressively growing the capacity of VAE to generate, rather than to extract and disentangle hierarchical representations.
In comparison, in this work, we focus on
1) progressively growing the capacity of the network to extract hierarchical representations, and
2) these hierarchical representations are extracted and used in generation from different abstraction levels.
We present a simple progressive training strategy that grows the hierarchical latent representations from different depths of the inference and generation model, learning from high-to low-levels of abstractions as the capacity of the model architecture grows.
Because it can be viewed as a progressive strategy to train the VLAE presented in Zhao et al. (2017) , we term the presented model pro-VLAE.
We quantitatively demonstrate the ability of pro-VLAE to improve disentanglement on two benchmark data sets using three disentanglement metrics, including a new metric we proposed to complement the metric of mutual information gap (MIG) previously presented in Chen et al. (2018) .
These quantitative studies include comprehensive comparisons to β-VAE ), VLAE (Zhao et al. (2017) ), and the teacher-student strategy as presented in (Lezama (2019) ) at different values of the hyperparameter β.
We further present both qualitative and quantitative evidence that pro-VLAE is able to first learn the most abstract representations and then progressively disentangle existing factors or learn new factors at lower levels of abstraction, improving disentangling of hierarhical representations in the process.
In this work, we present a progressive strategy for learning and disentangling hierarchical representations.
Starting from a simple VAE, the model first learn the most abstract representation.
Next, the model learn independent representations from high-to low-levels of abstraction by progressively growing the capacity of the VAE deep to shallow.
Experiments on several benchmark data sets demonstrated the advantages of the presented method.
An immediate future work is to include stronger guidance for allocating information across the hierarchy of abstraction levels, either through external multi-scale image supervision or internal information-theoretic regularization strategies. | We proposed a progressive learning method to improve learning and disentangling latent representations at different levels of abstraction. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:811 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Deep latent variable models are powerful tools for representation learning.
In this paper, we adopt the deep information bottleneck model, identify its shortcomings and propose a model that circumvents them.
To this end, we apply a copula transformation which, by restoring the invariance properties of the information bottleneck method, leads to disentanglement of the features in the latent space.
Building on that, we show how this transformation translates to sparsity of the latent space in the new model.
We evaluate our method on artificial and real data.
In recent years, deep latent variable models BID13 BID22 BID7 have become a popular toolbox in the machine learning community for a wide range of applications BID14 BID19 BID11 .
At the same time, the compact representation, sparsity and interpretability of the latent feature space have been identified as crucial elements of such models.
In this context, multiple contributions have been made in the field of relevant feature extraction BID3 BID0 and learning of disentangled representations of the latent space BID5 BID2 BID9. In this paper, we consider latent space representation learning. We
focus on disentangling features with the copula transformation and, building on that, on forcing a compact low-dimensional representation with a sparsity-inducing model formulation. To
this end, we adopt the deep information bottleneck (DIB) model BID0 which combines the information bottleneck and variational autoencoder methods. The
information bottleneck (IB) principle BID26 identifies relevant features with respect to a target variable. It
takes two random vectors x and y and searches for a third random vector t which, while compressing x, preserves information contained in y. A
variational autoencoder (VAE) BID13 BID22 ) is a generative model which learns a latent representation t of x by using the variational approach.Although DIB produces good results in terms of image classification and adversarial attacks, it suffers from two major shortcomings. First
, the IB solution only depends on the copula of x and y and is thus invariant to strictly monotone transformations of the marginal distributions. DIB
does not preserve this invariance, which means that it is unnecessarily complex by also implicitly modelling the marginal distributions. We
elaborate on the fundamental issues arising from this lack of invariance in Section 3. Second
, the latent space of the IB is not sparse which results in the fact that a compact feature representation is not feasible.Our contribution is two-fold: In the first step, we restore the invariance properties of the information bottleneck solution in the DIB. We achieve
this by applying a transformation of x and y which makes the latent space only depend on the copula. This is a
way to fully represent all the desirable features inherent to the IB formulation. The model
is also simplified by ensuring robust and fully non-parametric treatment of the marginal distributions. In addition
, the problems arising from the lack of invariance to monotone transformations of the marginals are solved. In the second
step, once the invariance properties are restored, we exploit the sparse structure of the latent space of DIB. This is possible
thanks to the copula transformation in conjunction with using the sparse parametrisation of the information bottleneck, proposed by BID21 . It translates to
a more compact latent space that results in a better interpretability of the model. The remainder of
this paper is structured as follows: In Section 2, we review publications on related models. Subsequently, in
Section 3, we describe the proposed copula transformation and show how it fixes the shortcomings of DIB, as well as elaborate on the sparsity induced in the latent space. In Section 4, we
present results of both synthetic and real data experiments. We conclude our
paper in Section 5.
We have presented a novel approach to compact representation learning of deep latent variable models.
To this end, we showed that restoring invariance properties of the Deep Information Bottleneck with a copula transformation leads to disentanglement of the features in the latent space.
Subsequently, we analysed how the copula transformation translates to sparsity in the latent space of the considered model.
The proposed model allows for a simplified and fully non-parametric treatment of marginal distributions which has the advantage that it can be applied to distributions with arbitrary marginals.
We evaluated our method on both artificial and real data.
We showed that in practice the copula transformation leads to latent spaces that are disentangled, have an increased prediction capability and are resilient to adversarial attacks.
All these properties are not sensitive to the only hyperparameter of the model, λ.In Section 3.2, we motivated the copula transformation for the Deep Information Bottleneck with the lack of invariance properties present in the original Information Bottleneck model, making the copula augmentation particularly suited for the DIB.
The relevance of the copula transformation, however, reaches beyond the variational autoencoder, as evidenced by e.g. resilience to adversarial attacks or the positive influence on convergence rates presented in Section 4.
These advantages of our model that do not simply follow from restoring the Information Bottleneck properties to the DIB, but are additional benefits of the copula.
The copula transformation thus promises to be a simple but powerful addition to the general deep learning toolbox. | We apply the copula transformation to the Deep Information Bottleneck which leads to restored invariance properties and a disentangled latent space with superior predictive capabilities. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:812 |
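The transformation described in the entry above makes the model depend only on the copula of the data by treating the marginals non-parametrically. The snippet below is a minimal sketch of one standard way to realize such a transform, using ranks (the probability integral transform) followed by Gaussian scores; it is an assumption about a typical implementation, not the paper's exact code.

```python
import numpy as np
from scipy.stats import norm, rankdata

def copula_transform(x):
    """Map each column of x to approximately N(0, 1) via its empirical CDF.

    The transform is invariant to strictly monotone changes of the marginals,
    so any model fit on the output depends only on the copula of the data.
    """
    x = np.asarray(x, dtype=float)
    n = x.shape[0]
    # rankdata gives ranks in 1..n; divide by n + 1 to stay strictly inside (0, 1).
    u = np.column_stack([rankdata(x[:, j]) / (n + 1) for j in range(x.shape[1])])
    return norm.ppf(u)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    raw = np.column_stack([rng.lognormal(size=1000), rng.exponential(size=1000)])
    z = copula_transform(raw)
    print(z.mean(axis=0), z.std(axis=0))  # roughly 0 and 1 per column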
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
State-of-the-art machine learning methods exhibit limited compositional generalization.
At the same time, there is a lack of realistic benchmarks that comprehensively measure this ability, which makes it challenging to find and evaluate improvements.
We introduce a novel method to systematically construct such benchmarks by maximizing compound divergence while guaranteeing a small atom divergence between train and test sets, and we quantitatively compare this method to other approaches for creating compositional generalization benchmarks.
We present a large and realistic natural language question answering dataset that is constructed according to this method, and we use it to analyze the compositional generalization ability of three machine learning architectures.
We find that they fail to generalize compositionally and that there is a surprisingly strong negative correlation between compound divergence and accuracy.
We also demonstrate how our method can be used to create new compositionality benchmarks on top of the existing SCAN dataset, which confirms these findings.
Human intelligence exhibits systematic compositionality (Fodor & Pylyshyn, 1988) , the capacity to understand and produce a potentially infinite number of novel combinations of known components, i.e., to make "infinite use of finite means" (Chomsky, 1965) .
In the context of learning from a set of training examples, we can observe compositionality as compositional generalization, which we take to mean the ability to systematically generalize to composed test examples of a certain distribution after being exposed to the necessary components during training on a different distribution.
Humans demonstrate this ability in many different domains, such as natural language understanding (NLU) and visual scene understanding.
For example, we can learn the meaning of a new word and then apply it to other language contexts.
As Lake & Baroni (2018) put it: "Once a person learns the meaning of a new verb 'dax', he or she can immediately understand the meaning of 'dax twice' and 'sing and dax'."
Similarly, we can learn a new object shape and then understand its compositions with previously learned colors or materials (Johnson et al., 2017; Higgins et al., 2018) .
In contrast, state-of-the-art machine learning (ML) methods often fail to capture the compositional structure that is underlying the problem domain and thus fail to generalize compositionally Bastings et al., 2018; Loula et al., 2018; Russin et al., 2019; Johnson et al., 2017) .
We believe that part of the reason for this shortcoming is a lack of realistic benchmarks that comprehensively measure this aspect of learning in realistic scenarios.
As others have proposed, compositional generalization can be assessed using a train-test split based on observable properties of the examples that intuitively correlate with their underlying compositional structure.
Finegan-Dollak et al. (2018) , for example, propose to test on different output patterns than are in the train set, while propose, among others, to split examples by output length or to test on examples containing primitives that are rarely shown during training.
In this paper, we formalize and generalize this intuition and make these contributions:
• We introduce distribution-based compositionality assessment (DBCA), which is a novel method to quantitatively assess the adequacy of a particular dataset split for measuring compositional generalization and to construct splits that are ideally suited for this purpose (Section 2).
• We present the Compositional Freebase Questions (CFQ) 1 , a simple yet realistic and large NLU dataset that is specifically designed to measure compositional generalization using the DBCA method, and we describe how to construct such a dataset (Section 3).
• We use the DBCA method to construct a series of experiments for measuring compositionality on CFQ and SCAN and to quantitatively compare these experiments to other compositionality experiments (Section 4).
• We analyze the performance of three baseline ML architectures on these experiments and show that these architectures fail to generalize compositionally, and perhaps more surprisingly, that compound divergence between train and test sets is a good predictor of the test accuracy (Section 5).
In this paper we presented what is (to the best of our knowledge) the largest and most comprehensive benchmark for compositional generalization on a realistic NLU task.
It is based on a new dataset generated via a principled rule-based approach and a new method of splitting the dataset by optimizing the divergence of atom and compound distributions between train and test sets.
The performance of three baselines indicates that in a simple but realistic NLU scenario, state-of-the-art learning systems fail to generalize compositionally even if they are provided with large amounts of training data and that the mean accuracy is strongly correlated with the compound divergence.
We hope our work will inspire others to use this benchmark as a yardstick to advance the compositional generalization capabilities of learning systems and achieve high accuracy at high compound divergence.
Some specific directions that we consider promising include applying unsupervised pretraining on the input language or output queries and the use of more diverse or more targeted learning architectures, such as syntactic attention (Russin et al., 2019) .
We also believe it would be interesting to apply the DBCA approach to other domains such as visual reasoning, e.g. based on CLEVR (Johnson et al., 2017) .
In the area of compositionality benchmarks, we are interested in determining the performance of current architectures on the end-to-end task that expects a natural language answer given a natural language question in CFQ.
We would like also to extend our approach to broader subsets of language understanding, including use of ambiguous constructs, negations, quantification, comparatives, additional languages, and other vertical domains. | Benchmark and method to measure compositional generalization by maximizing divergence of compound frequency at small divergence of atom frequency. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:813 |
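The DBCA splits described in the entry above are driven by a divergence between the normalized frequency distributions of atoms and compounds in the train and test sets. The sketch below computes a Chernoff-style divergence between two frequency dictionaries; the α value and the toy compound counts are illustrative assumptions in the spirit of the method, not a reproduction of the paper's exact constants.

```python
from collections import Counter

def divergence(freq_a, freq_b, alpha):
    """Chernoff-style divergence between two frequency distributions.

    D = 1 - sum_k p_k**alpha * q_k**(1 - alpha), with p and q normalized counts.
    Returns 0 for identical distributions and approaches 1 for disjoint ones.
    """
    total_a = sum(freq_a.values())
    total_b = sum(freq_b.values())
    keys = set(freq_a) | set(freq_b)
    overlap = sum((freq_a.get(k, 0) / total_a) ** alpha *
                  (freq_b.get(k, 0) / total_b) ** (1 - alpha) for k in keys)
    return 1.0 - overlap

if __name__ == "__main__":
    train_compounds = Counter({("director", "film"): 40, ("actor", "film"): 60})
    test_compounds = Counter({("director", "film"): 5, ("director", "actor"): 95})
    # A small alpha emphasizes how well the test compounds are covered by the train set.
    print(divergence(train_compounds, test_compounds, alpha=0.1))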
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
To understand how object vision develops in infancy and childhood, it will be necessary to develop testable computational models.
Deep neural networks (DNNs) have proven valuable as models of adult vision, but it is not yet clear if they have any value as models of development.
As a first model, we measured learning in a DNN designed to mimic the architecture and representational geometry of the visual system (CORnet).
We quantified the development of explicit object representations at each level of this network through training by freezing the convolutional layers and training an additional linear decoding layer.
We evaluate decoding accuracy on the whole ImageNet validation set, and also for individual visual classes.
CORnet, however, uses supervised training and because infants have only extremely impoverished access to labels they must instead learn in an unsupervised manner.
We therefore also measured learning in a state-of-the-art unsupervised network (DeepCluster).
CORnet and DeepCluster differ in both supervision and in the convolutional networks at their heart, thus to isolate the effect of supervision, we ran a control experiment in which we trained the convolutional network from DeepCluster (an AlexNet variant) in a supervised manner.
We make predictions on how learning should develop across brain regions in infants.
In all three networks, we also tested for a relationship in the order in which infants and machines acquire visual classes, and found only evidence for a counter-intuitive relationship.
We discuss the potential reasons for this.
DNNs were inspired by the brain.
Although DNNs learn like humans from large quantities of data, there is little work to build formal connections between infant and machine learning.
Such connections have the potential to bring considerable insight to both fields but the challenge is to find defining characteristics that can be measured in both systems.
This paper has addressed this challenge by measuring two characteristic features in DNNs that can be measured in infants.
A APPENDIX: DETERMINING NUMBER OF TRAINING EPOCHS FOR THE OBJECT DECODER Training the object decoders was the most computationally expensive part of this project, as one was trained for every layer across many epochs and models.
It was therefore necessary to use as few training epochs as possible.
To evaluate how many were needed, we trained decoders for 5 epochs on features from a sample of convolutional training epochs (0, 20, 40, 60) and all layers (Fig. 4) .
It was found that while there was a steady increase in decoding performance up to (and presumably beyond) the 5 epochs, the relative performance across different layers, or epochs, was broadly captured by epoch 2.
For further analyses we therefore used 2 epochs of training for the decoding layer.
Fig. 1 shows the layerwise changes in top-5 precision through learning.
Fig. 5 shows the corresponding changes in cross-entropy loss. | Unsupervised networks learn from bottom up; machines and infants acquire visual classes in different orders | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:814 |
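The measurements in the entry above come from freezing the convolutional layers and training only a linear decoding layer on top of each layer's features. A minimal stand-in for that protocol is a multinomial logistic regression fit on pre-extracted activations, sketched below; the random arrays stand in for real features and labels and are purely illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def linear_probe_accuracy(train_feats, train_labels, test_feats, test_labels):
    """Fit a linear decoder on frozen features and report test accuracy."""
    clf = LogisticRegression(max_iter=1000)  # softmax decoder over the classes
    clf.fit(train_feats, train_labels)
    return clf.score(test_feats, test_labels)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-ins for activations of one frozen layer (e.g. 512-d features).
    train_x, test_x = rng.normal(size=(2000, 512)), rng.normal(size=(500, 512))
    train_y, test_y = rng.integers(0, 10, 2000), rng.integers(0, 10, 500)
    print(linear_probe_accuracy(train_x, train_y, test_x, test_y))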
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We show that in a variety of large-scale deep learning scenarios the gradient dynamically converges to a very small subspace after a short period of training.
The subspace is spanned by a few top eigenvectors of the Hessian (equal to the number of classes in the dataset), and is mostly preserved over long periods of training.
A simple argument then suggests that gradient descent may happen mostly in this subspace.
We give an example of this effect in a solvable model of classification, and we comment on possible implications for optimization and learning.
Stochastic gradient descent (SGD) BID14 and its variants are used to train nearly every large-scale machine learning model.
Its ubiquity in deep learning is connected to the efficiency at which gradients can be computed BID15 BID16 , though its success remains somewhat of a mystery due to the highly nonlinear and nonconvex nature of typical deep learning loss landscapes .
In an attempt to shed light on this question, this paper investigates the dynamics of the gradient and the Hessian matrix during SGD.
In a common deep learning scenario, models contain many more tunable parameters than training samples.
In such "overparameterized" models, one expects generically that the loss landscape should have many flat directions: directions in parameter space in which the loss changes by very little or not at all (we will use "flat" colloquially to also mean approximately flat).
Intuitively, this may occur because the overparameterization leads to a large redundancy in configurations that realize the same decrease in the loss after a gradient descent update.
One local way of measuring the flatness of the loss function involves the Hessian.
Small or zero eigenvalues in the spectrum of the Hessian are an indication of flat directions BID10.
In , the spectrum of the Hessian for deep learning cross-entropy losses was analyzed in depth.
These works showed empirically that along the optimization trajectory the spectrum separates into two components: a bulk component with many small eigenvalues, and a top component of much larger positive eigenvalues.
Correspondingly, at each point in parameter space the tangent space has two orthogonal components, which we will call the bulk subspace and the top subspace.
The dimension of the top subspace is k, the number of classes in the classification objective.
This result indicates the presence of many flat directions, which is consistent with the general expectation above.
In this work we present two novel observations:
• First, the gradient of the loss during training quickly moves to lie within the top subspace of the Hessian. Within this subspace the gradient seems to have no special properties; its direction appears random with respect to the eigenvector basis.
• Second, the top Hessian eigenvectors evolve nontrivially but tend not to mix with the bulk eigenvectors, even over hundreds of training steps or more. In other words, the top subspace is approximately preserved over long periods of training.
These observations are borne out across model architectures, including fully connected networks, convolutional networks, and ResNet-18, and data sets (FIG1, TAB0, Appendices C-D).
Taken all together, despite the large number of training examples and even larger number of parameters in deep-learning models, these results seem to imply that learning may happen in a tiny, slowly-evolving subspace.
Indeed, consider a gradient descent step −ηg, where η is the learning rate and g the gradient.
The change in the loss to leading order in η is δL = −η‖g‖².
Now, let g_top be the projection of g onto the top subspace of the Hessian.
If the gradient is mostly contained within this subspace, then doing gradient descent with g_top instead of g will yield a similar decrease in the loss, assuming the linear approximation is valid.
Therefore, we think this may have bearing on the question of how gradient descent can traverse such a nonlinear and nonconvex landscape.
To shed light on this mechanism more directly, we also present a toy model of softmax regression trained on a mixture of Gaussians that displays all of the effects observed in the full deep-learning scenarios.
This isn't meant as a definitive explanation, but rather an illustrative example in which we can understand these phenomena directly.
In this model, we can solve the gradient descent equations exactly in a limit where the Gaussians have zero variance.
We find that the gradient is concentrated in the top Hessian subspace, while the bulk subspace has all zero eigenvalues.
We then argue and use empirical simulations to show that including a small amount of variance will not change these conclusions, even though the bulk subspace will now contain non-zero eigenvalues.
Finally, we conclude by discussing some consequences of these observations for learning and optimization, leaving the study of improving current methods based on these ideas for future work.
We have seen that quite generally across architectures, training methods, and tasks, that during the course of training the Hessian splits into two slowly varying subspaces, and that the gradient lives in the subspace spanned by the k eigenvectors with largest eigenvalues (where k is the number of classes).
The fact that learning appears to concentrate in such a small subspace with all positive Hessian eigenvalues might be a partial explanation for why deep networks train so well despite having a nonconvex loss function.
The gradient essentially lives in a convex subspace, and perhaps that lets one extend the associated guarantees to regimes in which they otherwise wouldn't apply. An essential question of future study concerns further investigation of the nature of this nearly preserved subspace.
From Section 3, we understand, at least in certain examples, why the spectrum splits into two blocks as was first discovered by .
However, we would like to further understand the hierarchy of the eigenvalues in the top subspace and how the top subspace mixes with itself in deep learning examples.
We'd also like to investigate more directly the different eigenvectors in this subspace and see whether they have any transparent meaning, with an eye towards possible relevance for feature extraction. Central to our claim about learning happening in the top subspace was the fact that the decrease in the loss was predominantly due to the projection of the gradient onto this subspace.
Of course, one could explicitly make this projection onto g top and use that to update the parameters.
By the argument given in the introduction, the loss on the current iteration will decrease by almost the same amount if the linear approximation holds.
However, updating with g top has a nonlinear effect on the dynamics and may, for example, alter the spectrum or cause the top subspace to unfreeze.
Further study of this is warranted.Similarly, given the nontrivial relationship between the Hessian and the gradient, a natural question is whether there are any practical applications for second-order optimization methods (see BID7 for a review).
Much of this will be the subject of future research, but we will conclude by making a few preliminary comments here. An obvious place to start is with Newton's method BID7.
Newton's method consists of the parameter update θ ← θ − H⁻¹ g, where H is the Hessian and g the gradient.
There are a few traditional criticisms of Newton's method.
The most practical is that for models as large as typical deep networks, computation of the inverse of the highly-singular Hessian acting on the gradient is infeasible.
Even if one could represent the matrix, the fact that the Hessian is so ill-conditioned makes inverting it not well-defined.
A second criticism of Newton's method is that it does not strictly descend, but rather moves towards critical points, whether they are minima, maxima, or saddles .
These objections have apparent simple resolutions given our results.
Since the gradient predominantly lives in a tiny nearly-fixed top subspace, this suggests a natural low-rank approximation to Newton's method that restricts the update to the top subspace, schematically θ ← θ − (H_top)⁻¹ g_top. Inverting the Hessian in the top subspace is well-defined and computationally simple. Furthermore, the top subspace of the Hessian has strictly positive eigenvalues, indicating that this approximation to Newton's method will descend rather than climb. Of course,
Newton's method is not the only second-order path towards optima, and similar statements apply to other methods. | For classification problems with k classes, we show that the gradient tends to live in a tiny, slowly-evolving subspace spanned by the eigenvectors corresponding to the k-largest eigenvalues of the Hessian. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:815 |
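The central measurement in the entry above is the fraction of the gradient that lies in the span of the top-k Hessian eigenvectors. The sketch below shows that bookkeeping on a synthetic Hessian and gradient; in practice the eigenvector pairs would come from a Lanczos or power-iteration routine on the real network, which is assumed rather than shown here.

```python
import numpy as np

def top_subspace_fraction(hessian, grad, k):
    """Return ||P_top g||^2 / ||g||^2, the share of the gradient norm that
    falls in the subspace spanned by the k largest-eigenvalue eigenvectors."""
    eigvals, eigvecs = np.linalg.eigh(hessian)   # eigenvalues in ascending order
    top_vecs = eigvecs[:, -k:]                   # columns spanning the top subspace
    g_top = top_vecs @ (top_vecs.T @ grad)       # orthogonal projection of g
    return float(np.dot(g_top, g_top) / np.dot(grad, grad))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, k = 200, 10
    # Synthetic Hessian with a separated top block, mimicking the observed spectrum.
    a = rng.normal(size=(d, d))
    hessian = a @ a.T / d + np.diag([50.0] * k + [0.0] * (d - k))
    grad = rng.normal(size=d)
    print(top_subspace_fraction(hessian, grad, k))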
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Answering questions that require multi-hop reasoning at web-scale necessitates retrieving multiple evidence documents, one of which often has little lexical or semantic relationship to the question.
This paper introduces a new graph-based recurrent retrieval approach that learns to retrieve reasoning paths over the Wikipedia graph to answer multi-hop open-domain questions.
Our retriever model trains a recurrent neural network that learns to sequentially retrieve evidence paragraphs in the reasoning path by conditioning on the previously retrieved documents.
Our reader model ranks the reasoning paths and extracts the answer span included in the best reasoning path.
Experimental results show state-of-the-art results in three open-domain QA datasets, showcasing the effectiveness and robustness of our method.
Notably, our method achieves significant improvement in HotpotQA, outperforming the previous best model by more than 14 points.
Open-domain Question Answering (QA) is the task of answering a question given a large collection of text documents (e.g., Wikipedia).
Most state-of-the-art approaches for open-domain QA (Chen et al., 2017; Wang et al., 2018a; Lee et al., 2018; Yang et al., 2019) leverage non-parameterized models (e.g., TF-IDF or BM25) to retrieve a fixed set of documents, where an answer span is extracted by a neural reading comprehension model.
Despite the success of these pipeline methods in single-hop QA, whose questions can be answered based on a single paragraph, they often fail to retrieve the required evidence for answering multi-hop questions, e.g., the question in Figure 1.
Multi-hop QA (Yang et al., 2018) usually requires finding more than one evidence document, one of which often consists of little lexical overlap or semantic relationship to the original question.
However, retrieving a fixed list of documents independently does not capture relationships between evidence documents through bridge entities that are required for multi-hop reasoning.
Recent open-domain QA methods learn end-to-end models to jointly retrieve and read documents (Seo et al., 2019; Lee et al., 2019) .
These methods, however, face challenges for entity-centric questions since compressing the necessary information into an embedding space does not capture lexical information in entities.
Cognitive Graph (Ding et al., 2019) incorporates entity links between documents for multi-hop QA to extend the list of retrieved documents.
This method, however, compiles a fixed list of documents independently and expects the reader to find the reasoning paths.
In this paper, we introduce a new recurrent graph-based retrieval method that learns to retrieve evidence documents as reasoning paths for answering complex questions.
Our method sequentially retrieves each evidence document, given the history of previously retrieved documents to form several reasoning paths in a graph of entities.
Our method then leverages an existing reading comprehension model to answer questions by ranking the retrieved reasoning paths.
The strong interplay between the retriever model and reader model enables our entire method to answer complex questions by exploring more accurate reasoning paths compared to other methods.
structure of the documents during the iterative retrieval process.
In addition, all of these multi-step retrieval methods do not accommodate arbitrary steps of reasoning and the termination condition is hard-coded.
In contrast, our method leverages the Wikipedia graph to retrieve documents that are lexically or semantically distant to questions, and is adaptive to any reasoning path lengths, which leads to significant improvement over the previous work in HotpotQA and SQuAD Open.
This paper introduces a new graph-based recurrent retrieval approach, which retrieves reasoning paths over the Wikipedia graph to answer multi-hop open-domain questions.
Our retriever model learns to sequentially retrieve evidence paragraphs to form the reasoning path.
Subsequently, our reader model re-ranks the reasoning paths, and it determines the final answer as the one extracted from the best reasoning path.
Our experimental results significantly advance the state of the art on HotpotQA by more than 14 points absolute gain on the full wiki setting.
Our approach also achieves the state-of-the-art performance on SQuAD Open and Natural Questions Open without any architectural changes, demonstrating the robustness of our method.
Our method provides insights into the underlying entity relationships, and the discrete reasoning paths are helpful in interpreting our framework's reasoning process.
Future work involves end-to-end training of our graph-based recurrent retriever and reader for improving upon our current two-stage training.
where W r ∈ R d×2d is a weight matrix, b r ∈ R d is a bias vector, and α ∈ R 1 is a scalar parameter (initialized with 1.0).
We set the global initial state a 1 to a parameterized vector s ∈ R d , and we also parameterize an [EOE] vector w [EOE] ∈ R d for the [EOE] symbol.
The use of w i for both the input and output layers is inspired by Inan et al. (2017); Press & Wolf (2017) .
In addition, we align the norm of w [EOE] with those of w i , by applying layer normalization (Ba et al., 2016) of the last layer in BERT because w [EOE] is used along with the BERT outputs.
Without the layer normalization, the L2-norms of w i and w [EOE] can be quite different, and the model can easily discriminate between them by the difference of the norms. | Graph-based recurrent retriever that learns to retrieve reasoning paths over Wikipedia Graph outperforms the most recent state of the art on HotpotQA by more than 14 points. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:816 |
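The retriever in the entry above scores the next paragraph conditioned on the question and the paragraphs already on the path, following hyperlinks in a Wikipedia graph. The sketch below reduces that idea to a greedy walk over a toy link graph with a pluggable scoring function; the graph, the scorer, and the stopping token are illustrative assumptions, not the paper's trained components.

```python
def retrieve_path(question, graph, score, start_candidates, max_hops=3):
    """Greedily grow a reasoning path: at each hop, score candidate documents
    given the question and the history, and stop when '[EOE]' wins."""
    path = []
    candidates = list(start_candidates)
    for _ in range(max_hops):
        ranked = sorted(candidates + ["[EOE]"],
                        key=lambda doc: score(question, path, doc),
                        reverse=True)
        best = ranked[0]
        if best == "[EOE]":
            break
        path.append(best)
        candidates = graph.get(best, [])  # follow outgoing hyperlinks of the last pick
    return path

if __name__ == "__main__":
    toy_graph = {"Million Dollar Baby": ["Clint Eastwood"], "Clint Eastwood": []}

    def toy_score(question, history, doc):
        # Stand-in for the recurrent scorer: token overlap, plus a stop score
        # that only becomes attractive once the path holds two paragraphs.
        if doc == "[EOE]":
            return 0.5 if len(history) >= 2 else -1.0
        return len(set(question.lower().split()) & set(doc.lower().split()))

    print(retrieve_path("who directed million dollar baby", toy_graph,
                        toy_score, ["Million Dollar Baby", "Clint Eastwood"]))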
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Stereo matching is one of the important basic tasks in the computer vision field.
In recent years, stereo matching algorithms based on deep learning have achieved excellent performance and become the mainstream research direction.
Existing algorithms generally use deep convolutional neural networks (DCNNs) to extract more abstract semantic information, but we believe that the detailed information of the spatial structure is more important for stereo matching tasks.
Based on this point of view, this paper proposes a shallow feature extraction network with a large receptive field.
The network consists of three parts: a primary feature extraction module, an atrous spatial pyramid pooling (ASPP) module and a feature fusion module.
The primary feature extraction network contains only three convolution layers.
This network utilizes the basic feature extraction ability of the shallow network to extract and retain the detailed information of the spatial structure.
In this paper, the dilated convolution and atrous spatial pyramid pooling (ASPP) module is introduced to increase the size of receptive field.
In addition, a feature fusion module is designed, which integrates the feature maps with multiscale receptive fields and mutually complements the feature information of different scales.
We replaced the feature extraction part of the existing stereo matching algorithms with our shallow feature extraction network, and achieved state-of-the-art performance on the KITTI 2015 dataset.
Compared with the reference network, the number of parameters is reduced by 42%, and the matching accuracy is improved by 1.9%.
Since the introduction of deep learning in the computer vision field, increasing the network depth (that is, the number of layers in the network) seems to be a necessary means to improve the feature extraction ability.
Taking the object classification task as an example, as the network depth increases from the 8-layer network AlexNet (Krizhevsky et al., 2012) to the 16-layer network VGG (Simonyan & Zisserman, 2014) and to the 101-layer network ResNet (He et al., 2015) , the classification accuracy constantly improves.
There are two purposes of the deep network.
First, the deep network can improve the ability to extract abstract features (Zeiler & Fergus, 2013) , which are important for some vision tasks, such as object detection (Girshick, 2015; Ren et al., 2017) and classification.
For example, for objects such as cups, their colors, shapes and sizes may be different, and they cannot be accurately identified using only these primary feature information.
Therefore, the feature extraction network must have the ability to extract more abstract semantic information.
Second, the deep feature extraction network can obtain a larger receptive field to learn more context information (Luo et al., 2017; Liu et al., 2018) .
With the increase in the number of network layers, the size of the receptive field is also constantly increasing.
In particular, after image sampling using a pooling operation, even the 3*3 convolution kernel has the ability to extract context information.
Many studies (Zeiler & Fergus, 2013; Yu & Koltun, 2016) have shown that the lower part of the convolution neural network mainly extracts primary features, such as the edges and corners, while the higher part can extract more abstract semantic information.
However, many basic vision tasks rely more on basic feature information instead of the high-level abstract features.
Stereo matching is one of the basic vision tasks.
In the traditional stereo matching algorithm (Scharstein & Szeliski, 2002) , the color similarity metrics of pixels are usually used to calculate the matching costs between the left and right images to find the matching points in the two images.
After the introduction of deep learning, more robust feature information can be obtained through training and learning, which can effectively improve the performance of the stereo matching algorithm.
At present, many excellent stereo matching algorithms based on deep learning, such as the GC-Net (Kendall et al., 2017) , PSMNet (Chang & Chen, 2018) and GwcNet (Guo et al., 2019) , generally adopt similar processes, including feature extraction, matching cost volume construction, 3D convolution and disparity regression.
This paper focuses on the feature extraction steps.
The stereo matching task has two requirements for the feature extraction network.
The first requirement is the enlargement of the receptive field as far as possible so that the network can obtain more context information, which is critical to solving the mismatching problems in the discontinuous disparity area.
Because a larger receptive field can learn the relationships between different objects, even if there are problems, such as occlusion or inconsistent illumination, the network can use the context information to infer disparity and improve the stereo matching accuracy in the ill-posed regions.
The second requirement is the maintenance of more details of the spatial structure, which can improve the matching accuracy of many small structures, such as railings, chains, traffic signs and so on.
The existing feature extraction networks usually use a deep convolution neural network to obtain a larger receptive field and extract more abstract semantic information.
In this process, with the increase of the network layers and the compression of the image size, substantial detailed information of the spatial structure is inevitably lost.
We believe that compared with the abstract semantic information that is extracted by a deep network, the detailed information of the spatial structure is more important to improving the stereo matching accuracy.
Based on this point of view, this paper proposes a novel structure of feature extraction network -a shallow feature extraction network.
Unlike the common feature extraction network (with ResNet-50 as the backbone), in this paper, the backbone of the feature extraction network only has 3 convolution layers, and the image is only downsampled once in the first convolution layer to compress the size of the image.
This structure retains more details of the spatial structure and pays more attention to primary features such as the edges and corners of objects, while abandoning more abstract semantic information.
To solve the problem that the size of the receptive field of the shallow structure is limited, this paper introduces the atrous spatial pyramid pooling (ASPP) module .
The ASPP module uses the dilated convolution to increase the receptive field size without increasing the number of parameters.
In addition, the convolution layers with different dilation rate can obtain feature maps with multiscale receptive fields.
The large receptive fields can be used to obtain context information and to solve the problem of mismatching in ill-posed regions, and the small receptive fields can be used to retain more detailed information of the spatial structure and to improve the stereo matching accuracy in local areas.
To integrate feature maps with multiscale receptive fields, this paper designs the feature fusion module and introduces the channel attention mechanism (Jie et al., 2017) .
We assign different weights to feature maps with different dilation rates in the channel dimensions.
The weights are acquired through learning, and more weight and attention are given to the feature channels with greater roles.
The advantages of a shallow feature extraction network with a large receptive field are twofold.
One advantage is that the network can meet the two requirements of the stereo matching task for the feature extraction network.
On the basis of ensuring the large receptive field, more details of the spatial structure are retained.
The other advantage is that the network greatly reduces the number of parameters and the difficulties of network training and deployment.
The feature extraction network that is designed in this paper is used to replace the feature extraction part of the existing stereo matching network, and state-of-the-art performance is achieved on the KITTI2015 dataset (Geiger, 2012) .
Compared with the reference network, the number of parameters is reduced by 42%, and the matching accuracy is improved by 1.9%.
The main contributions of this paper are as follows.
• A shallow feature extraction network is proposed to extract and retain more details of the spatial structure.
This network can improve the stereo matching accuracy with fewer parameters.
• The dilated convolution and ASPP module are introduced to enlarge the receptive field.
We verify the effect of the dilated convolution on the receptive field using mathematics and experiments.
• A feature fusion module, which integrates the feature maps with multiscale receptive fields, is designed and realizes the mutual complementary feature information of different scales.
Focusing on the feature extraction part of a stereo matching network, this paper proposes a novel network structure, which abandons the popular deep convolution neural network and uses a shallow network structure to extract and retain more basic feature information.
To solve the problem that the receptive field of a shallow network is limited, this paper introduces the ASPP module and obtains multiscale receptive fields by adding convolution branches with different dilation rates.
By using the feature fusion module, the feature maps with multiscale receptive fields are fused together to solve the information loss problem that is caused by dilated convolution.
Finally, a large and dense receptive field is obtained.
The shallow feature extraction network with a large receptive field can provide more suitable feature information for stereo matching task, with fewer parameters and lower training difficulty.
Using the SWNet to replace the feature extraction part of the existing network can effectively improve the stereo matching accuracy.
A APPENDIX Figure 4 : Schematic diagram of neurons corresponding to receptive fields.
To clearly explain the calculation process of the theoretical receptive field and effective receptive field, the 2D convolution neural network is simplified into a 1D neural network similar to multilayer perceptron (MLP).
The connection relationship between its neurons is shown in Figure 4 , where each circle represents one neuron.
Limited by the size of the image, only half of the receptive field of the neuron is shown.
The receptive field of the neuron in layer 0 (input layer) is 1, that is r 0 = 1.
The receptive field of the neuron in layer 1 is r 1 = r 0 × k 1 = 1 × 3 = 3.
The receptive field of neurons in layer 2 is r 2 = r 1 × k 2 = 3 × 3 = 9 , but since neurons are not independent of each other, there are overlaps between their receptive fields, so the overlaps must be subtracted when calculating the size of the receptive field.
The number of neurons in the overlapping part is related to the kernel size and the convolution stride.
As shown in Figure 4 , the kernel size of the neurons in layer 2 is three.
Then there are two overlaps in the corresponding receptive field, and the number of neurons that is contained in each overlaps is one.
Therefore, the number of neurons that is contained in all overlaps is as follows.
Then the size of receptive field of neuron in layer 2 should be modified as
It is worth noting that, in the convolution neural network, as the number of convolution layers increases, the impact of convolution stride is cumulative.
Therefore, the size of the receptive field of the neuron in layer n should be formulated as
For dilated convolution, the kernel size should be modified as
By substituting formula (10) into formula (9), the size of the theoretical receptive field of the dilated convolution can be calculated as
For the size of the effective receptive field, this paper only studies the case when the convolution stride is smaller than the kernel size, which is k n > s n .
As shown in Figure 4 , the kernel of the neuron in layer 3 is dilated, and the information of some low-level neurons will not be transmitted to the neuron in layer 3, which are called invalid neurons (black circles in Figure 4 ).
The maximum number of continuous invalid neurons in layer 2 is the dilation rate of layer 3 minus 1, which is p 2 = d 3 − 1 = 5 − 1 = 4 .
The maximum number of continuously invalid neurons in layer 0-1 is related to the connection relationship between network layers.
To describe this relationship, this paper introduces the concepts of exclusive subneurons and shared subneurons.
Subneurons refer to the low-level neurons that are directly connected to the neurons in higher layers.
As shown in Figure 4 , the green neurons are the subneurons of purple neurons, while the black neurons are not.
An exclusive subneuron refers to the only sub-neuron in layer (n-1) that is connected to a neuron in layer n.
As shown in Figure 4 , the red neurons are the exclusive subneurons of the yellow neurons.
Under the 1D condition, each neuron has two adjacent neurons, and there is overlap between the subneurons of every two neurons.
Therefore, the number of exclusive subneurons of a neuron in layer n can be calculated as
However, the number of exclusive subneurons should be non-negative, with a minimum value of 0.
Therefore, a non-negative constraint is added to formula (12)
Therefore, if one neuron in layer n fails, it will directly lead to the failure of N n subneurons in layer (n-1).
A shared subneuron refers to the subneuron that is connected with multiple neurons in higher layers.
As shown in Figure 4 , the blue neurons are the shared neurons of the yellow neurons.
A shared subneuron in layer (n-1) is connected to M n neurons in layer n.
In other words, if there are M n continuously invalid neurons in layer n, there will be one invalid neuron in layer (n-1).
The calculation method of M n is M n = k n − s n + 1
Comprehensively considering the exclusive subneurons and shared subneurons, when there are p n invalid neurons in layer n, the number of invalid neurons in layer (n-1) is p n−1 = p n N n + (p n − M n + 1) = p n (N n + 1) − M n + 1
If the invalid neuron in layer n is directly caused by the dilated convolution, the number of invalid neurons in layer n is p n = d n+1 − 1
As shown in Figure 4 , the number of invalid neurons in layer 2 is p 2 = d 3 − 1 = 5 − 1 = 4 .
The numbers of invalid neurons in layer 1 and 0 are p 1 = 4 × (0 + 1) − 3 + 1 = 2 and p 0 = 2 × (1 + 1) − 2 + 1 = 3, respectively.
The size of the effective receptive field should be the size of theoretical receptive field minus the number of invalid neurons in layer 0.
The calculation method is shown in formula (17): r′ n = r n − p 0 (k n − 1)
B APPENDIX K denotes the convolution kernel size, C denotes the number of output channels, S denotes the convolution stride, D denotes the dilation rate, BN denotes the batch normalization layer, ReLU denotes the activation layer, H denotes the height of the image and W denotes the width of the image.
Concat stands for the concatenation operation of feature maps, and SElayer stands for assigning weights to each feature map. | We introduced a shallow featrue extraction network with a large receptive field for stereo matching tasks, which uses a simple structure to get better performance. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:817 |
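The appendix in the entry above walks through how kernel size, stride, and dilation determine the theoretical receptive field. The helper below implements the standard recursion for a stack of (kernel, stride, dilation) layers; it reflects the usual textbook formula and is meant as a sanity-check tool under those assumptions, not as the authors' code, and the example dilation rates are illustrative.

```python
def receptive_field(layers):
    """Theoretical receptive field of a stack of conv layers.

    layers: sequence of (kernel_size, stride, dilation) tuples, input to output.
    Uses r_n = r_{n-1} + d_n * (k_n - 1) * (product of strides of earlier layers).
    """
    rf, jump = 1, 1  # jump = product of strides seen so far
    for kernel, stride, dilation in layers:
        rf += dilation * (kernel - 1) * jump
        jump *= stride
    return rf

if __name__ == "__main__":
    # A shallow 3-layer backbone followed by ASPP-style dilated 3x3 branches.
    backbone = [(3, 2, 1), (3, 1, 1), (3, 1, 1)]
    for rate in (1, 6, 12, 18):
        print(rate, receptive_field(backbone + [(3, 1, rate)]))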
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We propose a new output layer for deep neural networks that permits the use of logged contextual bandit feedback for training.
Such contextual bandit feedback can be available in huge quantities (e.g., logs of search engines, recommender systems) at little cost, opening up a path for training deep networks on orders of magnitude more data.
To this effect, we propose a Counterfactual Risk Minimization (CRM) approach for training deep networks using an equivariant empirical risk estimator with variance regularization, BanditNet, and show how the resulting objective can be decomposed in a way that allows Stochastic Gradient Descent (SGD) training.
We empirically demonstrate the effectiveness of the method by showing how deep networks -- ResNets in particular -- can be trained for object recognition without conventionally labeled images.
Log data can be recorded from online systems such as search engines, recommender systems, or online stores at little cost and in huge quantities.
For concreteness, consider the interaction logs of an ad-placement system for banner ads.
Such logs typically contain a record of the input to the system (e.g., features describing the user, banner ad, and page), the action that was taken by the system (e.g., a specific banner ad that was placed) and the feedback furnished by the user (e.g., clicks on the ad, or monetary payoff).
This feedback, however, provides only partial information -"contextual-bandit feedback" -limited to the actions taken by the system.
We do not get to see how the user would have responded, if the system had chosen a different action (e.g., other ads or banner types).
Thus, the feedback for all other actions the system could have taken is typically not known.
This makes learning from log data fundamentally different from traditional supervised learning, where "correct" predictions and a loss function provide feedback for all actions. In this paper, we propose a new output layer for deep neural networks that allows training on logged contextual bandit feedback.
By circumventing the need for full-information feedback, our approach opens a new and intriguing pathway for acquiring knowledge at unprecedented scale, giving deep neural networks access to this abundant and ubiquitous type of data.
Similarly, it enables the application of deep learning even in domains where manually labeling full-information feedback is not viable.In contrast to online learning with contextual bandit feedback (e.g., BID11 BID0 ), we perform batch learning from bandit feedback (BLBF) BID1 BID5 and the algorithm does not require the ability to make interactive interventions.
At the core of the new output layer for BLBF training of deep neural networks lies a counterfactual training objective that replaces the conventional cross-entropy objective.
Our approach -called BanditNet -follows the view of a deep neural network as a stochastic policy.
We propose a counterfactual risk minimization (CRM) objective that is based on an equivariant estimator of the true error that only requires propensity-logged contextual bandit feedback.
This makes our training objective fundamentally different from the conventional cross-entropy objective for supervised classification, which requires full-information feedback.
Equivariance in our context means that the learning result is invariant to additive translations of the loss, and it is more formally defined in Section 3.2.
To enable large-scale training, we show how this training objective can be decomposed to allow stochastic gradient descent (SGD) optimization. In addition to the theoretical derivation of BanditNet, we present an empirical evaluation that verifies the applicability of the theoretical argument.
It demonstrates how a deep neural network architec-ture can be trained in the BLBF setting.
In particular, we derive a BanditNet version of ResNet (He et al., 2016) for visual object classification.
Despite using potentially much cheaper data, we find that Bandit-ResNet can achieve the same classification performance given sufficient amounts of contextual bandit feedback as ResNet trained with cross-entropy on conventionally (full-information) annotated images.
To easily enable experimentation on other applications, we share an implementation of BanditNet.
2 RELATED WORK Several recent works have studied weak supervision approaches for deep learning.
Weak supervision has been used to pre-train good image features (Joulin et al., 2016) and for information retrieval BID3 .
Closely related works have studied label corruption on CIFAR-10 recently BID12 .
However, all these approaches use weak supervision/corruption to construct noisy proxies for labels, and proceed with traditional supervised training (using crossentropy or mean-squared-error loss) with these proxies.
In contrast, we work in the BLBF setting, which is an orthogonal data-source, and modify the loss functions optimized by deep nets to directly implement risk minimization. Virtually all previous methods that can learn from logged bandit feedback employ some form of risk minimization principle BID9 over a model class.
Most of the methods BID1 BID2 BID5 employ an inverse propensity scoring (IPS) estimator (Rosenbaum & Rubin, 1983) as empirical risk and use stochastic gradient descent (SGD) to optimize the estimate over large datasets.
Recently, the self-normalized estimator BID8 has been shown to be a more suitable estimator for BLBF BID7.
The self-normalized estimator, however, is not amenable to stochastic optimization and scales poorly with dataset size.
In our work, we demonstrate how we can efficiently optimize a reformulation of the self-normalized estimator using SGD.Previous BLBF methods focus on simple model classes: log-linear and exponential models (Swaminathan & Joachims, 2015a) or tree-based reductions BID1 ).
In contrast, we demonstrate how current deep learning models can be trained effectively via batch learning from bandit feedback (BLBF), and compare these with existing approaches on a benchmark dataset (Krizhevsky & Hinton, 2009 ).Our
work, together with independent concurrent work BID4 , demonstrates success with off-policy variants of the REINFORCE BID11 algorithm. In
particular, our algorithm employs a Lagrangian reformulation of the self-normalized estimator, and the objective and gradients of this reformulation are similar in spirit to the updates of the REINFORCE algorithm. This
connection sheds new light on the role of the baseline hyper-parameters in REINFORCE: rather than simply reduce the variance of policy gradients, our work proposes a constructive algorithm for selecting the baseline in the off-policy setting and it suggests that the baseline is instrumental in creating an equivariant counterfactual learning objective.
We proposed a new output layer for deep neural networks that enables the use of logged contextual bandit feedback for training.
This type of feedback is abundant and ubiquitous in the form of interaction logs from autonomous systems, opening up the possibility of training deep neural networks on unprecedented amounts of data.
In principle, this new output layer can replace the conventional cross-entropy layer for any network architecture.
We provide a rigorous derivation of the training objective, linking it to an equivariant counterfactual risk estimator that enables counterfactual risk minimization.
Most importantly, we show how the resulting training objective can be decomposed and reformulated to make it feasible for SGD training.
We find that the BanditNet approach applied to the ResNet architecture achieves predictive accuracy comparable to conventional full-information training for visual object recognition. The paper opens up several directions for future work.
First, it enables many new applications where contextual bandit feedback is readily available.
Second, in settings where it is infeasible to log propensity-scored data, it would be interesting to combine BanditNet with propensity estimation techniques.
Third, there may be improvements to BanditNet, like smarter search techniques for S, more efficient counterfactual estimators beyond SNIPS, and the ability to handle continuous outputs.
DISPLAYFORM0 If the optimaŵ a andŵ b are not equivalent in the sense thatR DISPLAYFORM1 where g(w
) corresponds to the value of the control variate S. Sinceŵ a andŵ b are not equivalent optima, we know that DISPLAYFORM2 Adding the two inequalities and solving implies that DISPLAYFORM3 B APPENDIX: CHARACTERIZING THE RANGE OF S TO EXPLORE.Theorem 2. Let
p ≤ π 0 (y | x)
be a lower bound on the propensity for the logging policy, then constraining the solution of Eq. (11) to the w with control variate S ∈ [1 − , 1 + ] for a training set of size n will not exclude the minimizer of the true risk w * = arg min w∈W R(π w ) in the policy space W with probability at least DISPLAYFORM4 Proof. For
the optimal w * , let DISPLAYFORM5 be the control variate in the denominator of the SNIPS estimator. S is
a random variable that is a sum of bounded random variables between 0 and DISPLAYFORM6 We can bound the probability that the control variate S of the optimum w * lies outside of [1− , 1+ ] via Hoeffding's inequality: DISPLAYFORM7 The same argument applies to any individual policy π w , not just w * . Note
, however, that it can still be highly likely that at least one policy π w with w ∈ W shows a large deviation in the control variate for high-capacity W , which can lead to propensity overfitting when using the naive IPS estimator. Suppose
we have a dataset of n BLBF samples D = {(x 1 , y 1 , δ 1 , p 1 ) . . . (x n , y n , δ n , p n )} where each instance is an i.i.d. sample from the data generating distribution. In the
sequel we will be considering two datasets of n + 1 samples, D = D ∪ {(x , y , δ , p )} and D = D ∪ {(x , y , δ , p )} where (x , y , δ , p ) = (x , y , δ , p ) and (x , y , δ , p ), (x , y , δ , p ) / ∈ D.For notational convenience, let DISPLAYFORM8 π0(yi|xi) , andġ i := ∇ w g i . First
consider the vanilla IPS risk estimate of Eq. (5). DISPLAYFORM9
To maximize this estimate using stochastic optimization, we must construct an unbiased gradient estimate. That is, we
randomly select one sample from D and compute a gradient α((x i , y i , δ i , p i )) and we require that DISPLAYFORM10 Here the expectation is over our random choice of 1 out of n samples. Observe that
α((x i , y i , δ i , p i )) =ḟ i suffices (and indeed, this corresponds to vanilla SGD): DISPLAYFORM11 Other choices of α(·) can also produce unbiased gradient estimates, and this leads to the study of stochastic variance-reduced gradient optimization. Now let us attempt
to construct an unbiased gradient estimate for the SNIPS estimate of Eq. (8), R̂_SNIPS(π_w) = (Σ_i f_i)/(Σ_i g_i). Suppose such a gradient estimate exists, β((x_i, y_i, δ_i, p_i)). Then, E[β((x_i, y_i, δ_i, p_i))] = ∇_w R̂_SNIPS(π_w).
This identity is true for any sample of BLBF instances; in particular, for D′ and D″:
∇_w R̂_SNIPS(π_w; D′) = 1/(n + 1) Σ_{i=1..n} β((x_i, y_i, δ_i, p_i)) + β((x′, y′, δ′, p′))/(n + 1),
∇_w R̂_SNIPS(π_w; D″) = 1/(n + 1) Σ_{i=1..n} β((x_i, y_i, δ_i, p_i)) + β((x″, y″, δ″, p″))/(n + 1).
Subtracting these two equations,
∇_w R̂_SNIPS(π_w; D′) − ∇_w R̂_SNIPS(π_w; D″) = [β((x′, y′, δ′, p′)) − β((x″, y″, δ″, p″))]/(n + 1).
The LHS clearly depends on {(x_i, y_i, δ_i, p_i)}_{i=1..n} in general, while the RHS does not! This contradiction indicates
that no construction of β that only looks at a sub-sample of the data can yield an unbiased gradient estimate of R̂_SNIPS(π_w). | The paper proposes a new output layer for deep networks that permits the use of logged contextual bandit feedback for training. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:818 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Protein classification deals with biological sequences; we came up with an idea which addresses the classification of proteomics using a deep learning algorithm.
This algorithm focuses mainly on classifying sequences of protein vectors, which are used for the representation of proteomics. Selecting the type of protein representation is challenging, since the resulting accuracy depends on it. The protein representations used here are n-grams (specifically 3-grams) and Keras embeddings for biological sequences such as proteins.
In this paper we work on protein classification to show the strength of the representation of the biological sequences of proteins.
The human body comprises many cells. Key to the formation of all of these are DNA (deoxyribonucleic acid), a thread-like chain of nucleotides responsible for carrying the genetic instructions for the development and functioning of organisms in the body, and RNA (ribonucleic acid), a polymeric molecule essential for the biological roles of coding, decoding, regulation and expression (CDRE) of each and every gene; both are present in every living being.
Just as human beings use various languages for communication, biological organisms use these types of codes, in the form of DNA and RNA, for communication.
Selection of the type of feature extraction is a challenging task because it helps the machine learning algorithm study the types of genes; even the most highly sophisticated algorithm would go wrong if the feature extraction is not done in a proper form.
The features from the existing data can be obtained manually or in an unsupervised (without labels) fashion BID0 , BID1 .
This work focuses on protein family classification with the publicly available Swiss-Prot data BID2 .
In this work, Keras embedding and the n-gram technique are applied to map the protein sequences into numeric vectors, followed by traditional machine learning and deep neural network (DNN) models for classification. The rest of the paper is organized as follows.
Section 2 discusses the related works in a detailed manner.
Section 3 presents the background works.
Section 4 presents the description of the data set.
Section 5 gives an overview of the proposed architecture used in building this work.
Sections 6 and 7 present results, conclusions, future work directions and discussions.
This paper has proposed a deep learning method for protein classification.
To transform proteins into numeric vectors, the n-gram and Keras embedding representations are used.
The deep learning method with Keras embedding has performed well in comparison to the n-gram representation with a deep neural network.
The main reason is that Keras embedding has the capability to preserve the sequential information within the protein sequences.
Thus, the deep learning algorithms are able to capture the optimal information from the syntactic and semantic content of the Keras embedding vectors.
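As a small illustration of the two representations discussed above, the following sketch turns a protein sequence into overlapping 3-grams and feeds the indexed 3-grams to a Keras Embedding layer; the vocabulary size, embedding dimension and number of families are hypothetical choices, not the configuration used in this work.

from tensorflow import keras

def to_3grams(sequence):
    # e.g. "MKVLAAGIV" -> ["MKV", "KVL", "VLA", ...]
    return [sequence[i:i + 3] for i in range(len(sequence) - 2)]

grams = to_3grams("MKVLAAGIV")
vocab = {g: i + 1 for i, g in enumerate(sorted(set(grams)))}  # toy vocabulary, index 0 reserved for padding
ids = [[vocab[g] for g in grams]]

model = keras.Sequential([
    keras.layers.Embedding(input_dim=20 ** 3 + 1, output_dim=32),  # 20^3 possible amino-acid 3-grams (assumed)
    keras.layers.GlobalAveragePooling1D(),
    keras.layers.Dense(10, activation="softmax"),  # 10 = assumed number of protein families
])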
The proposed methodology can be employed for other domains in biology, such as genomics and DNA classification.
This is one of the significant directions for future work. | Protein Family Classification using Deep Learning | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:819 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Semmelhack et al. (2014) have achieved high classification accuracy in distinguishing swim bouts of zebrafish using a Support Vector Machine (SVM).
Convolutional Neural Networks (CNNs) have reached superior performance in various image recognition tasks over SVMs, but these powerful networks remain a black box.
Reaching better transparency helps to build trust in their classifications and makes learned features interpretable to experts.
Using a recently developed technique called Deep Taylor Decomposition, we generated heatmaps to highlight input regions of high relevance for predictions.
We find that our CNN makes predictions by analyzing the steadiness of the tail's trunk, which markedly differs from the manually extracted features used by Semmelhack et al. (2014).
We further uncovered that the network paid attention to experimental artifacts.
Removing these artifacts ensured the validity of predictions.
After correction, our best CNN beats the SVM by 6.12%, achieving a classification accuracy of 96.32%.
Our work thus demonstrates the utility of AI explainability for CNNs.
In the study by Semmelhack et al. (2014) , a well-performing classifier allowed to correlate neural interventions with behavioral changes.
Support Vector Machines (SVMs) were commonly applied to such classification tasks, relying on feature engineering by domain experts.
In recent years, Convolutional Neural Networks (CNNs) have proven to reach high accuracies in classification tasks on images and videos reducing the need for manual feature engineering.
After Lecun & Bengio (1995) introduced them in the 90s, CNNs had their breakthrough in the ILSVRC2012 competition with the AlexNet architecture.
Since then, more and more sophisticated architectures have been designed enabling them to identify increasingly abstract features.
This development has become possible due to the availability of larger training sets, computing resources, GPU training implementations, and better regularization techniques, such as Dropout ; Zeiler & Fergus (2014) ).
While these more complex deep neural network architectures achieved better results, they also kept their learnt features hidden if not further analyzed.
This caused CNNs to come with significant drawbacks: a lack of trust in their classifications, missing interpretability of learned features in the application domain, and the absence of hints as to what data could enhance performance (Molnar (2019) ).
Explaining the decisions made by CNNs might even become a legal requirement in certain applications (Alber et al. (2018) ).
In order to overcome these drawbacks, subsequent research has developed approaches to shed light on the inner workings of CNNs.
These approaches have been successfully used for uncovering how CNNs might learn unintended spurious correlations, termed "Clever Hans" predictions (Lapuschkin et al. (2019) ).
Such predictions could even become harmful if the predictions entailed decisions with severe consequences (Leslie (2019) ).
Also, since deep neural networks have become a popular machine learning technique in applied domains, spurious correlations would undermine scientific discoveries.
This paper focuses on zebrafish research as an applied domain of AI explainability, considering that the research community around this organism has grown immensely.
The zebrafish is an excellent model organism for vertebrates, including humans, due to the following four reasons: The genetic codes of humans and zebrafish are about 70% orthologue (Howe et al. (2013) ).
The fish are translucent which allows non-invasive observation of changes in the organism (Bianco et al. (2011) ).
Furthermore, zebrafish are relatively cheap to maintain, produce plenty of offspring, and develop rapidly.
Finally, they are capable of recovering their brain structures within days after brain injury (Kishimoto et al. (2011) ; Kizil et al. (2012) ).
In this paper, we adapt CNNs to work on highly controlled zebrafish video recordings and show the utility of a recently developed AI explainability technique on this task.
We train the network on optical flow for binary classifying swim bouts and achieve superior performance when compared to the current state-of-the-art in bout classification (Semmelhack et al. (2014) ).
We then create heatmaps over the videos with the "iNNvestigate" toolbox (Alber et al. (2018) ) which highlight the areas that our CNN pays attention to when making a prediction.
The resulting heatmaps show that our CNN learns reasonable features which are very different from those manually composed by Semmelhack et al. (2014) .
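For readers who want to reproduce this kind of analysis, the following is a rough sketch of generating a Deep Taylor relevance heatmap with the iNNvestigate toolbox (assuming its version 1.x Keras API); the tiny placeholder model and the input shape stand in for the two-stream network used here and are not the actual configuration.

import numpy as np
import keras
import innvestigate
import innvestigate.utils as iutils

# Placeholder classifier standing in for the trained temporal stream.
model = keras.models.Sequential([
    keras.layers.Conv2D(16, 3, activation="relu", input_shape=(224, 224, 20)),
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(2, activation="softmax"),
])

model_wo_softmax = iutils.model_wo_softmax(model)               # strip softmax before analysis
analyzer = innvestigate.create_analyzer("deep_taylor", model_wo_softmax)

flow_stack = np.random.rand(1, 224, 224, 20).astype("float32")  # placeholder optical-flow input
relevance = analyzer.analyze(flow_stack)                        # relevance scores, same shape as input
heatmap = relevance.sum(axis=-1)                                # collapse channels for visualization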
We trained a two-stream Convolutional Neural Network (CNN) on recordings of larval zebrafish to classify prey and spontaneous swim bouts.
We then visualized the learned weights by generating relevance heatmaps showing which regions of the input the network focuses on while performing its classifications.
We find that our CNN is capable of learning highly discriminating tail features.
These features seem to be quite different from the ones used in the SVM classification by Semmelhack et al. (2014) -the previous state-of-the-art in bout classification.
The heatmaps further uncovered a "Clever Hans" type of correlation.
After removing this spurious correlation and retraining the network, the network reached a test accuracy of 96.32%, which is 6.12% points better than the accuracy achieved by Semmelhack et al. (2014) .
Judging from the test accuracy, our CNN has learned better discriminating features than those used for the SVM by Semmelhack et al. (2014), and has thus beaten manual feature engineering in this application domain.
Steadiness of the fish's trunk as differentiating feature.
The relevance heatmaps and high accuracy show that the network achieves correct classifications by looking for salient features in the trunk of the tail while largely disregarding the tip.
A sharp and clear relevance profile confined to the edges of the trunk gives a clear sign of a prey bout.
The opposite speaks for a spontaneous bout.
Here, attention spreads out to capture the strong vertical oscillation of the trunk.
For this reason we conclude that the CNN makes its predictions based on the steadiness of the trunk.
We believe our interpretation of learned features to be in line with existing research on the kinematics of prey bouts.
As shown by Borla et al. (2002) and McElligott & O'Malley (2005) , prey bouts require fine control of the tail's axial kinematics to perform precise swim movements.
Zebrafish noticeably reduce their yaw rotation and stabilize the positioning of their head to make a targeted move at their prey.
Such precise movements are not required in spontaneous swim bouts.
The heatmaps indicate that the network has found clear evidence for these kinds of motion in the trunk of the tail.
Furthermore, we argue that the CNN has learned features which are very different from the ones identified by Semmelhack et al. (2014) .
All of their features -as outlined in Section 2 -, except the second one, rely on information from the tip of the tail and a complete sequence of frames.
However, many optical flow frames do not depict the tip of the tail because of its small size and high speed.
This might have happened due to suboptimal parameter settings which could not handle the sometimes long distances which the tip traveled between frames.
Also, subsamples include only 85 of the original 150 frames for each video.
Due to its higher performance, we conclude not only that the CNN has learned a different set of features, but also that these features must bear higher discriminative power.
Origin of the "Clever Hans" correlation.
The telltale motion in the top left corner stems from a substance called agarose, which the fish's head was embedded in to keep it steady.
It is quite curious that, while not visible to human eyes, the agarose seems to be moving each time the fish performed a spontaneous swim bout, but not so for a prey bout.
We speculate that this correlation was unintentionally introduced by the experimenters who might have tapped the petri dish to induce the fish to perform a spontaneous swim bout.
Future work.
Calculating and storing optical flow is expensive.
If we attained similar performance on original frames, training would be considerably cheaper.
While we can confirm the finding that the spatial stream by itself reaches a fairly competitive accuracy, it provides only very minor improvement to the overall network.
Yet, this stream is probably looking for very similar features as the temporal stream, because it focuses largely on the upper half of the tail, just like the temporal stream.
If that is the case, we should see improved performance when giving the spatial stream a sequence of frames.
It should be interesting to probe whether the spatial stream could then match or even surpass the performance of the temporal stream.
Furthermore, CNNs such as the one used in this paper could be used to investigate brain recovery in larval zebrafish.
It has been shown on a cellular level that zebrafish can heal their brain within days after a lesion.
However, this needs to be proven on a behavioral level (Krakauer et al. (2017) ).
Future work could perform a lesion study on the optic tectum in zebrafish (McDowell et al. (2004) ; Roeser & Baier (2003) ), a brain region responsible for translating visual input into motor output.
CNNs could then assess swim bouts of recovered fish and give a measure for potential behavioral changes.
Insights from relevance heatmaps would be required if the CNN were not able not distinguish recovered fish from healthy ones. | We demonstrate the utility of a recent AI explainability technique by visualizing the learned features of a CNN trained on binary classification of zebrafish movements. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:82 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
In this work, we attempt to answer a critical question: whether there exists some input sequence that will cause a well-trained discrete-space neural network sequence-to-sequence (seq2seq) model to generate egregious outputs (aggressive, malicious, attacking, etc.).
And if such inputs exist, how to find them efficiently.
We adopt an empirical methodology, in which we first create lists of egregious output sequences, and then design a discrete optimization algorithm to find input sequences that will cause the model to generate them.
Moreover, the optimization algorithm is enhanced for large vocabulary search and constrained to search for input sequences that are likely to be input by real-world users.
In our experiments, we apply this approach to dialogue response generation models trained on three real-world dialogue data-sets: Ubuntu, Switchboard and OpenSubtitles, testing whether the model can generate malicious responses.
We demonstrate that given the trigger inputs our algorithm finds, a significant number of malicious sentences are assigned large probability by the model, which reveals an undesirable consequence of standard seq2seq training.
Recently, research on adversarial attacks BID5 BID19 has been gaining increasing attention: it has been found that for trained deep neural networks (DNNs), when an imperceptible perturbation is applied to the input, the output of the model can change significantly (from correct to incorrect).
This line of research has serious implications for our understanding of deep learning models and how we can apply them securely in real-world applications.
It has also motivated researchers to design new models or training procedures BID12 , to make the model more robust to those attacks.For continuous input space, like images, adversarial examples can be created by directly applying gradient information to the input.
Adversarial attacks for discrete input space (such as NLP tasks) is more challenging, because unlike the image case, directly applying gradient will make the input invalid (e.g. an originally one-hot vector will get multiple non-zero elements).
Therefore, heuristics like local search and projected gradient need to be used to keep the input valid.
Researchers have demonstrated that both text classification models BID4 or seq2seq models (e.g. machine translation or text summarization) BID2 BID0 are vulnerable to adversarial attacks.
All these efforts focus on crafting adversarial examples that carry the same semantic meaning as the original input, but cause the model to generate wrong outputs. In this work, we take a step further and consider the possibility of the following scenario: Suppose you're using an AI assistant which you know is a deep learning model trained on large-scale high-quality data; after you input a question, the assistant replies: "You're so stupid, I don't want to help you." We term this kind of output (aggressive, insulting, dangerous, etc.) an egregious output.
Although it may seem sci-fi and far-fetched at first glance, when considering the black-box nature of deep learning models, and more importantly, their unpredictable behavior with adversarial examples, it is difficult to verify that the model will not output malicious things to users even if it is trained on "friendly" data. In this work, we design algorithms and experiments attempting to answer the question: "Given a well-trained discrete-space neural seq2seq model, do there exist input sequences that will cause it to generate egregious outputs?"
We apply them to the dialogue response generation task.
There are two key differences between this work and previous works on adversarial attacks: first, we look for not only wrong, but egregious, totally unacceptable outputs; second, in our search, we do not require the input sequence to be close to an input sequence in the data: for example, no matter what the user inputs, a helping AI agent should not reply in an egregious manner. In this paper we'll follow the notations and conventions of seq2seq NLP tasks, but note that the framework developed in this work can be applied in general to any discrete-space seq2seq task.
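To make the search procedure more concrete, here is a highly simplified local-search sketch for finding a trigger input given a fixed target sentence; it is not the authors' exact optimization algorithm, and score_target, the vocabulary, and the sequence length are placeholders standing in for a trained seq2seq model's log-likelihood of the target response.

import random

def find_trigger(score_target, vocab, length=10, iters=500):
    # score_target(tokens) is assumed to return log p(target | tokens) under the trained model.
    tokens = [random.choice(vocab) for _ in range(length)]
    best = score_target(tokens)
    for _ in range(iters):
        pos = random.randrange(length)
        old = tokens[pos]
        tokens[pos] = random.choice(vocab)   # propose a single-token substitution
        new = score_target(tokens)
        if new > best:
            best = new                       # keep the improving substitution
        else:
            tokens[pos] = old                # otherwise revert
    return tokens, best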
In this work, we provide an empirical answer to the important question of whether well-trained seq2seq models can generate egregious outputs: we hand-craft a list of malicious sentences that should never be generated by a well-behaved dialogue response model, and then design an efficient discrete optimization algorithm to find trigger inputs for those outputs.
We demonstrate that, for models trained on popular real-world conversational data-sets, a large number of egregious outputs will be assigned a probability mass larger than "proper" outputs when some trigger input is fed into the model.
We believe this work is a significant step towards understanding the behavior of neural seq2seq models, and has important implications for applying seq2seq models in real-world applications.
First, in FIG1 we show an illustration of the forward process on the encoder side of the neural seq2seq model at time t, which serves as auxiliary material for Sections 2 and 3.1. | This paper aims to provide an empirical answer to the question of whether well-trained dialogue response model can output malicious responses. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:820 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
In the problem of unsupervised learning of disentangled representations, one of the promising methods is to penalize the total correlation of sampled latent variables.
Unfortunately, this well-motivated strategy often fails to achieve disentanglement due to a problematic difference between the sampled latent representation and its corresponding mean representation.
We provide a theoretical explanation that low total correlation of sample distribution cannot guarantee low total correlation of the mean representation.
We prove that for mean representations of arbitrarily high total correlation, there exist distributions of latent variables with bounded total correlation.
However, we still believe that total correlation could be a key to the disentanglement of unsupervised representation learning, and we propose a remedy, RTC-VAE, which rectifies the total correlation penalty.
Experiments show that our model has a more reasonable distribution of the mean representation compared with baseline models, e.g., β-TCVAE and FactorVAE.
VAEs (Variational AutoEncoders) Kingma & Welling (2013) ; Bengio et al. (2007) follow the common assumption that the high-dimensional real-world observations x can be re-generated by a lower-dimensional latent variable z which is semantically meaningful.
Recent works Kim & Mnih (2018) ; Chen et al. (2018) ; Kumar et al. (2017) suggest that decomposing the ELBO (Evidence Lower Bound) could lead to distinguishing the factor of disentanglement.
In particular, recent works Kim & Mnih (2018) ; Chen et al. (2018) focused on a term called total correlation (TC).
The popular belief Chen et al. (2018) is that by adding a weight to this term in the objective function, a VAE model can learn a disentangled representation.
This approach appears promising, since the total correlation of a sampled representation should describe the level of factorization: total correlation is defined as the KL-divergence between the joint distribution z ∼ q(z) and the product of marginal distributions Π_j q(z_j).
In this case, a low value suggests a less entangled joint distribution.
However, Locatello et al. (2018) pointed out that a low total correlation of the sampled distribution, TC_sample, does not necessarily give rise to a low total correlation of the corresponding mean representation, TC_mean.
Conventionally, the mean representation is used as the encoded latent variables, so an unnoticed high TC_mean is usually the culprit behind the undesirable entanglement.
Moreover, Locatello et al. (2018) found that as regularization strength increases, the total correlations of the sampled representation TC_sample and the mean representation TC_mean are actually negatively correlated.
Locatello et al. (2018) put doubts on most methods of disentanglement including penalizing the total correlation term Kim & Mnih (2018) ; Chen et al. (2018) , and they concluded that "the unsupervised learning of disentangled representations is fundamentally impossible without inductive biases".
Acknowledging the difficulty of learning disentangled representations, we provide a detailed explanation of the seemingly contradictory behaviors of the total correlations of sampled and mean representations in previous works on the TC-penalizing strategy.
Moreover, we find that the problem described above can be remedied simply with an additional penalty term on the variance of a sampled representation.
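As a rough sketch of what such a remedy can look like in practice, the following PyTorch-style loss adds a variance penalty on top of a TC-regularized VAE objective; the weights, the TC estimator, and the exact form of the penalty are illustrative assumptions rather than the precise formulation proposed here.

import torch

def rtc_style_loss(recon_loss, kl, tc_estimate, logvar, beta=6.0, gamma=1.0):
    # recon_loss, kl, tc_estimate: scalar tensors computed elsewhere in the training loop
    # logvar: log-variances of the encoder's posterior, shape (batch, latent_dim)
    var_penalty = logvar.exp().mean()      # pushes sampled representations toward their means
    return recon_loss + kl + beta * tc_estimate + gamma * var_penalty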
Our contributions:
• In Theorem 1, we prove that for all mean representations, there exists a large class of sample distributions with bounded total correlation.
Particularly, a mean representation with arbitrarily large total correlation can have a corresponding sample distribution with low total correlation.
This implies that a low total correlation of sample distribution cannot guarantee a low total correlation of the mean representation.
(Section. 2) • Acknowledging the issue above, we further delve into total correlation, and provide a simple remedy by adding an additional penalty term on the variance of sample distribution.
The penalty term forces a sampled representation to behave similarly to the corresponding mean representation.
Such a penalty term is necessary for the strategy of penalizing TC_mean in the view of Theorem 1.
(Section. 4) • We study several different methods of estimating total correlation.
They are compared and benchmarked against the ground truth value on the multivariate Gaussian distribution Locatello et al. (2018) .
We point out that the method of (minibatch) estimators suffers from the curse of dimensionality and other drawbacks, making its estimation accuracy decay significantly as the dimension of the latent space increases, and some strongly correlated distributions can be falsely estimated to have low total correlation.
(Section. 5)
In this work, we demonstrated that our RTC-VAE, which rectifies the total correlation penalty can remedy its peculiar properties (disparity between total correlation of the samples and the mean representations).
Our experiments show that our model has a more reasonable distribution of the mean representation compared with baseline models including β-TCVAE and FactorVAE.
We also provide several theoretical proofs which could help diagnose specific symptoms of entanglement.
Hopefully, our contributions could add to the explainability of the unsupervised learning of disentangled representations. | diagnosed all the problem of STOA VAEs theoretically and qualitatively | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:821 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Visual attention mechanisms have been widely used in image captioning models.
In this paper, to better link the image structure with the generated text, we replace the traditional softmax attention mechanism by two alternative sparsity-promoting transformations: sparsemax and Total-Variation Sparse Attention (TVmax).
With sparsemax, we obtain sparse attention weights, selecting relevant features.
In order to promote sparsity and encourage fusing of the related adjacent spatial locations, we propose TVmax.
By selecting relevant groups of features, the TVmax transformation improves interpretability.
We present results in the Microsoft COCO and Flickr30k datasets, obtaining gains in comparison to softmax.
TVmax outperforms the other compared attention mechanisms in terms of human-rated caption quality and attention relevance.
The goal of image captioning is to generate a fluent textual caption that describes a given image (Farhadi et al., 2010; Kulkarni et al., 2011; Vinyals et al., 2015; Xu et al., 2015) .
Image captioning is a multimodal task: it combines text generation with the detection and identification of objects in the image, along with their relations.
While neural encoder-decoder models have achieved impressive performance in many text generation tasks Vaswani et al., 2017; Chorowski et al., 2015; Chopra et al., 2016) , it is appealing to design image captioning models where structural bias can be injected to improve their adequacy (preservation of the image's information), therefore strengthening the link between their language and vision components.
State-of-the-art approaches for image captioning (Liu et al., 2018a; b; Anderson et al., 2018; Lu et al., 2018) are based on encoder-decoders with visual attention.
These models pay attention either to the features generated by convolutional neural networks (CNNs) pretrained on image recognition datasets, or to detected bounding boxes.
In this paper, we focus on the former category: visual attention over features generated by a CNN.
Without explicit object detection, it is up to the attention mechanism to identify relevant image regions, in an unsupervised manner.
A key component of attention mechanisms is the transformation that maps scores into probabilities, with softmax being the standard choice .
However, softmax is strictly dense, i.e., it devotes some attention probability mass to every region of the image.
Not only is this wasteful, it also leads to "lack of focus": for complex images with many objects, this may lead to vague captions with substantial repetitions.
Figure 1 presents an example in which this is visible: in the caption generated using softmax (top), the model attends to the whole image at every time step, leading to a repetition of "bowl of fruit."
This undesirable behaviour is eliminated by using our alternative solutions: sparsemax (middle) and the newly proposed TVMAX (bottom).
In this work, we introduce novel visual attention mechanisms by endowing them with a new capability: that of selecting only the relevant features of the image.
To this end, we first propose replacing softmax with sparsemax (Martins & Astudillo, 2016) .
While sparsemax has been previously used in NLP for attention mechanisms over words, it has never been applied to computer vision to attend over image regions.
With sparsemax, the attention weights obtained are sparse, leading to the selection (non-zero attention) of only a few relevant features.
Second, to further encourage the weights of related adjacent spatial locations to be the same (e.g., parts of an object), we introduce a new attention mechanism: Total-Variation Sparse Attention (which we dub TVMAX), inspired by prior work in structured sparsity (Tibshirani et al., 2005; Bach et al., 2012) .
With TVMAX, sparsity is allied to the ability of selecting compact regions.
Figure 1: Example of captions generated using softmax (top), sparsemax (middle) and TVMAX attention (bottom). Shading denotes the attention weight, with white for zero attention. The darker the green is, the higher the attention weight is. The full sequences are presented in Appendix C.
According to our human evaluation experiments, this leads to better interpretability, since the model's behaviour is better understood by looking at the selected image regions when a particular word is generated.
It also leads to a better selection of the relevant features, and consequently to the improvement of the generated captions.
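For reference, a small NumPy sketch of the sparsemax transformation discussed above is given below, following the simplex projection of Martins & Astudillo (2016); it is illustrative rather than the implementation used here, and TVMAX additionally applies a total-variation proximal step before this projection.

import numpy as np

def sparsemax(z):
    # Project a score vector onto the probability simplex, yielding sparse attention weights.
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]
    k = np.arange(1, z.size + 1)
    cumsum = np.cumsum(z_sorted)
    support = 1 + k * z_sorted > cumsum        # coordinates kept in the support
    k_z = k[support][-1]
    tau = (cumsum[support][-1] - 1) / k_z      # threshold
    return np.maximum(z - tau, 0.0)

print(sparsemax([0.5, 2.1, 1.9, -0.3]))  # -> [0.0, 0.6, 0.4, 0.0]: mass only on the two largest scores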
This paper introduces three main contributions:
• We propose a novel visual attention mechanism using sparse attention, based on sparsemax (Martins & Astudillo, 2016) , that improves the quality of the generated captions and increases interpretability.
• We introduce a new attention mechanism, TVMAX, that encourages sparse attention over contiguous 2D regions, giving the model the capability of selecting compact objects.
We show that TVmax can be evaluated by composing a proximal operator with a sparsemax projection, and we provide a closed-form expression for its Jacobian.
This leads to an efficient implementation of its forward and backward pass.
• We perform an empirical and qualitative comparison of the various attention mechanisms considered.
We also carry out a human evaluation experiment, taking into account the generated captions as well as the perceived relevance of the selected regions.
We propose using sparse and structured visual attention, in order to improve the process of selecting the features relevant to the caption generation.
For that, we used sparsemax and introduced TVMAX.
Results on the image captioning task show that the attention mechanism is able to select better features when using sparsemax or TVMAX.
Furthermore, in the human assessment and attention analysis we see that the improved selection of the relevant features as well as the ability to group spatial features lead to the generation of better captions, while improving the model's interpretability.
In future work, TVMAX attention can be applied to other multimodal problems such as visual question answering.
It can also be applied to other tasks for which we have prior knowledge of the data's structure, for instance graphs or trees.
Summing up Eq. 17 over all j ∈ G, we observe that for any k ∈ G, the term λ t_jk appears twice with opposite signs.
Thus,
Dividing by |G| gives exactly Eq. 8.
This reasoning applies to any group G i . | We propose a new sparse and structured attention mechanism, TVmax, which promotes sparsity and encourages the weight of related adjacent locations to be the same. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:822 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Although there are more than 65,000 languages in the world, the pronunciations of many phonemes sound similar across the languages.
When people learn a foreign language, their pronunciation often reflects their native language's characteristics.
That motivates us to investigate how a speech synthesis network learns pronunciation when a multilingual dataset is given.
In this study, we train the speech synthesis network bilingually in English and Korean, and analyze how the network learns the relations of phoneme pronunciation between the languages.
Our experimental result shows that the learned phoneme embedding vectors are located closer if their pronunciations are similar across the languages.
Based on the result, we also show that it is possible to train networks that synthesize English speaker's Korean speech and vice versa.
In another experiment, we train the network with a limited amount of English data and a large Korean dataset, and analyze the amount of data required to train a resource-poor language with the help of resource-rich languages. | Learned phoneme embeddings of multilingual neural speech synthesis network could represent relations of phoneme pronunciation between the languages. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:823 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Generative Adversarial Networks (GANs) have recently achieved impressive results for many real-world applications, and many GAN variants have emerged with improvements in sample quality and training stability.
However, visualization and understanding of GANs is largely missing.
How does a GAN represent our visual world internally?
What causes the artifacts in GAN results?
How do architectural choices affect GAN learning?
Answering such questions could enable us to develop new insights and better models.
In this work, we present an analytic framework to visualize and understand GANs at the unit-, object-, and scene-level.
We first identify a group of interpretable units that are closely related to object concepts with a segmentation-based network dissection method.
Then, we quantify the causal effect of interpretable units by measuring the ability of interventions to control objects in the output.
Finally, we examine the contextual relationship between these units and their surrounding by inserting the discovered object concepts into new images.
We show several practical applications enabled by our framework, from comparing internal representations across different layers, models, and datasets, to improving GANs by locating and removing artifact-causing units, to interactively manipulating objects in the scene.
We provide open source interpretation tools to help peer researchers and practitioners better understand their GAN models.
Generative Adversarial Networks (GANs) BID11 have been able to produce photorealistic images, often indistinguishable from real images.
This remarkable ability has powered many real-world applications ranging from visual recognition BID35 , to image manipulation , to video prediction .
Since their invention in 2014, many GAN variants have been proposed BID29 BID41 , often producing more realistic and diverse samples with better training stability.Despite this tremendous success, many questions remain to be answered.
For example, to produce a church image (Figure 1a) , what knowledge does a GAN need to learn?
Alternatively, when a GAN sometimes produces terribly unrealistic images (Figure 1f) , what causes the mistakes?
Why does one GAN variant work better than another?
What fundamental differences are encoded in their weights? In
this work, we study the internal representations of GANs. To
a human observer, a well-trained GAN appears to have learned facts about the objects in the image: for example, a door can appear on a building but not on a tree. We
wish to understand how a GAN represents such structure. Do
the objects emerge as pure pixel patterns without any explicit representation of objects such as doors and trees, or does the GAN contain internal variables that correspond to the objects that humans perceive? If
the GAN does contain variables for doors and trees, do those variables cause the generation of those objects, or do they merely correlate? How
are relationships between objects represented?
Figure 1: Overview: (a) Realistic outdoor church images generated by Progressive GANs BID18. (b) Given a pre-trained GAN model, we identify a set of interpretable units whose featuremap is correlated to an object class across different images. For example, one unit in layer4 localizes tree regions with diverse visual appearance. (c) We force the activation of the units to be zero and quantify the average causal effect of the ablation. Here we successfully remove trees from church images. (d) We activate tree causal units in other locations. These same units synthesize new trees, visually compatible with their surrounding context. In addition, our method can diagnose and improve GANs by identifying artifact-causing units (e). We can remove the artifacts that appear (f) and significantly improve the results by ablating the "artifact" units (g). Please see our demo video.
We present
a general method for visualizing and understanding GANs at different levels of abstraction, from each neuron, to each object, to the contextual relationship between different objects. We first identify a group of interpretable
units that are related to object concepts ( Figure 1b ). These units' featuremaps closely match the
semantic segmentation of a particular object class (e.g., trees). Second, we directly intervene within the network
to identify sets of units that cause a type of objects to disappear (Figure 1c) or appear ( Figure 1d ). We quantify the causal effect of these units using
a standard causality metric. Finally, we examine the contextual relationship between
these causal object units and the background. We study where we can insert object concepts in new images
and how this intervention interacts with other objects in the image (Figure 1d ). To our knowledge, our work provides the first systematic analysis
for understanding the internal representations of GANs. Finally, we show several practical applications enabled by this analytic framework, from comparing internal representations across different layers, GAN variants and datasets; to debugging and improving GANs by locating and ablating "artifact" units (Figure 1e); to understanding contextual relationships between objects in scenes; to manipulating images with interactive object-level control.
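To illustrate the kind of causal intervention described above, here is a schematic PyTorch-style sketch of ablating (or inserting) a chosen set of units at an intermediate layer of a generator; g_front, g_back, the layer split, and the unit indices are hypothetical stand-ins for the dissected model rather than the actual code.

import torch

def intervene_on_units(g_front, g_back, z, unit_ids, value=0.0):
    # g_front: maps latent z -> intermediate featuremap of shape (batch, channels, h, w)
    # g_back:  maps the featuremap -> output image
    # unit_ids: indices of the channels ("units") to intervene on
    with torch.no_grad():
        feat = g_front(z)
        feat[:, unit_ids, :, :] = value   # 0.0 for ablation, a large constant for insertion
        return g_back(feat)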
By carefully examining representation units, we have found that many parts of GAN representations can be interpreted, not only as signals that correlate with object concepts but as variables that have a causal effect on the synthesis of objects in the output.
These interpretable effects can be used to compare, debug, modify, and reason about a GAN model.
Our method can potentially be applied to other generative models such as VAEs BID20 and RealNVP BID7.
We have focused on the generator rather than the discriminator (as did BID29) because the generator must represent all the information necessary to approximate the target distribution, while the discriminator only learns to capture the difference between real and fake images.
Figure 10: Comparing the effect of ablating 20 window-causal units in GANs trained on five scene categories (conference room, church, living room, kitchen, bedroom). In each case, the 20 ablated units are specific to the class and the generator and independent of the image. In some scenes, windows are reduced in size or number rather than eliminated, or replaced by visually similar objects such as paintings.
Figure 11: Inserting door units by setting 20 causal units to a fixed high value at one pixel in the representation. Whether the door units can cause the generation of doors is dependent on the local context: we highlight every location that is responsive to insertions of door units on top of the original image, including two separate locations in (b) (we intervene at left). The same units are inserted in every case, but the door that appears has a size, alignment, and color appropriate to the location. Emphasizing a door that is already present results in a larger door (d). The chart summarizes the causal effect of inserting door units at one pixel with different contexts.
Alternatively, we can train an encoder to invert the generator BID8. However, this incurs
additional complexity and errors. Many GANs also do not
have an encoder. Our method is not designed to compare the quality of GANs to one another, and it is not intended as a replacement for well-studied GAN metrics such as FID, which estimate realism by measuring the distance between the generated distribution of images and the true distribution (BID2 surveys these methods). Instead, our goal has
been to identify the interpretable structure and provide a window into the internal mechanisms of a GAN. Prior visualization methods BID40 BID1 BID17 have brought new insights into CNN and RNN research. Motivated by that, in
this work we have taken a small step towards understanding the internal representations of a GAN, and we have uncovered many questions that we cannot yet answer with the current method. For example: why can
a door not be inserted in the sky? How does the GAN suppress
the signal in the later layers? Further work will be needed
to understand the relationships between layers of a GAN. Nevertheless, we hope that
our work can help researchers and practitioners better analyze and develop their own GANs. In Section 4.2, we have improved
GANs by manually identifying and ablating artifact-causing units. Now we describe an automatic procedure
to identify artifact units using unit-specific FID scores. To compute the FID score BID13 for a unit u, we generate 200,000 images and select the 10,000 images that maximize the activation of unit u, and this subset of 10,000 images is compared to the true distribution (50,000 real images) using FID. Although every such unit-maximizing subset
of images represents a skewed distribution, we find that the per-unit FID scores fall in a wide range, with most units scoring well in FID while a few units stand out with bad FID scores: many of them were also manually flagged by humans, as they tend to activate on images with clear visible artifacts. FIG1 shows the performance of FID scores as
a predictor of manually flagged artifact units. The per-unit FID scores can achieve 50% precision
and 50% recall. That is, of the 20 worst-FID units, 10 are also among
the 20 units manually judged to have the most noticeable artifacts. Furthermore, repairing the model by ablating the highest-FID
units works: qualitative results are shown in FIG8 and quantitative results are shown in TAB4.
(a) unit118 in layer4
Figure 14: Two examples of generator units that our dissection method labels differently from humans. Both units are taken from layer4 of a Progressive GAN living room model. In (a), humans label the unit as 'sofa' based on viewing the top-20 activating images, and our method labels it as 'ceiling'. In this case, our method counts many ceiling activations in a sample of 1000 images beyond the top 20. In (b), the dissection method has no confident label prediction even though the unit consistently triggers on white letterbox shapes at the top and bottom of the image. The segmentation model we use has no label for such abstract shapes. | GAN representations are examined in detail, and sets of representation units are found that control the generation of semantic concepts in the output. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:824 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Historically, the pursuit of efficient inference has been one of the driving forces behind the research into new deep learning architectures and building blocks.
Some of the recent examples include: the squeeze-and-excitation module of (Hu et al.,2018), depthwise separable convolutions in Xception (Chollet, 2017), and the inverted bottleneck in MobileNet v2 (Sandler et al., 2018).
Notably, in all of these cases, the resulting building blocks enabled not only higher efficiency, but also higher accuracy, and found wide adoption in the field.
In this work, we further expand the arsenal of efficient building blocks for neural network architectures; but instead of combining standard primitives (such as convolution), we advocate for the replacement of these dense primitives with their sparse counterparts.
While the idea of using sparsity to decrease the parameter count is not new (Mozer & Smolensky, 1989), the conventional wisdom is that this reduction in theoretical FLOPs does not translate into real-world efficiency gains.
We aim to correct this misconception by introducing a family of efficient sparse kernels for several hardware platforms, which we plan to open-source for the benefit of the community.
Equipped with our efficient implementation of sparse primitives, we show that sparse versions of MobileNet v1 and MobileNet v2 architectures substantially outperform strong dense baselines on the efficiency-accuracy curve.
On Snapdragon 835 our sparse networks outperform their dense equivalents by 1.1−2.2x – equivalent to approximately one entire generation of improvement.
We hope that our findings will facilitate wider adoption of sparsity as a tool for creating efficient and accurate deep learning architectures.
Convolutional neural networks (CNNs) have proven to be excellent at solving a diverse range of tasks (Bhandare et al., 2016) .
Standard network architectures are used in classification, segmentation, object detection and generation tasks (Pan et al., 2019; Long et al., 2015; Zhao et al., 2019) .
Given their wide utility, there has been significant effort to design efficient architectures that are capable of being run on mobile and other low power devices while still achieving high classification accuracy on benchmarks such as ImageNet (Russakovsky et al., 2015) .
For example, MobileNets (Howard et al., 2017; Sandler et al., 2018) employ the depthwise separable convolutions introduced in (Sifre & Mallat, 2014) to significantly reduce resource requirements over previous architectures.
Inference time and computational complexity in these architectures are dominated by the 1×1 convolutions, which directly map to matrix-matrix multiplications.
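As a concrete picture of why a 1×1 convolution reduces to a (sparse) matrix multiplication, the following small SciPy sketch multiplies a pruned weight matrix against the flattened activation map; the channel counts, spatial size and the roughly 90% sparsity level are illustrative assumptions.

import numpy as np
from scipy import sparse

c_in, c_out, h, w = 64, 128, 28, 28
activations = np.random.rand(c_in, h * w).astype(np.float32)   # featuremap flattened to (C_in, H*W)

dense_w = np.random.rand(c_out, c_in).astype(np.float32)
dense_w[np.random.rand(c_out, c_in) < 0.9] = 0.0               # prune ~90% of the 1x1 kernel weights
sparse_w = sparse.csr_matrix(dense_w)                          # store the pruned kernel in CSR format

out = sparse_w @ activations                                   # SpMM producing the (C_out, H*W) output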
Weight sparsity is generally known (Cheng et al., 2017) to lead to theoretically smaller and more computationally efficient (in terms of the number of floating-point operations) models, but it is often disregarded as a practical means of accelerating models because of the misconception that sparse operations cannot be fast enough to achieve actual speedups during inference.
In this work we introduce fast kernels for Sparse Matrix-Dense Matrix Multiplication (SpMM) specifically targeted at the accceleration of sparse neural networks.
The main distinction of our SpMM kernel from prior art (Nagasaka et al., 2018) is that we focus on a different point in the design space.
While prior work focused on extremely sparse problems (typically >99%, found in scientific and graph problems), we target the sparsity range of 70-95%, more common when inducing weight sparsity in neural networks.
As a result our kernels outperform both the Intel MKL (Intel, 2009 ) and the TACO compiler (Kjolstad et al., 2017) .
Using these kernels, we demonstrate the effectiveness of weight sparsity across three generations of MobileNet (Howard et al., 2017; Sandler et al., 2018; Tan et al., 2018; Tan & Le, 2019) architectures.
Sparsity leads to an approximately one generation improvement in each architecture, with a sparse EfficientNet significantly more efficient than all previous models.
These models represent a new generation of efficient CNNs, which reduces inference times by 1.1 − 2.2×, parameter counts by over 2× and number of floating-point operations (FLOPs) by up to 3× relative to the previous generations.
We demonstrate that for a constant computational budget, sparse convolutional networks are more accurate than dense ones; this corroborates the findings of Kalchbrenner et al. (2018) , which demonstrated that for a set number of floating-point operations, sparse RNNs are more accurate than dense RNNs.
We enable the use of weight sparsity to accelerate state-of-the-art convolutional networks by providing fast SpMM kernels along with all necessary supporting kernels for ARM processors.
On Snapdragon 835 the sparse networks we present in this paper outperform their dense equivalents by 1.1 − 2.2× -equivalent to approximately one entire generation of improvement.
By overturning the misconception that "sparsity is slow", we hope to open new avenues of research that would previously not be considered. | Sparse MobileNets are faster than Dense ones with the appropriate kernels. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:825 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
In this work, we address the semi-supervised classification of graph data, where the categories of those unlabeled nodes are inferred from labeled nodes as well as graph structures.
Recent works often solve this problem with the advanced graph convolution in a conventional supervised manner, but the performance could be heavily affected when labeled data is scarce.
Here we propose a Graph Inference Learning (GIL) framework to boost the performance of node classification by learning the inference of node labels on graph topology.
To bridge the connection between two nodes, we formally define a structure relation by encapsulating node attributes, between-node paths and local topological structures together, which allows inference to be conveniently deduced from one node to another.
For learning the inference process, we further introduce meta-optimization on structure relations from training nodes to validation nodes, such that the learnt graph inference capability can be better self-adapted to test nodes.
Comprehensive evaluations on four benchmark datasets (including Cora, Citeseer, Pubmed and NELL) demonstrate the superiority of our GIL when compared with other state-of-the-art methods in the semi-supervised node classification task.
Graph, which comprises a set of vertices/nodes together with connected edges, is a formal structural representation of non-regular data.
Due to the strong representation ability, it accommodates many potential applications, e.g., social network (Orsini et al., 2017) , world wide data (Page et al., 1999) , knowledge graph (Xu et al., 2017) , and protein-interaction network (Borgwardt et al., 2007) .
Among these, semi-supervised node classification on graphs is one of the most interesting also popular topics.
Given a graph in which some nodes are labeled, the aim of semi-supervised classification is to infer the categories of those remaining unlabeled nodes by using various priors of the graph.
While there have been numerous previous works (Brandes et al., 2008; Zhou et al., 2004; Zhu et al., 2003; Yang et al., 2016; Zhao et al., 2019) devoted to semi-supervised node classification based on explicit graph Laplacian regularizations, it is hard to efficiently boost the performance of label prediction due to the strict assumption that connected nodes are likely to share the same label information.
With the progress of deep learning on grid-shaped images/videos (He et al., 2016) , a number of graph convolutional neural network (CNN) based methods, including spectral (Kipf & Welling, 2017) and spatial methods (Niepert et al., 2016; Pan et al., 2018; Yu et al., 2018) , have been proposed to learn local convolution filters on graphs in order to extract more discriminative node representations.
Although graph CNN based methods have achieved considerable capabilities of graph embedding by optimizing filters, they are limited to a conventional semi-supervised framework and lack an efficient inference mechanism on graphs.
Especially in the case of few-shot learning, where a small number of training nodes are labeled, this kind of method would drastically compromise the performance.
Figure 1: The illustration of our proposed GIL framework. (b) The process of graph inference learning. We extract the local representation from the local subgraph (the circle with dashed line); the red wave line denotes the node reachability from v_i to v_j. For the problem of graph node labeling, the category information of these unlabeled nodes depends on the similarity computation between a query node (e.g., v_j) and these labeled reference nodes (e.g., v_i). We consider the similarity from three points: node attributes, the consistency of local topological structures (i.e., the circle with dashed line), and the between-node path reachability (i.e., the red wave line from v_i to v_j). Specifically, the local structures as well as node attributes are encoded as high-level features with graph convolution, while the between-node path reachability is abstracted as reachable probabilities of random walks. To better make the inference generalize to test nodes, we introduce a meta-learning strategy to optimize the structure relations learning from training nodes to validation nodes.
For example, the Pubmed graph dataset (Sen et al., 2008) consists of 19,717 nodes and 44,338 edges, but only 0.3% of the nodes are labeled for the semi-supervised node classification task.
These aforementioned works usually boil down to a general classification task, where the model is learnt on a training set and selected by checking a validation set.
However, they do not put great efforts on how to learn to infer from one node to another node on a topological graph, especially in the few-shot regime.
In this paper, we propose a graph inference learning (GIL) framework to teach the model itself to adaptively infer from reference labeled nodes to those query unlabeled nodes, and finally boost the performance of semi-supervised node classification in the case of a few number of labeled samples.
Given an input graph, GIL attempts to infer the unlabeled nodes from those observed nodes by building between-node relations.
The between-node relations are structured as the integration of node attributes, connection paths, and graph topological structures.
It means that the similarity between two nodes is decided from three aspects: the consistency of node attributes, the consistency of local topological structures, and the between-node path reachability, as shown in Fig. 1 .
The local structures anchored around each node as well as the attributes of nodes therein are jointly encoded with graph convolution (Defferrard et al., 2016) for the sake of high-level feature extraction.
For the between-node path reachability, we adopt the random walk algorithm to characterize the reachability from a labeled reference node vi to an unlabeled query node vj in a given graph.
Based on the computed node representations and between-node reachability, the structure relations can be obtained by computing similarity scores/relationships from reference nodes to unlabeled nodes in a graph.
Inspired by the recent meta-learning strategy (Finn et al., 2017), we learn to infer the structure relations from a training set to a validation set, which benefits the generalization capability of the learned model.
In other words, our proposed GIL attempts to learn transferable knowledge underlying the structure relations from training samples to validation samples, such that the learned structure relations can better self-adapt to the new testing stage.
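The quantities above can be pinned down with a small sketch (illustrative only: the function names and the simple cosine-times-reachability score are our assumptions, not the exact GIL implementation, in which these scores are produced by learned modules):

```python
import numpy as np

def random_walk_reachability(adj, steps=4):
    """Reachable probabilities of random walks of up to `steps` steps.

    adj: (N, N) adjacency matrix. Returns P_acc[i, j], the averaged probability
    that a walk started at node i is at node j after 1..steps steps."""
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    P = adj / deg                          # row-stochastic transition matrix
    P_k, P_acc = np.eye(adj.shape[0]), np.zeros_like(P)
    for _ in range(steps):
        P_k = P_k @ P                      # k-step transition probabilities
        P_acc += P_k
    return P_acc / steps

def structure_relation(h, reach, i, j):
    """Toy similarity from reference node i to query node j: feature similarity
    of graph-convolved embeddings h, weighted by random-walk reachability."""
    cos = h[i] @ h[j] / (np.linalg.norm(h[i]) * np.linalg.norm(h[j]) + 1e-8)
    return cos * reach[i, j]
```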
We summarize the main contributions of this work as follows:
• We propose a novel graph inference learning framework by building structure relations to infer unknown node labels from those labeled nodes in an end-to-end way.
The structure relations are well defined by jointly considering node attributes, between-node paths, and graph topological structures.
• To make the inference model generalize better to test nodes, we introduce a meta-learning procedure to optimize structure relations, which, to the best of our knowledge, is the first such attempt for graph node classification.
• Comprehensive evaluations on three citation network datasets (Cora, Citeseer, and Pubmed) and one knowledge graph dataset (NELL) demonstrate the superiority of our proposed GIL compared with other state-of-the-art methods on the semi-supervised classification task.
In this work, we tackled the semi-supervised node classification task with a graph inference learning method, which better predicts the categories of unlabeled nodes in an end-to-end framework.
We can build a structure relation for obtaining the connection between any two graph nodes, where node attributes, between-node paths, and graph structure information can be encapsulated together.
To better capture transferable knowledge, our method further learns to transfer the mined knowledge from the training samples to the validation set, which ultimately boosts the prediction accuracy on the unlabeled nodes in the testing set.
The extensive experimental results demonstrate the effectiveness of our proposed GIL for solving the semi-supervised learning problem, even in the few-shot paradigm.
In the future, we would extend the graph inference method to handle more graph-related tasks, such as graph generation and social network analysis. | We propose a novel graph inference learning framework by building structure relations to infer unknown node labels from those labeled nodes in an end-to-end way. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:826 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
This paper proposes a method for efficient training of a Q-function for continuous-state Markov Decision Processes (MDPs), such that the traces of the resulting policies satisfy a Linear Temporal Logic (LTL) property.
LTL, a modal logic, can express a wide range of time-dependent logical properties, including safety and liveness.
We convert the LTL property into a limit-deterministic Büchi automaton, with which a synchronized product MDP is constructed.
The control policy is then synthesised by a reinforcement learning algorithm assuming that no prior knowledge is available from the MDP.
The proposed method is evaluated in a numerical study to test the quality of the generated control policy and is compared against conventional methods for policy synthesis such as MDP abstraction (Voronoi quantizer) and approximate dynamic programming (fitted value iteration).
Markov Decision Processes (MDPs) are extensively used as a family of stochastic processes in automatic control, computer science, economics, etc. to model sequential decision-making problems.
Reinforcement Learning (RL) is a machine learning algorithm that is widely used to train an agent to interact with an MDP when the stochastic behaviour of the MDP is initially unknown.
However, conventional RL is mostly focused on problems in which MDP states and actions are finite.
Nonetheless, many interesting real-world tasks require actions to be taken in response to high-dimensional or real-valued sensory inputs BID5 .
For example, consider the problem of drone control in which the drone state is represented as its Euclidean position (x, y, z) ∈ R^3. Apart
from state space discretisation and then running vanilla RL on the abstracted MDP, an alternative solution is to use an approximation function which is achieved via regression over the set of samples. At a
given state, this function is able to estimate the value of the expected reward. Therefore
, in continuous-state RL, this approximation replaces conventional RL state-action-reward look-up table which is used in finite-state MDPs. A number
of methods are available to approximate the expected reward, e.g. CMACs BID34 , kernel-based modelling BID22 , tree-based regression BID7 , basis functions BID3 , etc. Among these
methods, neural networks offer great promise in reward modelling due to their ability to approximate any non-linear function BID13 . There exist
numerous successful applications of neural networks in RL for infinite or large-state space MDPs, e.g. Deep Q-networks BID19 , TD-Gammon BID36 , Asynchronous Deep RL BID20 , Neural Fitted Q-iteration BID26 , CACLA BID39 .In this paper
, we propose to employ feedforward networks (multi-layer perceptrons) to synthesise a control policy for infinite-state MDPs such that the generated traces satisfy a Linear Temporal Logic (LTL) property. LTL allows to
specify complex mission tasks in a rich time-dependent formal language. By employing
LTL we are able to express complex high-level control objectives that are hard to express and achieve for other methods from vanilla RL BID35 BID31 to more recent developments such as Policy Sketching BID1 . Examples include
liveness and cyclic properties, where the agent is required to make progress while concurrently executing components, to take turns in critical sections or to execute a sequence of tasks periodically. The purpose of this
work is to show that the proposed architecture performs efficiently and is compatible with the RL algorithms at the core of recent developments in the community. Unfortunately, in the domain of continuous-state MDPs, to the best of our knowledge, no research has been done to enable RL to generate policies according to full LTL properties. On the other hand,
the problem of control synthesis in finite-state MDPs for temporal logic has been considered in a number of works. In BID41 , the property
of interest is an LTL property, which is converted to a Deterministic Rabin Automaton (DRA). A modified Dynamic Programming
(DP) algorithm is then proposed to maximise the worst-case probability of satisfying the specification over all transition probabilities. Notice that in this work the MDP
must be known a priori. BID8 and BID2 assume that the given
MDP has unknown transition probabilities and build a Probably Approximately Correct MDP (PAC MDP), which is producted with the logical property after conversion to DRA. The goal is to calculate the value
function for each state such that the value is within an error bound of the actual state value where the value is the probability of satisfying the given LTL property. The PAC MDP is generated via an RL-like
algorithm and standard value iteration is applied to calculate the values of states.Moving away from full LTL logic, scLTL is proposed for mission specification, with which a linear programming solver is used to find optimal policies. The concept of shielding is employed in
BID0 to synthesise a reactive system that ensures that the agent stays safe during and after learning. However, unlike our focus on full LTL expressivity
, BID0 adopted the safety fragment of LTL as the specification language. This approach is closely related to teacher-guided
RL BID37 , since a shield can be considered as a teacher, which provides safe actions only if absolutely necessary. The generated policy always needs the shield to be
online, as the shield maps every unsafe action to a safe action. Almost all other approaches in safe RL either rely
on ergodicity of the underlying MDP, e.g. (Moldovan & Abbeel, 2012), which guarantees that any state is reachable from any other state, or they rely on initial or partial knowledge about the MDP, e.g. BID32 and BID17 .
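The overall recipe (label the MDP state, run the Büchi automaton alongside it, reward acceptance, and learn Q-values over the product) can be sketched on a toy problem. The 1-D dynamics, labeling function, two-state automaton for "eventually reach the goal", and accepting-state reward below are our illustrative assumptions, not the exact LCNFQ construction, and the tabular Q would be replaced by neural approximators (one per automaton mode) in the continuous setting:

```python
import random

ACCEPTING = {1}

def automaton_delta(q, label):          # q=0: goal not seen yet, q=1: accepting sink
    return 1 if (q == 1 or label == "goal") else 0

def label_of(x):                        # atomic proposition over the continuous state
    return "goal" if x >= 1.0 else "none"

def env_step(x, a):                     # noisy 1-D continuous dynamics
    return x + 0.1 * a + random.gauss(0.0, 0.02)

Q = {}                                  # keyed by (discretized state, automaton state, action)

def q_learning_episode(alpha=0.1, gamma=0.99, horizon=50):
    x, q = 0.0, 0
    for _ in range(horizon):
        a = random.choice([-1, 1])      # exploration policy (epsilon-greedy omitted)
        x2 = env_step(x, a)
        q2 = automaton_delta(q, label_of(x2))
        r = 1.0 if q2 in ACCEPTING else 0.0
        best = max(Q.get((round(x2, 1), q2, b), 0.0) for b in (-1, 1))
        key = (round(x, 1), q, a)
        Q[key] = Q.get(key, 0.0) + alpha * (r + gamma * best - Q.get(key, 0.0))
        x, q = x2, q2
```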
This paper proposes LCNFQ, a method to train a Q-function in a continuous-state MDP such that the resulting traces satisfy a logical property.
The proposed algorithm uses hybrid modes to automatically switch between neural nets when necessary.
LCNFQ is successfully tested in a numerical example to verify its performance.
i.e., s_{i+1} belongs to the smallest Borel set B such that P(B | s_i, a_i) = 1 (or, in a discrete MDP, P(s_{i+1} | s_i, a_i) > 0).
We might also denote ρ as s_0 .. to emphasize that ρ starts from s_0.
Definition A.2 (Stationary Policy) A stationary (randomized) policy Pol : S × A → [0, 1] is a mapping from each state s ∈ S and action a ∈ A to the probability of taking action a in state s. A deterministic policy is a degenerate case of a randomized policy which outputs a single action at a given state, that is, ∀s ∈ S, ∃a ∈ A, Pol(s, a) = 1. In an MDP M, we define a function R : S × A → R_0^+ that denotes the immediate scalar bounded reward received by the agent from the environment after performing action a ∈ A in state s ∈ S.
Definition A.3 (Expected (Infinite-Horizon) Discounted Reward) For a policy Pol on an MDP M, the expected discounted reward is defined as BID35: E^Pol[ Σ_{n=0}^∞ γ^n R(s_n, a_n) ], where E^Pol[·] denotes the expected value given that the agent follows policy Pol, γ ∈ [0, 1) is a discount factor, and s_0, ..., s_n is the sequence of states generated by policy Pol up to time step n.
Definition A.4 (Optimal Policy) The optimal policy Pol* is defined as: Pol* = argmax_{Pol ∈ D} E^Pol[ Σ_{n=0}^∞ γ^n R(s_n, a_n) ], where D is the set of all stationary deterministic policies over the state space S. | As safety is becoming a critical notion in machine learning we believe that this work can act as a foundation for a number of research directions such as safety-aware learning algorithms. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:827 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We introduce two approaches for conducting efficient Bayesian inference in stochastic simulators containing nested stochastic sub-procedures, i.e., internal procedures for which the density cannot be calculated directly such as rejection sampling loops.
The resulting class of simulators are used extensively throughout the sciences and can be interpreted as probabilistic generative models.
However, drawing inferences from them poses a substantial challenge due to the inability to evaluate even their unnormalised density, preventing the use of many standard inference procedures like Markov Chain Monte Carlo (MCMC).
To address this, we introduce inference algorithms based on a two-step approach that first approximates the conditional densities of the individual sub-procedures, before using these approximations to run MCMC methods on the full program.
Because the sub-procedures can be dealt with separately and are lower-dimensional than that of the overall problem, this two-step process allows them to be isolated and thus be tractably dealt with, without placing restrictions on the overall dimensionality of the problem.
We demonstrate the utility of our approach on a simple, artificially constructed simulator.
Stochastic simulators are used in a myriad of scientific and industrial settings, such as epidemiology (Patlolla et al., 2004) , physics (Heermann, 1990) , engineering (Hangos and Cameron, 2001 ) and climate modelling (Held, 2005) .
They can be complex and high-dimensional, often incorporating domain-specific expertise accumulated over many years of research and development.
As shown by the probabilistic programming (Gordon et al., 2014; van de Meent et al., 2018) and approximate Bayesian computation (ABC) (Csilléry et al., 2010; Marin et al., 2012) literatures, these simulators can be interpreted as probabilistic generative models, implicitly defining a probability distribution over their internal variables and outputs.
As such, they form valid targets for drawing Bayesian inferences.
In particular, by constraining selected internal variables or outputs to take on specific values, we implicitly define a conditional distribution, or posterior, over the remaining variables.
This effectively allows us, amongst other things, to run the simulator in "reverse", fixing the outputs to some observed values and figuring out what parameter values might have led to them.
For example, given a simulator for visual scenes, we can run inference on the simulator with an observed image to predict what objects are present in the scene (Kulkarni et al., 2015) .
Though recent advances in probabilistic programming systems (PPSs, Tran et al. (2017); Bingham et al. (2019); Casado et al. (2017)) have provided convenient mechanisms for encoding, reasoning about, and constructing inference algorithms for such simulators, performing the necessary inference is still often extremely challenging, particularly for complex or high-dimensional problems.
In this paper, we consider a scenario where this inference is particularly challenging to perform: when the simulator makes calls to nested stochastic sub-procedures (NSSPs).
These NSSPs can take several different forms, such as internal rejection sampling loops, separate inference procedures, external sub-simulators we have no control over, or even real-world experiments.
Their unifying common feature is that the density of their outputs cannot be evaluated up to an input-independent normalising constant in closed form.
This, in turn, means the normalised density of the overall simulator cannot be evaluated, preventing one from using most common inference methods, including almost all Markov chain Monte Carlo (MCMC) and variational methods.
Though some inference methods can still be applied in these scenarios, such as nested importance sampling (Rainforth, 2018) , these tend to scale very poorly in the dimensionality and often even have fundamentally slower convergence rates than standard Monte Carlo approaches (Rainforth et al., 2018) .
To address this issue, we introduce two new approaches for performing inference in such models.
Both are based around approximating the individual NSSPs.
The first approach directly approximates the conditional density of the NSSP outputs using an amortized inference artefact.
This then forms a surrogate density for the NSSP, which, once trained, is used to replace it.
While this first approach is generally applicable, our second approach focuses on the specific case where the unnormalized density of the NSSP can be evaluated in isolation (such as a nested probabilistic program or rejection sampling loop), but its normalizing constant depends on the NSSP inputs.
Here, we train a regressor to approximate the normalising constant of the NSSP as a function of its inputs.
Once learnt, this allows the NSSP to be collapsed into the outer program: the ratio of the known unnormalised density and the approximated normalizing constant can be directly used as a factor in the overall density.
Both approaches lead to an approximate version of the overall unnormalised density, which can then be used as a target for conventional inference methods like MCMC and variational inference.
Because these approximations can be calculated separately for each NSSP, this allows them to scale to higher dimensional overall simulators far more gracefully than existing approaches, opening the door to tractably running inference for more complex problems.
Furthermore, once trained, the approximations can be reused for different datasets and configurations of the outer simulator, thereby helping amortise the cost of running multiple different inferences for no extra cost.
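The second approach can be sketched in a few lines. The truncated-Gaussian NSSP, the Monte Carlo estimate of its normaliser, and the polynomial regressor below are illustrative stand-ins, not the paper's components:

```python
import numpy as np

def nssp_unnorm_logpdf(y, theta):
    # e.g. a Gaussian truncated to y > 0, whose normaliser depends on theta
    return -0.5 * (y - theta) ** 2 if y > 0 else -np.inf

def mc_log_normaliser(theta, n=10_000):
    y = np.random.randn(n) + theta            # proposal matched to the Gaussian
    return np.log(np.mean(y > 0))             # log Z(theta), up to a shared constant

# 1) Fit a cheap regressor for log Z(theta) over a grid of inputs.
thetas = np.linspace(-3, 3, 61)
logZ = np.array([mc_log_normaliser(t) for t in thetas])
coef = np.polyfit(thetas, logZ, deg=5)

def approx_log_normaliser(theta):
    return np.polyval(coef, theta)

# 2) Collapse the NSSP into the outer model's log-density as a ratio factor.
def surrogate_factor(y, theta):
    return nssp_unnorm_logpdf(y, theta) - approx_log_normaliser(theta)
```

Once such a surrogate factor is available, the outer program's unnormalised log-density can be evaluated pointwise and handed to a standard MCMC or variational routine.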
The approaches themselves are also amenable to automation, making them suitable candidates for PPS inference engines. | We introduce two approaches for efficient and scalable inference in stochastic simulators for which the density cannot be evaluated directly due to, for example, rejection sampling loops. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:828 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
While adversarial training can improve robust accuracy (against an adversary), it sometimes hurts standard accuracy (when there is no adversary).
Previous work has studied this tradeoff between standard and robust accuracy, but only in the setting where no predictor performs well on both objectives in the infinite data limit.
In this paper, we show that even when the optimal predictor with infinite data performs well on both objectives, a tradeoff can still manifest itself with finite data.
Furthermore, since our construction is based on a convex learning problem, we rule out optimization concerns, thus laying bare a fundamental tension between robustness and generalization.
Finally, we show that robust self-training mostly eliminates this tradeoff by leveraging unlabeled data.
Neural networks trained using standard training have very low accuracies on perturbed inputs commonly referred to as adversarial examples BID11 .
Even though adversarial training BID3 BID5 can be effective at improving the accuracy on such examples (robust accuracy), these modified training methods decrease accuracy on natural unperturbed inputs (standard accuracy) BID5 BID18 .
Table 1 shows the discrepancy between standard and adversarial training on CIFAR-10.
While adversarial training improves robust accuracy from 3.5% to 45.8%, standard accuracy drops from 95.2% to 87.3%. One
explanation for a tradeoff is that the standard and robust objectives are fundamentally in conflict. Along
these lines, Tsipras et al. BID13 and Zhang et al. BID18 construct learning problems where the perturbations can change the output of the Bayes estimator. Thus
no predictor can achieve both optimal standard accuracy and robust accuracy even in the infinite data limit. However
, we typically consider perturbations (such as imperceptible ℓ∞ perturbations) which do not change the output of the Bayes estimator, so that a predictor with both optimal standard and high robust accuracy exists. Another explanation could be that the hypothesis class is not rich enough to contain predictors that have optimal standard and high robust accuracy, even if they exist BID8 . However
, Table 1 shows that adversarial training achieves 100% standard and robust accuracy on the training set, suggesting that the hypothesis class is expressive enough in practice. Having ruled out a conflict in the objectives and expressivity issues, Table 1 suggests that the tradeoff stems from the worse generalization of adversarial training either due to (i) the
statistical properties of the robust objective or (ii) the
dynamics of optimizing the robust objective on neural networks. In an attempt to disentangle optimization and statistics, we ask: does the tradeoff indeed disappear if we rule out optimization issues? After all
, from a statistical perspective, the robust objective adds information (constraints on the outputs of perturbations) which should intuitively aid generalization, similar to Lasso regression which enforces sparsity BID12 .
Contributions. We answer the
above question negatively by constructing a learning problem with a convex loss where adversarial training hurts generalization even when the optimal predictor has both optimal standard and robust accuracy. Convexity rules
out optimization issues, revealing a fundamental statistical explanation for why adversarial training requires more samples to obtain high standard accuracy. Furthermore, we
show that we can eliminate the tradeoff in our constructed problem using the recently proposed robust self-training BID14 BID0 BID7 BID17 on additional unlabeled data. In an attempt to understand how predictive this example is of practice, we subsample CIFAR-10 and visualize trends in the performance of standard and adversarially trained models with varying training sample sizes. We observe that
the gap between the accuracies of standard and adversarial training decreases with larger sample size, mirroring the trends observed in our constructed problem. Recent results
from BID0 show that, similarly to our constructed setting, robust self-training also helps to mitigate the tradeoff in CIFAR-10.
Standard vs. robust generalization. Recent work BID10
BID15 BID4 BID6 has focused on the sample complexity of learning a predictor that has high robust accuracy (robust generalization), a different objective. In contrast, we study
the finite sample behavior of adversarially trained predictors on the standard learning objective (standard generalization), and show that adversarial training as a particular training procedure could require more samples to attain high standard accuracy.
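For a linear model, the ℓ∞-robust objective has a closed-form inner maximization, which makes the contrast between standard and adversarial training easy to state. The sketch below uses generic robust linear regression as an illustration; it is not the specific convex construction analyzed in the paper:

```python
import numpy as np

def standard_loss(w, X, y, eps=0.0):
    return np.mean((X @ w - y) ** 2)

def adversarial_loss(w, X, y, eps=0.1):
    # Worst-case l_inf perturbation of each input shifts a linear prediction
    # by at most eps * ||w||_1, so the inner max is available in closed form.
    resid = np.abs(X @ w - y) + eps * np.sum(np.abs(w))
    return np.mean(resid ** 2)

def train(loss, X, y, eps, lr=0.01, steps=2000):
    w, h = np.zeros(X.shape[1]), 1e-5
    for _ in range(steps):
        g = np.zeros_like(w)
        for j in range(len(w)):           # simple finite-difference gradient
            e = np.zeros_like(w); e[j] = h
            g[j] = (loss(w + e, X, y, eps) - loss(w - e, X, y, eps)) / (2 * h)
        w -= lr * g
    return w
```

Training with `standard_loss` and `adversarial_loss` on the same small sample and comparing test error mirrors the kind of comparison discussed above.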
In this work, we shed some light on the counter-intuitive phenomenon where enforcing invariance respected by the optimal function could actually degrade performance.
Being invariant could require complex predictors and consequently more samples to generalize well.
Our experiments support that the tradeoff between robustness and accuracy observed in practice is indeed due to insufficient samples and additional unlabeled data is sufficient to mitigate this tradeoff. | Even if there is no tradeoff in the infinite data limit, adversarial training can have worse standard accuracy even in a convex problem. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:829 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
When communicating, humans rely on internally-consistent language representations.
That is, as speakers, we expect listeners to behave the same way we do when we listen.
This work proposes several methods for encouraging such internal consistency in dialog agents in an emergent communication setting.
We consider two hypotheses about the effect of internal-consistency constraints:
1) that they improve agents’ ability to refer to unseen referents, and
2) that they improve agents’ ability to generalize across communicative roles (e.g. performing as a speaker de- spite only being trained as a listener).
While we do not find evidence in favor of the former, our results show significant support for the latter.
Emergent communication is the study of how linguistic protocols evolve when agents are tasked to cooperate.
For example, agents engaged in a simple object retrieval task learn to communicate with one another in order to get the items they want .
To date, work of this type has each agent assume a conversational role.
Thus, agents are often trained only to speak or only to listen, or similarly trained to speak using a vocabulary disjoint from the vocabulary they understand as listeners, e.g. speaking only to ask questions ("what color?") and listening only to comprehend the answer ("blue") (Das et al., 2017).
These assumptions are misaligned with how we think about human communication, and with the way we'd like computational models to work in practice.
As humans, not only can we easily shift between roles, we also know that there is inherent symmetry between these roles: we expect others to speak (or listen) similarly to the way we do, and we know that others expect the same of us.
We test if dialog agents that incorporate the symmetry between themselves and their communicative partners learn more generalizable representations than those which do not.
We introduce three modifications to the agents to encourage that they abide by the "golden rule": speak/listen as you would want to be spoken/listened to.
Specifically, these modifications include self-play training objectives, shared embedding spaces, and symmetric decoding and encoding mechanisms that share parameters.
We test two hypotheses about the effect of the proposed modifications on emergent communication:
1. Internal-consistency constraints improve agents' ability to generalize to unseen items-e.g. training on "red square" and "blue circle" and then testing on "blue square".
2. Internal-consistency constraints improve agents' ability to generalize across communicative roles-e.g. training on "blue" as a listener, and using "blue" as a speaker when testing.
We evaluate the effect of each of the proposed modifications with two reference game datasets and two model architectures, an RNN model used by and a Transformer model.
We find no evidence to support that internal-consistency improves generalization to unseen items (Hypothesis 1), but significant evidence that these proposed constraints enable models to generalize learned representations across communicative roles (Hypothesis 2), even in the case of where the agent receives no direct training in the target (test) role.
All of our code and data are available at bit.ly/internal-consistency-emergent-communication.
Notation.
The space of possible references is parameterized by the number of attributes n_f that describe each item (e.g. color) and the number of values n_v each attribute can take (e.g. {red, blue}).
Each item o is a bag-of-features vector o ∈ {0, 1}^N where N = n_f · n_v. Each index o_i is 1 if o expresses the i-th feature value. The speaker produces a message with symbols from a vocabulary V with length L. For comparison, we use the best-performing setting |V| = 100 and L = 10 from previous work. Symbols in V are represented as 1-hot vectors. In each round of the reference game, we construct ⟨C, r, r̂⟩ where C is the context (set of item column vectors stacked into a matrix), r is a vector representing the referent, and r̂ is the index of the referent in C. We uniformly sample k − 1 items as distractors to form C = {o_1, ..., o_{k−1}} ∪ {r}.
The distractors are sampled randomly each round (in every epoch).
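Under this notation, a single round of the reference game can be constructed as in the sketch below (the sampling scheme and function names are our illustrative assumptions):

```python
import numpy as np

def make_item(values, n_f, n_v):
    """values[f] in {0, ..., n_v - 1}; returns the one-hot bag-of-features vector."""
    o = np.zeros(n_f * n_v)
    for f, v in enumerate(values):
        o[f * n_v + v] = 1.0
    return o

def sample_round(n_f=3, n_v=4, k=5, rng=np.random.default_rng()):
    items = [make_item(rng.integers(n_v, size=n_f), n_f, n_v) for _ in range(k)]
    C = np.stack(items, axis=1)        # context: item column vectors
    r_idx = int(rng.integers(k))       # index of the referent within C
    return C, items[r_idx], r_idx
```

A speaker/listener pair would then be trained so that the listener recovers the referent's index from the context and the speaker's message.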
We propose three methods for encouraging dialog agents to follow "the golden rule": speak/listen to others as you would expect to be spoken/listened to.
In the emergent communication setting, we find that the internal-consistency constraints do not systematically improve models' generalization to novel items, but both the self-play objective and shared embeddings significantly improve performance when agents are tested on roles they were not directly trained for.
In fact, when trained in one role and tested on another, these internal-consistency constraints allow the agents to perform about as well as if they had been trained in the target role. | Internal-consistency constraints improve agents ability to develop emergent protocols that generalize across communicative roles. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:83 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Skip connections are increasingly utilized by deep neural networks to improve accuracy and cost-efficiency.
In particular, the recent DenseNet is efficient in computation and parameters, and achieves state-of-the-art predictions by directly connecting each feature layer to all previous ones.
However, DenseNet's extreme connectivity pattern may hinder its scalability to high depths, and in applications like fully convolutional networks, full DenseNet connections are prohibitively expensive.
This work first experimentally shows that one key advantage of skip connections is to have short distances among feature layers during backpropagation.
Specifically, using a fixed number of skip connections, the connection patterns with shorter backpropagation distance among layers have more accurate predictions.
Following this insight, we propose a connection template, Log-DenseNet, which, in comparison to DenseNet, only slightly increases the backpropagation distances among layers from 1 to ($1 + \log_2 L$), but uses only $L\log_2 L$ total connections instead of $O(L^2)$.
Hence, Log-DenseNets are easier to scale than DenseNets, and no longer require careful GPU memory management.
We demonstrate the effectiveness of our design principle by showing better performance than DenseNets on tabula rasa semantic segmentation, and competitive results on visual recognition.
Deep neural networks have been improving performance for many machine learning tasks, scaling from networks like AlexNet BID17 to increasingly more complex and expensive networks, like VGG BID30 , ResNet BID8 and Inception BID5 .
Continued hardware and software advances will enable us to build deeper neural networks, which have higher representation power than shallower ones.
However, the payoff from increasing the depth of the networks only holds in practice if the networks can be trained effectively.
It has been shown that naïvely scaling up the depth of networks actually decreases the performance BID8 , partially because of vanishing/exploding gradients in very deep networks.
Furthermore, in certain tasks such as semantic segmentation, it is common to take a pre-trained network and fine-tune, because training from scratch is difficult in terms of both computational cost and reaching good solutions.
Overcoming the vanishing gradient problem and being able to train from scratch are two active areas of research.Recent works attempt to overcome these training difficulties in deeper networks by introducing skip, or shortcut, connections BID25 BID7 BID31 BID8 BID19 so the gradient reaches earlier layers and compositions of features at varying depth can be combined for better performance.
In particular, DenseNet is the extreme example of this, concatenating all previous layers to form the input of each layer, i.e., connecting each layer to all previous ones.
However, this incurs an O(L^2) run-time complexity for a depth-L network, and may hinder the scaling of networks.
Specifically, in fully convolutional networks (FCNs), where the final feature maps have high resolution so that full DenseNet connections are prohibitively expensive, BID14 propose to cut most of the connections from the mid-depth.
To combat the scaling issue, prior work proposes to halve the total channel size a number of times.
Furthermore, prior work cuts 40% of the channels in DenseNets while maintaining the accuracy, suggesting that much of the O(L^2) computation is redundant.
Therefore, it is both necessary and natural to consider a more efficient design principle for placing shortcut connections in deep neural networks. In this work, we address the scaling issue of skip connections by answering the question: if we can only afford the computation of a limited number of skip connections and we believe the network needs to have at least a certain depth, where should the skip connections be placed?
We design experiments to show that with the same number of skip connections at each layer, the networks can have drastically different performance based on where the skip connections are.
In particular, we summarize this result as the following design principle, which we formalize in Sec. 3.2: given a fixed number of shortcut connections to each feature layer, we should choose these shortcut connections to minimize the distance among layers during backpropagation. Following this principle, we design a network template, Log-DenseNet.
In comparison to DenseNets at depth L, Log-DenseNets cost only O(L log L) run-time complexity instead of O(L^2).
Furthermore, Log-DenseNets only slightly increase the short distances among layers during backpropagation from 1 to 1 + log L. Hence, Log-DenseNets can scale to deeper and wider networks, even without custom GPU memory managements that DenseNets require.
In particular, we show that Log-DenseNets outperform DenseNets on tabula rasa semantic segmentation on CamVid BID2 , while using only half of the parameters, and similar computation.
Log-DenseNets also achieve comparable performance to DenseNet with the same computations on visual recognition data-sets, including ILSVRC2012 BID29 .
In short, our contributions are as follows:
• We experimentally support the design principle that with a fixed number of skip connections per layer, we should place them to minimize the distance among layers during backpropagation.
• The proposed Log-DenseNets achieve small ($1 + \log_2 L$) between-layer distances using few connections ($L\log_2 L$), and hence, are scalable for deep networks and applications like FCNs.
• The proposed network outperforms DenseNet on CamVid for tabula rasa semantic segmentation, and achieves comparable performance on ILSVRC2012 for recognition.
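The logarithmic connectivity pattern can be made concrete with a short sketch. The "connect layer i to layers i − 2^k" rule below is one plausible realization used for illustration; the exact placement rule of Log-DenseNet is specified in the paper:

```python
def log_dense_inputs(i):
    """Input indices for layer i under a logarithmic connection pattern
    (one plausible realization: connect to layers i - 2**k)."""
    inputs, k = [], 0
    while i - 2 ** k >= 0:
        inputs.append(i - 2 ** k)
        k += 1
    return inputs

# Each layer receives O(log L) inputs, so a depth-L network uses O(L log L)
# total connections instead of O(L^2):
total_connections = sum(len(log_dense_inputs(i)) for i in range(1, 65))
```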
We show that short backpropagation distances are important for networks that have shortcut connections: if each layer has a fixed number of shortcut inputs, they should be placed to minimize MBD.
Based on this principle, we design Log-DenseNet, which uses O(L log L) total shortcut connections on a depth-L network to achieve 1 + log L MBD.
We show that Log-DenseNets improve the performance and scalability of tabula rasa fully convolutional DenseNets on CamVid.
Log-DenseNets also achieve competitive results in visual recognition data-sets, offering a trade-off between accuracy and network depth.
Our work provides insights for future network designs, especially those that cannot afford full dense shortcut connections and need high depths, like FCNs.
smallest interval in the recursion tree such that i, j ∈ [s, t].
Then we can continue the path to x_j by following the recursion calls whose input segments include j until j is in a key location set.
The longest path is then the depth of the recursion tree plus one initial jump, i.e., 2 + log log L. Figure 5a shows the average number of input layers for each feature layer in LogLog-DenseNet.
Without augmentations, lglg_conn on average has 3 to 4 connections per layer.
With augmentations using Log-DenseNet, we desire each layer to have four inputs if possible.
On average, this increases the number of inputs by 1 to 1.5 for L ∈ (10, 2000). | We show shortcut connections should be placed in patterns that minimize between-layer distances during backpropagation, and design networks that achieve log L distances using L log(L) connections. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:830 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Building agents to interact with the web would allow for significant improvements in knowledge understanding and representation learning.
However, web navigation tasks are difficult for current deep reinforcement learning (RL) models due to the large discrete action space and the varying number of actions between the states.
In this work, we introduce DOM-Q-NET, a novel architecture for RL-based web navigation to address both of these problems.
It parametrizes Q functions with separate networks for different action categories: clicking a DOM element and typing a string input.
Our model utilizes a graph neural network to represent the tree-structured HTML of a standard web page.
We demonstrate the capabilities of our model on the MiniWoB environment where we can match or outperform existing work without the use of expert demonstrations.
Furthermore, we show 2x improvements in sample efficiency when training in the multi-task setting, allowing our model to transfer learned behaviours across tasks.
Over the past years, deep reinforcement learning (RL) has shown a huge success in solving tasks such as playing arcade games BID11 and manipulating robotic arms BID8 .
Recent advances in neural networks allow RL agents to learn control policies from raw pixels without feature engineering by human experts.
However, most of the deep RL methods focus on solving problems in either simulated physics environments where the inputs to the agents are joint angles and velocities, or simulated video games where the inputs are rendered graphics.
Agents trained in such simulated environments have little knowledge about the rich semantics of the world. The World Wide Web (WWW) is a rich repository of knowledge about the real world.
To navigate in this complex web environment, an agent needs to learn about the semantic meaning of texts, images and the relationships between them.
Each action corresponds to interacting with the Document Object Model (DOM) from tree-structured HTML.
Tasks like finding a friend on a social network, clicking an interesting link, and rating a place on Google Maps can be framed as accessing a particular DOM element and modifying its value with the user input.In contrast to Atari games, the difficulty of web tasks comes from their diversity, large action space, and sparse reward signals.
A common solution is for the agent to mimic expert demonstrations via imitation learning, as in previous works BID15 BID10 .
BID10 achieved state-of-the-art performance with very few expert demonstrations on the MiniWoB BID15 benchmark tasks, but their exploration policy requires constrained action sets, hand-crafted with expert knowledge of HTML. In this work, our contribution is to propose a novel architecture, DOM-Q-NET, that parametrizes factorized Q functions for web navigation, which can be trained to match or outperform existing work on MiniWoB without using any expert demonstration.
A Graph Neural Network BID13 BID9 BID7 is used as the main backbone to provide three levels of state and action representations. In particular, our model uses the neural message passing and the readout BID3 of the local DOM representations to produce neighbor and global representations for the web page. We also propose to use three separate multilayer perceptrons (MLPs) BID12 to parametrize a factorized Q function for different action categories: "click", "type" and "mode".
The entire architecture is fully differentiable, and all of its components are jointly trained. Moreover, we evaluate our model on multitask learning of web navigation tasks, and demonstrate the transferability of learned behaviors on the web interface.
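The factorization into per-category Q-heads over GNN-produced DOM embeddings can be sketched as below. The layer sizes, the way "type" tokens are scored, and all names are our assumptions for illustration, not the exact DOM-Q-NET architecture:

```python
import torch
import torch.nn as nn

class FactorizedQ(nn.Module):
    def __init__(self, d_dom, d_goal, n_modes=2, hidden=64):
        super().__init__()
        d_in = d_dom + d_goal
        self.q_click = nn.Sequential(nn.Linear(d_in, hidden), nn.ReLU(), nn.Linear(hidden, 1))
        self.q_type = nn.Sequential(nn.Linear(d_in + d_goal, hidden), nn.ReLU(), nn.Linear(hidden, 1))
        self.q_mode = nn.Sequential(nn.Linear(d_goal, hidden), nn.ReLU(), nn.Linear(hidden, n_modes))

    def forward(self, dom_embs, goal_emb, token_embs):
        # dom_embs: (N, d_dom) per-DOM-element embeddings from the GNN
        # goal_emb: (d_goal,) instruction embedding; token_embs: (T, d_goal)
        n, t = dom_embs.size(0), token_embs.size(0)
        g = goal_emb.expand(n, -1)
        click = self.q_click(torch.cat([dom_embs, g], dim=-1)).squeeze(-1)   # (N,)
        pairs = torch.cat([dom_embs.unsqueeze(1).expand(-1, t, -1),
                           g.unsqueeze(1).expand(-1, t, -1),
                           token_embs.unsqueeze(0).expand(n, -1, -1)], dim=-1)
        type_q = self.q_type(pairs).squeeze(-1)                              # (N, T)
        mode = self.q_mode(goal_emb)                                         # (n_modes,)
        return click, type_q, mode
```

At decision time, the agent would compare the maxima of the three heads and execute the corresponding click or type action on the chosen DOM element.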
To our knowledge, this is the first instance that an RL agent solves multiple tasks in the MiniWoB at once.
We show that the multi-task agent achieves an average of 2x sample efficiency compared to the single-task agent. | Graph-based Deep Q Network for Web Navigation | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:831 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Learning disentangled representations that correspond to factors of variation in real-world data is critical to interpretable and human-controllable machine learning.
Recently, concerns about the viability of learning disentangled representations in a purely unsupervised manner has spurred a shift toward the incorporation of weak supervision.
However, there is currently no formalism that identifies when and how weak supervision will guarantee disentanglement.
To address this issue, we provide a theoretical framework—including a calculus of disentanglement— to assist in analyzing the disentanglement guarantees (or lack thereof) conferred by weak supervision when coupled with learning algorithms based on distribution matching.
We empirically verify the guarantees and limitations of several weak supervision methods (restricted labeling, match-pairing, and rank-pairing), demonstrating the predictive power and usefulness of our theoretical framework.
Many real-world data can be intuitively described via a data-generating process that first samples an underlying set of interpretable factors, and then-conditional on those factors-generates an observed data point.
For example, in image generation, one might first generate the object identity and pose, and then build an image of this object accordingly.
The goal of disentangled representation learning is to learn a representation where each dimensions of the representation measures a distinct factor of variation in the dataset (Bengio et al., 2013) .
Learning such representations that align with the underlying factors of variation may be critical to the development of machine learning models that are explainable or human-controllable (Gilpin et al., 2018; Lee et al., 2019; Klys et al., 2018) .
In recent years, disentanglement research has focused on the learning of such representations in an unsupervised fashion, using only independent samples from the data distribution without access to the true factors of variation (Higgins et al., 2017; Chen et al., 2018a; Kim & Mnih, 2018; Esmaeili et al., 2018) .
However, Locatello et al. (2019) demonstrated that many existing methods for the unsupervised learning of disentangled representations are brittle, requiring careful supervision-based hyperparameter tuning.
To build robust disentangled representation learning methods that do not require large amounts of supervised data, recent work has turned to forms of weak supervision (Chen & Batmanghelich, 2019; Gabbay & Hoshen, 2019) .
Weak supervision can allow one to build models that have interpretable representations even when human labeling is challenging (e.g., hair style in face generation, or style in music generation).
While existing methods based on weaklysupervised learning demonstrate empirical gains, there is no existing formalism for describing the theoretical guarantees conferred by different forms of weak supervision (Kulkarni et al., 2015; Reed et al., 2015; Bouchacourt et al., 2018) .
In this paper, we present a comprehensive theoretical framework for weakly supervised disentanglement, and evaluate our framework on several datasets.
Our contributions are several-fold.
2. We propose a set of definitions for disentanglement that can handle correlated factors and are inspired by many existing definitions in the literature (Higgins et al., 2018; Suter et al., 2018; Ridgeway & Mozer, 2018) .
3. Using these definitions, we provide a conceptually useful and theoretically rigorous calculus of disentanglement.
4. We apply our theoretical framework of disentanglement to analyze the theoretical guarantees of three notable weak supervision methods (restricted labeling, match pairing, and rank pairing) and experimentally verify these guarantees.
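Of the three weak supervision forms, match pairing is the easiest to illustrate: two observations are generated so that they agree on a chosen subset of factors. The toy linear-tanh "renderer" and factor ranges below are our stand-ins, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 5))            # fixed toy "renderer": 5 factors -> 8-dim observation

def render(z):
    return np.tanh(W @ z)

def match_pair(shared=(0, 2)):
    z1 = rng.uniform(-1, 1, 5)
    z2 = rng.uniform(-1, 1, 5)
    z2[list(shared)] = z1[list(shared)]   # the two samples agree on these factors
    return render(z1), render(z2)         # weakly supervised pair (shared factors unlabeled)
```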
In this work, we construct a theoretical framework to rigorously analyze the disentanglement guarantees of weak supervision algorithms.
Our paper clarifies several important concepts, such as consistency and restrictiveness, that have been hitherto confused or overlooked in the existing literature, and provides a formalism that precisely distinguishes when disentanglement arises from supervision versus model inductive bias.
Through our theory and a comprehensive set of experiments, we demonstrated the conditions under which various supervision strategies guarantee disentanglement.
Our work establishes several promising directions for future research.
First, we hope that our formalism and experiments inspire greater theoretical and scientific scrutiny of the inductive biases present in existing models.
Second, we encourage the search for other learning algorithms (besides distribution-matching) that may have theoretical guarantees when paired with the right form of supervision.
Finally, we hope that our framework enables the theoretical analysis of other promising weak supervision methods. | We construct a theoretical framework for weakly supervised disentanglement and conducted lots of experiments to back up the theory. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:832 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Despite the growing interest in continual learning, most of its contemporary works have been studied in a rather restricted setting where tasks are clearly distinguishable, and task boundaries are known during training.
However, if our goal is to develop an algorithm that learns as humans do, this setting is far from realistic, and it is essential to develop a methodology that works in a task-free manner.
Meanwhile, among several branches of continual learning, expansion-based methods have the advantage of eliminating catastrophic forgetting by allocating new resources to learn new data.
In this work, we propose an expansion-based approach for task-free continual learning.
Our model, named Continual Neural Dirichlet Process Mixture (CN-DPM), consists of a set of neural network experts that are in charge of a subset of the data.
CN-DPM expands the number of experts in a principled way under the Bayesian nonparametric framework.
With extensive experiments, we show that our model successfully performs task-free continual learning for both discriminative and generative tasks such as image classification and image generation.
Humans consistently encounter new information throughout their lifetime.
The way the information is provided, however, is vastly different from that of conventional deep learning where each minibatch is iid-sampled from the whole dataset.
Data points adjacent in time can be highly correlated, and the overall distribution of the data can shift drastically as the training progresses.
Continual learning (CL) aims at imitating incredible human's ability to learn from a non-iid stream of data without catastrophically forgetting the previously learned knowledge.
Most CL approaches (Aljundi et al., 2018; 2017; Lopez-Paz & Ranzato, 2017; Kirkpatrick et al., 2017; Rusu et al., 2016; Shin et al., 2017; Yoon et al., 2018) assume that the data stream is explicitly divided into a sequence of tasks that are known at training time.
Since this assumption is far from realistic, task-free CL is more practical and demanding, but it has been largely understudied, with only a few exceptions (Aljundi et al., 2019a; b).
In this general CL, not only is explicit task definition unavailable but also the data distribution gradually shifts without a clear task boundary.
Meanwhile, existing CL methods can be classified into three different categories (Parisi et al., 2019) : regularization, replay, and expansion methods.
Regularization and replay approaches address the catastrophic forgetting by regularizing the update of a specific set of weights or replaying the previously seen data, respectively.
On the other hand, the expansion methods are different from the two approaches in that it can expand the model architecture to accommodate new data instead of fixing it beforehand.
Therefore, the expansion methods can bypass catastrophic forgetting by preventing pre-existing components from being overwritten by the new information.
The critical limitation of prior expansion methods, however, is that the decisions of when to expand and which resource to use heavily rely on explicitly given task definition and heuristics.
In this work, our goal is to propose a novel expansion-based approach for task-free CL.
Inspired by the Mixture of Experts (MoE) (Jacobs et al., 1991) , our model consists of a set of experts, each of which is in charge of a subset of the data in a stream.
The model expansion (i.e., adding more experts) is governed by the Bayesian nonparametric framework, which determines the model complexity by the data, as opposed to the parametric methods that fix the model complexity before training.
We formulate task-free CL as online variational inference of a Dirichlet process mixture model consisting of a set of neural experts; thus, we name our approach the Continual Neural Dirichlet Process Mixture (CN-DPM) model.
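The expansion decision in a Dirichlet process mixture can be sketched with a Chinese-restaurant-process style responsibility computation: an incoming sample is either assigned to an existing expert or to a fresh one, and a new expert is created when the latter wins. The Gaussian expert, the hard assignment, and the fixed new-expert score below are our simplifications, not the exact CN-DPM inference procedure:

```python
import numpy as np

class GaussianExpert:                       # toy expert; CN-DPM uses neural experts
    def __init__(self, mu, sigma=1.0):
        self.mu, self.sigma = mu, sigma
    def log_likelihood(self, x):
        return -0.5 * np.sum((x - self.mu) ** 2) / self.sigma ** 2

def assign_or_expand(x, experts, counts, alpha=1.0, new_expert_loglik=-5.0):
    """counts[k] is the number of points previously assigned to expert k."""
    n = sum(counts)
    scores = [np.log(counts[k] / (n + alpha)) + experts[k].log_likelihood(x)
              for k in range(len(experts))]
    scores.append(np.log(alpha / (n + alpha)) + new_expert_loglik)   # a fresh expert
    k_star = int(np.argmax(scores))
    return k_star, k_star == len(experts)    # chosen expert, whether to expand
```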
We highlight the key contributions of this work as follows.
• We are one of the first to propose an expansion-based approach for task-free CL.
Hence, our model not only prevents catastrophic forgetting but also applies to the setting where no task definition and boundaries are given at both training and test time.
Our model named CN-DPM consists of a set of neural network experts, which are expanded in a principled way built upon the Bayesian nonparametrics that have not been adopted in general CL research.
• Our model can deal with both generative and discriminative tasks of CL.
With several benchmark experiments of CL literature on MNIST, SVHN, and CIFAR 10/100, we show that our model successfully performs multiple types of CL tasks, including image classification and generation.
2 BACKGROUND AND RELATED WORK
2.1 CONTINUAL LEARNING
Parisi et al. (2019) classify CL approaches into three branches: regularization (Kirkpatrick et al., 2017; Aljundi et al., 2018), replay (Shin et al., 2017) and expansion (Aljundi et al., 2017; Rusu et al., 2016; Yoon et al., 2018) methods.
Regularization and replay approaches fix the model architecture before training and prevent catastrophic forgetting by regularizing the change of a specific set of weights or replaying previously learned data.
Hybrids of replay and regularization also exist, such as Gradient Episodic Memory (GEM) (Lopez-Paz & Ranzato, 2017; Chaudhry et al., 2019a) .
On the other hand, methods based on expansion add new network components to learn new data.
Conceptually, such direction has the following advantages compared to the first two:
(i) catastrophic forgetting can be eliminated since new information is not overwritten on pre-existing components and
(ii) the model capacity is determined adaptively depending on the data.
Task-Free Continual Learning.
All the works mentioned above heavily rely on explicit task definition.
However, in real-world scenarios, task definition is rarely given at training time.
Moreover, the data domain may gradually shift without any clear task boundary.
Despite its importance, taskfree CL has been largely understudied; to the best of our knowledge, there are only a few works (Aljundi et al., 2019a; b; Rao et al., 2019) , each of which is respectively based on regularization, replay, and a hybrid of replay and expansion.
Specifically, Aljundi et al. (2019a) extend MAS (Aljundi et al., 2018) by adding heuristics to determine when to update the importance weights with no task definition.
In their following work (Aljundi et al., 2019b) , they improve the memory management algorithm of GEM (Lopez-Paz & Ranzato, 2017) such that the memory elements are carefully selected to minimize catastrophic forgetting.
While focused on unsupervised learning, Rao et al. (2019) is a parallel work that shares several similarities with our method, e.g., model expansion and short-term memory.
However, due to their model architecture, expansion is not enough to stop catastrophic forgetting; consequently, generative replay plays a crucial role in Rao et al. (2019) .
As such, it can be categorized as a hybrid of replay and expansion.
More detailed comparison between our method and Rao et al. (2019) is deferred to Appendix M.
In this work, we formulated expansion-based task-free CL as learning of a Dirichlet process mixture model with neural experts.
We demonstrated that the proposed CN-DPM model achieves great performance in multiple task-free settings, better than the existing methods.
We believe there are several interesting research directions beyond this work:
(i) improving the accuracy of expert selection, which is the main bottleneck of our method, and
(ii) applying our method to different domains such as natural language processing and reinforcement learning. | We propose an expansion-based approach for task-free continual learning for the first time. Our model consists of a set of neural network experts and expands the number of experts under the Bayesian nonparametric principle. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:833 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Stochastic Gradient Descent (SGD) with Nesterov's momentum is a widely used optimizer in deep learning, which is observed to have excellent generalization performance.
However, due to the large stochasticity, SGD with Nesterov's momentum is not robust, i.e., its performance may deviate significantly from the expectation.
In this work, we propose Amortized Nesterov's Momentum, a special variant of Nesterov's momentum which has more robust iterates, faster convergence in the early stage and higher efficiency.
Our experimental results show that this new momentum achieves similar (sometimes better) generalization performance with little-to-no tuning.
In the convex case, we provide optimal convergence rates for our new methods and discuss how the theorems explain the empirical results.
In recent years, Gradient Descent (GD) (Cauchy, 1847) and its variants have been widely used to solve large scale machine learning problems.
Among them, Stochastic Gradient Descent (SGD) (Robbins & Monro, 1951) , which replaces gradient with an unbiased stochastic gradient estimator, is a popular choice of optimizer especially for neural network training which requires lower precision.
Sutskever et al. (2013) found that using SGD with Nesterov's momentum (Nesterov, 1983; 2013b) , which was originally designed to accelerate deterministic convex optimization, achieves substantial speedups for training neural networks.
This finding essentially turns SGD with Nesterov's momentum into the benchmarking method of neural network design, especially for classification tasks (He et al., 2016b; a; Zagoruyko & Komodakis, 2016; Huang et al., 2017) .
It is observed that in these tasks, the momentum technique plays a key role in achieving good generalization performance.
Adaptive methods (Duchi et al., 2011; Kingma & Ba, 2015; Tieleman & Hinton, 2012; Reddi et al., 2018) , which are also becoming increasingly popular in the deep learning community, diagonally scale the gradient to speed up training.
However, Wilson et al. (2017) show that these methods always generalize poorly compared with SGD with momentum (both classical momentum (Polyak, 1964 ) and Nesterov's momentum).
In this work, we introduce Amortized Nesterov's Momentum, which is a special variant of Nesterov's momentum.
From users' perspective, the new momentum has only one additional integer hyper-parameter m to choose, which we call the amortization length.
Learning rate and momentum parameter of this variant are strictly aligned with Nesterov's momentum and by choosing m = 1, it recovers Nesterov's momentum.
This paper conducts an extensive study based on both empirical evaluation and convex analysis to identify the benefits of the new variant (or from users' angle, to set m apart from 1).
We list the advantages of Amortized Nesterov's Momentum as follows:
• Increasing m improves robustness.
This is an interesting property since the new momentum not only provides acceleration, but also enhances the robustness.
We provide an understanding of this property by analyzing the relation between convergence rate and m in the convex setting.
• Increasing m reduces (amortized) iteration complexity.
• A suitably chosen m boosts the convergence rate in the early stage of training and produces comparable final generalization performance.
• It is easy to tune m.
The performances of the methods are stable for a wide range of m and we prove that the methods converge for any valid choice of m in the convex setting.
• If m is not too large, the methods obtain the optimal convergence rate in general convex setting, just like Nesterov's method.
The new variant does have some minor drawbacks: it requires one more memory buffer, which is acceptable in most cases, and it shows some undesired behaviors when working with learning rate schedulers, which can be addressed by a small modification.
Considering these pros and cons, we believe that the proposed variant can benefit many large-scale deep learning tasks.
Our high-level idea is simple: the stochastic Nesterov's momentum can be unreliable since it is provided only by the previous stochastic iterate.
The iterate potentially has large variance, which may lead to a false momentum that perturbs the training process.
We thus propose to use the stochastic Nesterov's momentum based on several past iterates, which provides robust acceleration.
In other words, instead of immediately using an iterate to provide momentum, we put the iterate into an "amortization plan" and use it later.
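As a rough illustration of this amortization idea, the sketch below refreshes the momentum direction only once every m steps, from a stored snapshot of a past iterate, so that a single noisy iterate cannot immediately perturb it. The update form, the snapshot schedule, and the hyper-parameter values are assumptions for illustration; they are not the exact AM1-SGD or AM2-SGD updates given in the paper.

```python
import numpy as np

def amortized_momentum_sgd(grad, x0, lr=0.1, beta=0.9, m=5, steps=100):
    """Toy sketch: the momentum buffer is refreshed from a snapshot taken m
    steps earlier, so it reflects several past iterates rather than one."""
    x = np.array(x0, dtype=float)
    snapshot = x.copy()            # iterate stored by the "amortization plan"
    momentum = np.zeros_like(x)
    for t in range(1, steps + 1):
        x = x - lr * grad(x) + beta * momentum
        if t % m == 0:                         # amortized momentum refresh
            momentum = (x - snapshot) / m      # average movement over m steps
            snapshot = x.copy()
    return x

# Example: noisy quadratic f(x) = 0.5 * ||x||^2 with gradient noise.
rng = np.random.default_rng(0)
noisy_grad = lambda x: x + 0.1 * rng.standard_normal(x.shape)
print(amortized_momentum_sgd(noisy_grad, x0=[5.0, -3.0]))
```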
We presented Amortized Nesterov's Momentum, which is a special variant of Nesterov's momentum that utilizes several past iterates to provide the momentum.
Based on this idea, we designed two different realizations, namely, AM1-SGD and AM2-SGD.
Both of them are simple to implement with little-to-no additional tuning overhead over M-SGD.
Our empirical results demonstrate that switching to AM1-SGD and AM2-SGD produces faster early convergence and comparable final generalization performance.
AM1-SGD is lightweight and has more robust iterates than M-SGD, and thus can serve as a favorable alternative to M-SGD in large-scale deep learning tasks.
AM2-SGD could be favorable for more restrictive tasks (e.g., asynchronous training) due to its extensibility and good performance.
Both the methods are proved optimal in the convex case, just like M-SGD.
Based on the intuition from convex analysis, the proposed methods are trading acceleration for variance control, which provides hints for the hyper-parameter tuning.
We discuss the issues with learning rate schedulers in Appendix A.4.
We report the test accuracy results of the ResNet18 experiment (in Section 4) in Appendix A.5.
A CIFAR-100 experiment is provided in Appendix A.6.
We also provide a sanity check for our implementation in Appendix A.7.
Table 4: Final test accuracy and average accuracy STD of training ResNet34 on CIFAR-10 over 5 runs (including the detailed data of the curves in Figure 1 and Figure 2a); for all the methods, η0 = 0.1 and β = 0.9, and multiple runs start with the same x0.
We show in Figure 6 how m affects the convergence of test accuracy.
The results show that increasing m speeds up the convergence in the early stage.
While for AM1-SGD the convergences of Option I and Option II are similar, AM2-SGD with Option II is consistently better than with Option I in this experiment.
It seems that AM2-SGD with Option I does not benefit from increasing m and the algorithm is not robust.
Thus, we do not recommend using Option I for AM2-SGD.
(Figure caption fragment: detailed data are given in Table 4; labels are formatted as 'AM1/2-SGD-{Option}-{m}'; best viewed in color.)
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:834 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
A state-of-the-art generative model, a "factorized action variational autoencoder (FAVAE)," is presented for learning disentangled and interpretable representations from sequential data via the information bottleneck without supervision.
The purpose of disentangled representation learning is to obtain interpretable and transferable representations from data.
We focused on the disentangled representation of sequential data because there is a wide range of potential applications if disentanglement representation is extended to sequential data such as video, speech, and stock price data.
Sequential data is characterized by dynamic factors and static factors: dynamic factors are time-dependent, and static factors are independent of time.
Previous works succeed in disentangling static factors and dynamic factors by explicitly modeling the priors of latent variables to distinguish between static and dynamic factors.
However, this type of model cannot disentangle representations between dynamic factors, such as disentangling "picking" and "throwing" in robotic tasks.
In this paper, we propose a new model that can disentangle multiple dynamic factors.
Since our method does not require modeling priors, it is capable of disentangling "between" dynamic factors.
In experiments, we show that FAVAE can extract the disentangled dynamic factors.
Representation learning is one of the most fundamental problems in machine learning.
A real world data distribution can be regarded as a low-dimensional manifold in a high-dimensional space BID3 .
Generative models in deep learning, such as the variational autoencoder (VAE) BID25 and the generative adversarial network (GAN) BID15 , are able to learn low-dimensional manifold representation (factor) as a latent variable.
The factors are fundamental components such as position, color, and degree of smiling in an image of a human face BID27 .
Disentangled representation is defined as a single factor being represented by a single latent variable BID3 .
Thus, in a model that has learned a disentangled representation, shifting one latent variable while leaving the others fixed generates data showing that only the corresponding factor was changed.
This is called a latent traversal (a good demonstration of which was given by BID17).
There are two advantages of disentangled representation.
First, latent variables are interpretable.
Second, the disentangled representation is generalizable and robust against adversarial attacks BID1.
We focus on disentangled representation learning for sequential data.
Sequential data is characterized by dynamic factors and static factors: dynamic factors are time dependent, and static factors are independent of time.
With disentangled representation learning from sequential data, we should be able to extract dynamic factors that cannot be extracted by disentangled representation learning models for non-sequential data such as β-VAE (BID17b) and InfoGAN (BID8).
The concept of disentangled representation learning for sequential data is illustrated in Fig. 1.
Consider that the pseudo-dataset of the movement of a submarine has a dynamic factor: the trajectory shape.
The disentangled representation learning model for sequential data can extract this shape.
On the other hand, since the disentangled representation learning model for non-sequential data does not consider the sequence of data, it merely extracts the x-position and y-position.
Figure 1: Illustration of how FAVAE differs from β-VAE. β-VAE does not accept data sequentially; it cannot differentiate data points from different trajectories or sequences of data points. FAVAE considers a sequence of data points, taking all data points in a trajectory as one datum. For example, for a pseudo-dataset representing the trajectory of a submarine (1a, 1c), β-VAE accepts 11 different positions of the submarine as non-sequential data while FAVAE accepts three different trajectories of the submarine as sequential data. Therefore, the latent variable in β-VAE learns only the coordinates of the submarine, and the latent traversal shows the change in the submarine's position. On the other hand, FAVAE learns the factor that controls the trajectory of the submarine, so the latent traversal shows the change in the submarine's trajectory.
There is a wide range of potential applications if we extend disentangled representation to sequential data such as speech, video, and stock market data.
For example, disentangled representation learning for stock price data can extract the fundamental trend of a given stock price.
Another application is the reduction of the action space in reinforcement learning.
Extracting dynamic factors would enable the generation of macro-actions BID11, which are sets of sequential actions that represent the fundamental factors of the actions.
Thus, disentangled representation learning for sequential data opens the door to new areas of research.
Very recent related work (BID22; BID26) separated factors of sequential data into dynamic and static factors.
The factorized hierarchical variational autoencoder (FHVAE) (BID22) is based on a graphical model using latent variables with different time dependencies.
By maximizing the variational lower bound of the graphical model, the FHVAE separates the different time-dependent factors such as the dynamic and static factors.
The VAE architecture developed by BID26 is the same as the FHVAE in terms of the time dependencies of the latent variables.
Since these models require different time dependencies for the latent variables, these approaches cannot be used to disentangle variables with the same time dependency factor.
We address this problem by taking a different approach.
First, we analyze the root cause of disentanglement from the perspective of information theory.
As a result, the term causing disentanglement is derived from a more fundamental rule: reduce the mutual dependence between the input and output of an encoder while keeping the reconstruction of the data.
This is called the information bottleneck (IB) principle.
We naturally extend this principle to sequential data, from the relationship between x and z to that between x_{t:T} and z.
This enables the separation of multiple dynamic factors as a consequence of information compression.
It is difficult to learn a disentangled representation of sequential data since not only the feature space but also the time space should be compressed.
We created the factorized action variational autoencoder (FAVAE), in which we implemented the concept of information capacity to stabilize learning and a ladder network to learn a disentangled representation in accordance with the level of data abstraction.
Since our model is a more general model without the restriction of a graphical model design to distinguish between static and dynamic factors, it can separate dependency factors occurring at the same time.
Moreover, it can separate factors into dynamic and static factors.
2 DISENTANGLEMENT FOR NON-SEQUENTIAL DATA
β-VAE (BID17b) is a commonly used method for learning disentangled representations based on the VAE framework (BID25) for a generative model.
The VAE can estimate the probability density from data x.
The objective function of the VAE maximizes the evidence lower bound (ELBO) of log p(x) as
log p(x) = L_VAE + D_KL(q(z|x) || p(z|x)) ≥ L_VAE,
where z is a latent variable, D_KL is the Kullback-Leibler divergence, and q(z|x) is an approximated distribution of p(z|x).
D_KL(q(z|x) || p(z|x)) reduces to zero as the ELBO L_VAE increases; thus, q(z|x) learns a good approximation of p(z|x).
The ELBO is defined as
L_VAE = E_{q(z|x)}[log p(x|z)] − D_KL(q(z|x) || p(z)),
where the first term, E_{q(z|x)}[log p(x|z)], is a reconstruction term used to reconstruct x, and the second term, D_KL(q(z|x) || p(z)), is a regularization term used to regularize the posterior q(z|x).
The encoder q(z|x) and decoder p(x|z) are learned in the VAE.
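A minimal sketch of this objective may help; it implements the reconstruction and KL terms for a diagonal-Gaussian encoder and adds a beta weight for the β-VAE variant discussed next (beta = 1 recovers the plain VAE). The Bernoulli decoder assumption and the toy tensor shapes are illustrative choices, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_recon_logits, mu, logvar, beta=1.0):
    """-( E_q[log p(x|z)] - beta * KL(q(z|x) || N(0, I)) ), to be minimised.
    Assumes a Bernoulli decoder (binary x) and a diagonal-Gaussian encoder."""
    recon = F.binary_cross_entropy_with_logits(x_recon_logits, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl

def reparameterize(mu, logvar):
    """z = mu + sigma * eps, with eps ~ N(0, I)."""
    return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)

# Toy usage with made-up shapes (batch of 8 flattened 28x28 inputs, 10-d latent).
x = torch.rand(8, 784).round()
mu, logvar = torch.zeros(8, 10), torch.zeros(8, 10)
z = reparameterize(mu, logvar)
x_recon_logits = torch.zeros(8, 784)           # stands in for decoder(z)
print(float(beta_vae_loss(x, x_recon_logits, mu, logvar, beta=4.0)))
```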
Next, we explain how β-VAE extracts disentangled representations from unlabeled data.
β-VAE is an extension of the VAE that multiplies the regularization term by a coefficient β > 1:
L_β-VAE = E_{q(z|x)}[log p(x|z)] − β D_KL(q(z|x) || p(z)),
where β > 1 and p(z) = N(0, 1).
β-VAE promotes disentangled representation learning via the Kullback-Leibler divergence term.
As β increases, the latent variable q(z|x) approaches the prior p(z); therefore, each z_i is pressured to learn the probability distribution of N(0, 1).
However, if all latent variables z_i become N(0, 1), the model cannot reconstruct x.
As a result, as long as z reconstructs x, β-VAE reduces the information of z. | We propose a new model that can disentangle multiple dynamic factors in sequential data | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:835 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Generative networks are promising models for specifying visual transformations.
Unfortunately, certification of generative models is challenging as one needs to capture sufficient non-convexity so to produce precise bounds on the output.
Existing verification methods either fail to scale to generative networks or do not capture enough non-convexity.
In this work, we present a new verifier, called ApproxLine, that can certify non-trivial properties of generative networks.
ApproxLine performs both deterministic and probabilistic abstract interpretation and captures infinite sets of outputs of generative networks.
We show that ApproxLine can verify interesting interpolations in the network's latent space.
Neural networks are becoming increasingly used across a wide range of applications, including facial recognition and autonomous driving.
So far, certification of their behavior has remained predominantly focused on uniform classification of norm-bounded balls (Katz et al., 2017; Wong et al., 2018; Gowal et al., 2018; Singh et al., 2018; Raghunathan et al., 2018; Tjeng et al., 2017; Dvijotham et al., 2018b; Salman et al., 2019; Dvijotham et al., 2018c), which aim to capture invisible perturbations.
However, a system's safety can also depend on its behavior on visible transformations.
For these reasons, investigation of techniques to certify more complex specifications has started to take place (Liu et al., 2019; Dvijotham et al., 2018a; Singh et al., 2019) .
Of particular interest is the work of Sotoudeh & Thakur (2019) which shows that if the inputs of a network are restricted to a line segment, the verification problem can sometimes be efficiently solved exactly.
The resulting method has been used to certify non-norm-bounded properties of ACAS Xu networks (Julian et al., 2018) and improve Integrated Gradients (Sundararajan et al., 2017) .
In this work, we extend this technique in two key ways:
(i) we demonstrate how to soundly approximate EXACTLINE, handling significantly larger networks faster than even methods based on sampling can (a form of deterministic abstract interpretation), and
(ii) we use this approximation to provide guaranteed bounds on the probabilities of outputs given a distribution over the inputs (a form of probabilistic abstract interpretation).
We believe this is the first time probabilistic abstract interpretation has been applied in the context of neural networks.
Based on these techniques, we also provide the first system capable of certifying interesting properties of generative networks.
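To make the line-restriction idea concrete, here is a hedged sketch of the EXACTLINE-style computation that this work approximates: a segment between two inputs is pushed through an affine layer followed by a ReLU and split wherever a pre-activation crosses zero, so the output along the segment is exactly piecewise linear. The layer sizes and example endpoints are arbitrary, and this is only an illustration of the underlying idea, not the ApproxLine algorithm or its probabilistic extension.

```python
import numpy as np

def relu_segment(W, b, p, q):
    """Propagate the segment {(1-t)p + t q : t in [0,1]} through x -> relu(Wx+b).
    Returns breakpoints t_0 < ... < t_k and the exact output at each breakpoint;
    between consecutive breakpoints the output is linear in t."""
    u, v = W @ p + b, W @ q + b            # pre-activations at the two endpoints
    ts = {0.0, 1.0}
    for ui, vi in zip(u, v):
        if (ui < 0) != (vi < 0):           # this unit flips sign along the segment
            t = ui / (ui - vi)
            if 0.0 < t < 1.0:
                ts.add(float(t))
    ts = sorted(ts)
    outs = [np.maximum(0.0, (1 - t) * u + t * v) for t in ts]
    return ts, outs

W = np.array([[1.0, -1.0], [0.5, 2.0]])
b = np.array([-0.2, 0.1])
ts, outs = relu_segment(W, b, p=np.array([-1.0, 0.0]), q=np.array([1.0, 1.0]))
print(ts)       # breakpoints where some ReLU changes phase
print(outs)     # exact outputs at those breakpoints
```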
In this paper we presented a highly scalable non-convex relaxation to verify neural network properties where inputs are restricted to a line segment.
Our results show that our method is faster and more precise than previous methods for the same networks, including sampling.
This speed and precision permitted us to verify properties based on interesting visual transformations induced by generative networks for the first time, including probabilistic properties. | We verify deterministic and probabilistic properties of neural networks using non-convex relaxations over visible transformations specified by generative models | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:836 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Multi-view video summarization (MVS) lacks researchers’ attention due to their major challenges of inter-view correlations and overlapping of cameras.
Most of the prior MVS works are offline, relying on only summary, needing extra communication bandwidth and transmission time with no focus on uncertain environments.
Different from the existing methods, we propose edge intelligence based MVS and spatio-temporal features based activity recognition for IoT environments.
We segment the multi-view videos on each slave device over edge into shots using light-weight CNN object detection model and compute mutual information among them to generate summary.
Our system does not rely on summary only but encode and transmit it to a master device with neural computing stick (NCS) for intelligently computing inter-view correlations and efficiently recognizing activities, thereby saving computation resources, communication bandwidth, and transmission time.
Experiments report an increase of 0.4 in F-measure on the MVS Office dataset as well as 0.2% and 2% increases in activity recognition accuracy on the UCF-50 and YouTube 11 datasets, respectively, with lower storage and transmission time compared to the state-of-the-art.
The time complexity decreases from 1.23 to 0.45 seconds for processing a single frame, generating the MVS 0.75 seconds faster.
Furthermore, we made a new dataset by synthetically adding fog to an MVS dataset to show the adaptability of our system for both certain and uncertain surveillance environments.
Surveillance cameras installed indoors and outdoors at offices, public places, and roads generate a huge amount of video data on a daily basis.
This gigantic volume of data raises two big issues: the first is storage consumption, and the second is the huge computational complexity of putting the data to purposeful use.
Video summarization addresses these problems by condensing the data, extracting key information from lengthy videos and suppressing redundant frames.
A video summary generated from a single camera is called single-view video summarization (SVS) (Mahasseni et al., 2017) .
On the other hand, a summary generated from a camera network is known as MVS (Panda et al., 2016a) .
SVS is intensively researched with applications to various domains including surveillance (Wang et al., 2017) , sports (Tejero-de Pablos et al., 2018) , and news videos (Wang et al., 2018) .
In contrast, MVS is not studied deeply because of several challenges such as computing inter-and intra-view correlations, overlapping regions among connected cameras, and variation in light conditions among different views.
The basic flow of MVS includes input acquisition, preprocessing, feature extraction, post-processing, and summary generation.
Mainstream MVS methods follow traditional machine learning approaches, such as clustering over low-level features extracted from the entire frame, with no focus on specific targets in surveillance.
The most important part of MVS is considering different objects in surveillance that can be useful for summary generation.
However, the existing techniques do not focus on objects such as persons and vehicles while generating summary.
Thus, the final summary may miss some important frames having persons or vehicles that need to be considered for MVS.
Furthermore, all the existing techniques rely only on MVS with no further steps for analysis of the generated summary.
For instance, the generated summary can be used for indexing, browsing, and activity recognition.
The existing methods are functional only in certain environments with no focus on uncertain scenarios (Min et al., 2019) , making them inadequate in real-world environments.
Finally, all the existing methods process data on local or online servers or personal computers with huge computation power.
This requires extra processing time and transmission power, and it does not guarantee a quick response to abnormal situations if the data are not handled on the edge.
To ensure proper and quick responsive arrangements, activity recognition at the edge is a necessary requirement of the current technological era.
The activity recognition literature is mature, but it has paid little attention to processing on the edge.
Almost all existing techniques classify activities on computationally powerful local or cloud servers.
Classifying activities on the edge is therefore an important surveillance task in smart cities.
Therefore, to tackle these challenges effectively, we present a novel framework applicable in both certain and uncertain environments for MVS and activity recognition over the edge.
Figure 1: Input and output flow of our proposed framework.
(a) Video frames (both certain and uncertain environment) from resource constrained devices.
(b) Annotate frames by detecting objects of interest, apply keyframes selection mechanism, generate summary, encode and transmit it to master device.
(c) Decode generated summary, perform features extraction, and forward it to activity prediction model at master device to get the output class with probability score.
The problems aimed in this paper are different from the schemes presented in existing literature.
We integrated two different domains including MVS and activity recognition under the umbrella of a unified framework in an IoT environment.
We presented interconnected resource-constrained IoT devices working together to achieve several targets, i.e., object detection, summary generation, and activity recognition, as shown in Figure 1.
The overall framework consists of numerous slaves and a master resource constrained device connected through a common wireless sensor network (WSN).
The slave devices are equipped with a camera to capture multi-view video data, segment it into shots, generate summary, encode a sequence of keyframes, and transmit it to the master device.
The master device is equipped with an INTEL Movidius NCS to classify the ongoing activity in the acquired sequence.
The INTEL Movidius is a modular deep learning accelerator in a standard USB 3.0 stick.
It contains a Vision Processing Unit (VPU) that operates at ultra-low power while maintaining good performance.
It enables activity recognition with significantly lower power, storage, and computational cost.
Further, a widely used concept of temporal point processing (Xiao et al., 2019) is utilized for activity classification, ensuring an effective recognition model.
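Because the mutual-information step of the summary generation is described only at a high level here, the following is a hedged sketch of one way such a redundancy criterion could be computed from grey-level histograms of candidate frames; the histogram features, bin count, and threshold are assumptions rather than the authors' exact design.

```python
import numpy as np

def mutual_information(frame_a, frame_b, bins=32):
    """MI between the grey-level distributions of two frames (higher = more redundant)."""
    hist_2d, _, _ = np.histogram2d(frame_a.ravel(), frame_b.ravel(), bins=bins)
    pxy = hist_2d / hist_2d.sum()
    px, py = pxy.sum(axis=1, keepdims=True), pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def select_keyframes(frames, threshold=0.5):
    """Keep a frame only if it is not too redundant w.r.t. the last kept keyframe."""
    keyframes = [frames[0]]
    for f in frames[1:]:
        if mutual_information(keyframes[-1], f) < threshold:
            keyframes.append(f)
    return keyframes

rng = np.random.default_rng(0)
shot = [rng.integers(0, 256, size=(64, 64)).astype(float) for _ in range(5)]
shot += [shot[-1] + rng.normal(0, 1, size=(64, 64))]        # near-duplicate frame
print(len(select_keyframes(shot)))                           # the duplicate is dropped
```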
While addressing the problems in MVS and activity recognition over resource constrained devices, we made the following contributions.
• Employing an algorithm for MVS on resource-constrained devices that reduces time complexity compared to existing approaches while achieving higher accuracy.
The generated summary is further utilized to recognize the underlying activity of all the views through an auto-encoder and learned spatio-temporal features followed by different variants of SVM classifiers to demonstrate the efficiency and effectiveness of our proposed framework.
• Adding uncertainties such as fog to an outdoor MVS benchmark dataset to demonstrate the working of proposed framework in any type of scenario and introduce a new trend in MVS literature for researchers.
• The presented framework has high-level adaptability with special care for the capacity and traffic of WSN.
It has many flavors with tradeoff among transmission time, quality of keyframes, and accuracy of activity recognition model with computationally different classifiers.
In the subsequent sections, Section 2 provides a literature review and Section 3 explains the presented framework in detail.
In Section 4, experimental results for MVS and activity recognition are given, and Section 5 concludes the overall paper with future directions.
In this paper, we integrated MVS and activity recognition under an umbrella of a unified framework.
A complete setup, including slaves and a master resource-constrained device working independently in an IoT environment, is presented.
The hardware requirements include slave devices equipped with camera and wireless sensors, a master device with Intel Movidius NCS for running optimized deep learning models on the edge.
The slave devices capture multi-view video data, detect objects, extract features, compute mutual information, and finally generate summary.
The generated summary is received at master device with optimized trained model for activity recognition.
The MVS algorithm as well activity recognition models presented in this paper outperform state-of-the-art.
In the future, we intend to extend this work by deeply investigating multi-view action recognition algorithms with different parameters and configurations in resource-constrained environments.
Further, we want to explore spiking neural networks, used for various tasks [40, 41], in our framework for spatio-temporal feature extraction in support of activity recognition.
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:837 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
A central goal in the study of the primate visual cortex and hierarchical models for object recognition is understanding how and why single units trade off invariance versus sensitivity to image transformations.
For example, in both deep networks and visual cortex there is substantial variation from layer-to-layer and unit-to-unit in the degree of translation invariance.
Here, we provide theoretical insight into this variation and its consequences for encoding in a deep network.
Our critical insight comes from the fact that rectification simultaneously decreases response variance and correlation across responses to transformed stimuli, naturally inducing a positive relationship between invariance and dynamic range.
Invariant input units then tend to drive the network more than those sensitive to small image transformations.
We discuss consequences of this relationship for AI: deep nets naturally weight invariant units over sensitive units, and this can be strengthened with training, perhaps contributing to generalization performance.
Our results predict a signature relationship between invariance and dynamic range that can now be tested in future neurophysiological studies.
Invariances to image transformations, such as translation and scaling, have been reported in single units in visual cortex, but just as often sensitivity to these transformations has been found (El-Shamayleh and Pasupathy, 2016; Sharpee et al., 2013; Rust and DiCarlo, 2012).
Similarly, in deep networks there is variation in translation invariance both within and across layers (Pospisil et al., 2018; Shen et al., 2016; Shang et al., 2016; Goodfellow et al., 2009).
Notionally, information about the position of the features composing objects may be important to category selectivity.
For example, the detection of eyes, nose, and lips are not sufficient for face recognition, the relative positions of these parts must also be encoded.
Thus it is reasonable to expect some balance between invariance and sensitivity to position.
We empirically observe that in a popular deep network, in both its trained and untrained state, invariant units tend to have higher dynamic range than sensitive units (Figure 1B and C) .
This raises the possibility that the effective gain on invariant units into the subsequent layer is stronger than that of sensitive units.
Here we provide theoretical insight into how rectification in a deep network could naturally bias the network toward this difference between invariant and sensitive units.
We do this by examining how co-variance of a multivariate normal distribution is influenced by rectification, and we then test these insights in a deep neural network.
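The claimed link between rectification, invariance, and dynamic range can be illustrated with a short simulation: a unit's pre-activations to an image and its transformed copy are modelled as a correlated Gaussian pair and then rectified. Sweeping the pre-activation mean is used here as a stand-in for how strongly different units are rectified; the correlation value, the mean grid, and the sample size are arbitrary choices rather than values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def post_relu_stats(mean, rho=0.8, n=200_000):
    """A unit's pre-activations to an image and its transformed copy are modelled
    as a correlated Gaussian pair with the given mean; both are then rectified."""
    cov = [[1.0, rho], [rho, 1.0]]
    x = rng.multivariate_normal([mean, mean], cov, size=n)
    r = np.maximum(x, 0.0)                       # ReLU
    return r[:, 0].std(), np.corrcoef(r[:, 0], r[:, 1])[0, 1]

# Units that are rectified more (lower mean) lose both variance and correlation,
# so invariance (correlation across transforms) and dynamic range move together.
for mean in [-1.0, -0.5, 0.0, 0.5, 1.0]:
    sd, corr = post_relu_stats(mean)
    print(f"mean {mean:+.1f}: post-ReLU std {sd:.2f}, transform-correlation {corr:.2f}")
```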
We have documented an empirical relationship between the dynamic range of unrectified units in a deep network and their invariance.
We provided a simple 1st order statistical model to explain this effect in which rectification caused the population representation to primarily vary in dimensions that were invariant to small image perturbations, whereas small perturbations were represented in directions of lower variance.
Further work can investigate whether this imbalance improves generalization because of the emphasis placed on invariant over sensitive units.
We note that this relationship is weaker in the trained network than in the untrained network; further work can help understand this difference.
Our approximations assumed low covariance between input units and homogeneous input variance; while this may be expected in a random network, it may not hold in a trained network.
More crucially, further theoretical work should consider the influence of the covariance between input units and the invariance of output units as a function of the weights.
To extend insights from simplified, artificial networks to neurobiology, it will first of all be important to test whether cortical neurons showing more invariance also tend to have a higher dynamic range.
If they do, this will establish a fundamental theoretical connection between computations of deep networks and the brain. | Rectification in deep neural networks naturally leads them to favor an invariant representation. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:838 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We study the use of the Wave-U-Net architecture for speech enhancement, a model introduced by Stoller et al. for the separation of music vocals and accompaniment.
This end-to-end learning method for audio source separation operates directly in the time domain, permitting the integrated modelling of phase information and being able to take large temporal contexts into account.
Our experiments show that the proposed method improves several metrics, namely PESQ, CSIG, CBAK, COVL and SSNR, over the state-of-the-art with respect to the speech enhancement task on the Voice Bank corpus (VCTK) dataset.
We find that a reduced number of hidden layers is sufficient for speech enhancement in comparison to the original system designed for singing voice separation in music.
We see this initial result as an encouraging signal to further explore speech enhancement in the time-domain, both as an end in itself and as a pre-processing step to speech recognition systems. | The Wave-U-Net architecture, recently introduced by Stoller et al for music source separation, is highly effective for speech enhancement, beating the state of the art. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:839 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Neural networks (NNs) are able to perform tasks that rely on compositional structure even though they lack obvious mechanisms for representing this structure.
To analyze the internal representations that enable such success, we propose ROLE, a technique that detects whether these representations implicitly encode symbolic structure.
ROLE learns to approximate the representations of a target encoder E by learning a symbolic constituent structure and an embedding of that structure into E’s representational vector space.
The constituents of the approximating symbol structure are defined by structural positions — roles — that can be filled by symbols.
We show that when E is constructed to explicitly embed a particular type of structure (e.g., string or tree), ROLE successfully extracts the ground-truth roles defining that structure.
We then analyze a seq2seq network trained to perform a more complex compositional task (SCAN), where there is no ground truth role scheme available.
For this model, ROLE successfully discovers an interpretable symbolic structure that the model implicitly uses to perform the SCAN task, providing a comprehensive account of the link between the representations and the behavior of a notoriously hard-to-interpret type of model.
We verify the causal importance of the discovered symbolic structure by showing that, when we systematically manipulate hidden embeddings based on this symbolic structure, the model’s output is also changed in the way predicted by our analysis.
Finally, we use ROLE to explore whether popular sentence embedding models are capturing compositional structure and find evidence that they are not; we conclude by discussing how insights from ROLE can be used to impart new inductive biases that will improve the compositional abilities of such models.
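As a concrete, hedged reading of "learning a symbolic constituent structure and an embedding of that structure into E's representational vector space", the sketch below approximates a target embedding as a superposition of filler-role bindings, with roles chosen per position by a small assignment network. The element-wise binding, the softmax relaxation, and the dimensionalities are simplifying assumptions and differ from the paper's exact architecture.

```python
import torch
import torch.nn as nn

class RoleApproximator(nn.Module):
    """Approximate E(x) for a length-T symbol sequence as sum_t filler(x_t) * role(t|x),
    a tensor-product-style composition with learned role assignments."""
    def __init__(self, vocab_size, n_roles, dim):
        super().__init__()
        self.fillers = nn.Embedding(vocab_size, dim)
        self.roles = nn.Embedding(n_roles, dim)
        self.assign = nn.LSTM(dim, n_roles, batch_first=True)  # scores a role per position

    def forward(self, tokens):                     # tokens: (batch, T) long tensor
        f = self.fillers(tokens)                   # (batch, T, dim)
        role_scores, _ = self.assign(f)            # (batch, T, n_roles)
        a = role_scores.softmax(dim=-1)            # soft role assignment per position
        r = a @ self.roles.weight                  # (batch, T, dim)
        return (f * r).sum(dim=1)                  # bind and superpose

# Train to match a frozen target encoder E on its own inputs (mean-squared error).
approx = RoleApproximator(vocab_size=20, n_roles=8, dim=32)
tokens = torch.randint(0, 20, (4, 6))
target = torch.randn(4, 32)                        # stands in for E(tokens)
loss = nn.functional.mse_loss(approx(tokens), target)
loss.backward()
print(loss.item())
```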
We have introduced ROLE, a neural network that learns to approximate the representations of an existing target neural network E using an explicit symbolic structure.
ROLE successfully discovers symbolic structure both in models that explicitly define this structure and in an RNN without explicit structure trained on the fully-compositional SCAN task.
When applied to sentence embedding models trained on partially-compositional tasks, ROLE performs better than hand-specified role schemes but still provides little evidence that the sentence encodings represent compositional structure.
Uncovering the latent symbolic structure of NN representations on fully-compositional tasks is a significant step towards explaining how they can achieve the level of compositional generalization that they do, and suggests types of inductive bias to improve such generalization for partially-compositional tasks.
We offer several observations about this algorithm.
1. This algorithm may seem convoluted, but a few observations can illuminate how the roles assigned by such an algorithm support success on the SCAN task.
First, a sequence will contain role 30 if and only if it contains and, and it will contain role 17 if and only if it contains after.
Thus, by implicitly checking for the presence of these two roles (regardless of the fillers bound to them), the decoder can tell whether the output involves one or two basic commands, where the presence of and or after leads to two basic commands and the absence of both leads to one basic command.
Moreover, if there are two basic commands, whether it is role 17 or role 30 that is present can tell the decoder whether the input order of these commands also corresponds to their output order (when it is and in play, i.e., role 30), or if the input order is reversed (when it is after in play, i.e., role 17).
With these basic structural facts established, the decoder can begin to decode the specific commands.
For example, if the input is a sequence with after, it can begin with the command after after, which it can decode by checking which fillers are bound to the relevant roles for that type of command.
It may seem odd that so many of the roles are based on position (e.g., "first word" and "second-to-last word"), rather than more functionally-relevant categories such as "direction word."
However, this approach may actually be more efficient: Each command consists of a single mandatory element (namely, an action word such as walk or jump) followed by several optional modifiers (namely, rotation words, direction words, and cardinalities).
Because most of the word categories are optional, it might be inefficient to check for the presence of, e.g., a cardinality, since many sequences will not have one.
By contrast, every sequence will have a last word, and checking the identity of the last word provides much functionally-relevant information: if that word is not a cardinality, then the decoder knows that there is no cardinality present in the command (because if there were, it would be the last word); and if it is a cardinality, then that is important to know, because the presence of twice or thrice can dramatically affect the shape of the output sequence.
In this light, it is unsurprising that the SCAN encoder has implicitly learned several different roles that essentially mean the last element of a particular subcommand.
2. The algorithm does not constitute a simple, transparent role scheme.
But its job is to describe the representations that the original network produces, and we have no a priori expectation about how complex that process may be.
The role-assignment algorithm implicitly learned by ROLE is interpretable locally (each line is readily expressible in simple English), but not intuitively transparent globally.
We see this as a positive result, in two respects.
First, it shows why ROLE is crucial: no human-generated role scheme would provide a good approximation to this algorithm.
Such an algorithm can only be identified because ROLE is able to use gradient descent to find role schemes far more complex than any we would hypothesize intuitively.
This enables us to analyze networks far more complex than we could analyze previously, being necessarily limited to hand-designed role schemes based on human intuitions about how to perform the task.
Second, when future work illuminates the computation in the original SCAN GRU seq2seq decoder, the baroqueness of the role-assignment algorithm that ROLE has shown to be implicit in the seq2seq encoder can potentially explain certain limitations in the original model, which is known to suffer from severe failures of systematic generalization outside the training distribution (Lake and Baroni, 2018).
It is reasonable to hypothesize that systematic generalization requires that the encoder learn an implicit role scheme that is relatively simple and highly compositional.
Future proposals for improving the systematic generalization of models on SCAN can be examined using ROLE to test the hypothesis that greater systematicity requires greater compositional simplicity in the role scheme implicitly learned by the encoder.
3. While the role-assignment algorithm of A.8.1 may not be simple, from a certain perspective, it is quite surprising that it is not far more complex.
Although ROLE is provided 50 roles to learn to deploy as it likes, it only chooses to use 16 of them (only 16 are ever selected as the arg max(a t ); see Sec. 6.1).
Furthermore, the SCAN grammar generates 20,910 input sequences, containing a total of 151,688 words (an average of 7.25 words per input).
This means that, if one were to generate a series of conditional statements to determine which role is assigned to each word in every context, this could in theory require up to 151,688 conditionals (e.g., "if the filler is 'jump' in the context 'walk thrice after opposite left', then assign role 17").
However, our algorithm involves just 47 conditionals.
This reduction helps explain how the model performs so well on the test set: If it used many more of the 151,688 possible conditional rules, it would completely overfit the training examples in a way that would be unlikely to generalize.
The 47-conditional algorithm we found is more likely to generalize by abstracting over many details of the context.
4. Were it not for ROLE's ability to characterize the representations generated by the original encoder in terms of implicit roles, providing an equally complete and accurate interpretation of those representations would necessarily require identifying the conditions determining the activation level of each of the 100 neurons hosting those representations.
It seems to us grossly overly optimistic to estimate that each neuron's activation level in the representation of a given input could be characterized by a property of the input statable in, say, two lines of roughly 20 words/symbols; yet even then, the algorithm would require 200 lines, whereas the algorithm in A.8.1 requires 47 lines of that scale.
Thus, by even such a crude estimate of the degree of complexity expected for an algorithm describing the representations in terms of neuron activities, the algorithm we find, stated over roles, is 4 times simpler. | We introduce a new analysis technique that discovers interpretable compositional structure in notoriously hard-to-interpret recurrent neural networks. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:84 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Reinforcement learning (RL) is a powerful framework for solving problems by exploring and learning from mistakes.
However, in the context of autonomous vehicle (AV) control, requiring an agent to make mistakes, or even allowing mistakes, can be quite dangerous and costly in the real world.
For this reason, AV RL is generally only viable in simulation.
Because these simulations have imperfect representations, particularly with respect to graphics, physics, and human interaction, we find motivation for a framework similar to RL, suitable to the real world.
To this end, we formulate a learning framework that learns from restricted exploration by having a human demonstrator do the exploration.
Existing work on learning from demonstration typically either assumes that the collected data comes from an optimal expert, or requires potentially dangerous exploration to find the optimal policy.
We propose an alternative framework that learns continuous control from only safe behavior.
One of our key insights is that the problem becomes tractable if the feedback score that rates the demonstration applies to the atomic action, as opposed to the entire sequence of actions.
We use human experts to collect driving data as well as to label the driving data through a framework we call ``Backseat Driver'', giving us state-action pairs matched with scalar values representing the score for the action.
We call the more general learning framework ReNeg, since it learns a regression from states to actions given negative as well as positive examples.
We empirically validate several models in the ReNeg framework, testing on lane-following with limited data.
We find that the best solution in this context outperforms behavioral cloning and has strong connections to stochastic policy gradient approaches.
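The excerpt above does not reproduce the loss functions themselves, so the snippet below shows only one plausible form of "regression with negative examples": each demonstrated action is pulled toward, or pushed away from, the policy's prediction in proportion to its scalar feedback. The exact weighting used by the best-performing ReNeg model may differ.

```python
import torch

def scalar_feedback_loss(pred_action, demo_action, score):
    """pred_action, demo_action: (batch, action_dim); score in [-1, 1] per sample.
    Positive scores imitate the action; negative scores penalise matching it.
    (In practice the repulsive part would typically be bounded or rescaled.)"""
    per_sample = ((pred_action - demo_action) ** 2).sum(dim=1)   # squared error
    return (score * per_sample).mean()        # negative score -> push away

# Toy usage: a linear "policy" mapping 10-d states to 2-d continuous controls.
policy = torch.nn.Linear(10, 2)
states = torch.randn(16, 10)
actions = torch.randn(16, 2)                  # human demonstrator's actions
scores = torch.rand(16) * 2 - 1               # Backseat-Driver-style labels in [-1, 1]
loss = scalar_feedback_loss(policy(states), actions, scores)
loss.backward()
print(loss.item())
```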
We hypothesized that for the task of learning lane following for autonomous vehicles from demonstration, adding in negative examples would improve model performance.
Our scalar loss model performed over 1.5 times as well as the behavioral cloning baseline, showing our hypothesis to be true.
The specific method of regression with negative examples we used allows for learning deterministic continuous control problems from demonstration from any range of good and bad behavior.
Moreover, the loss function that empirically worked the best in this domain does not require an additional neural network to model it, and it induces a stochastic policy gradient that could be used for fine-tuning with RL.
We also introduced a novel way of collecting continuous human feedback for autonomous vehicles intuitively and efficiently, called Backseat Driver.
We thus believe our work could be extremely useful in the autonomous control industry: with no additional real world time, we can increase performance over supervised learning by simply having a backseat driver. | We introduce a novel framework for learning from demonstration that uses continuous human feedback; we evaluate this framework on continuous control for autonomous vehicles. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:840 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We explore the use of Vector Quantized Variational AutoEncoder (VQ-VAE) models for large scale image generation.
To this end, we scale and enhance the autoregressive priors used in VQ-VAE to generate synthetic samples of much higher coherence and fidelity than possible before.
We use simple feed-forward encoder and decoder networks, thus our model is an attractive candidate for applications where the encoding and decoding speed is critical.
Additionally, this allows us to only sample autoregressively in the compressed latent space, which is an order of magnitude faster than sampling in the pixel space, especially for large images.
We demonstrate that a multi-scale hierarchical organization of VQ-VAE, augmented with powerful priors over the latent codes, is able to generate samples with quality that rivals that of state of the art Generative Adversarial Networks on multifaceted datasets such as ImageNet, while not suffering from GAN's known shortcomings such as mode collapse and lack of diversity.
Deep generative models have significantly improved in the past few years [1; 18; 17] .
This is, in part, thanks to architectural innovations as well as computation advances that allow training them at larger scale in both amount of data and model size.
The samples generated from these models are hard to distinguish from real data without close inspection, and their applications range from super-resolution BID14 to domain editing BID31, artistic manipulation BID23, or text-to-speech and music generation BID16.
We distinguish two main types of generative models: maximum likelihood based models, which include VAEs [11; 21], flow based [5; 20; 6; 12] and autoregressive models [14; 27]; and implicit generative models such as Generative Adversarial Networks (GANs) BID7.
Each of these models offers several trade-offs such as sample quality, diversity, speed, etc.
GANs optimize a minimax objective with a generator neural network producing images by mapping random noise onto an image, and a discriminator defining the generator's loss function by classifying its samples as real or fake.
Larger scale GAN models can now generate high-quality and high-resolution images [1; 10].
However, it is well known that samples from these models do not fully capture the diversity of the true distribution.
Furthermore, GANs are challenging to evaluate, and a satisfactory generalization measure on a test set to assess overfitting does not yet exist.
For model comparison and selection, researchers have used image samples or proxy measures of image quality such as Inception Score (IS) BID22 and Fréchet Inception Distance (FID) BID8.
In contrast, likelihood based methods optimize the negative log-likelihood (NLL) of the training data.
This objective allows model comparison and measuring generalization to unseen data.
Additionally, since the probability that the model assigns to all examples in the training set is maximized, likelihood based models, in principle, cover all modes of the data, and do not suffer from the problems of mode collapse and lack of diversity seen in GANs.
In spite of these advantages, directly maximizing likelihood in the pixel space can be challenging.
First, NLL in pixel space is not always a good measure of sample quality BID24, and cannot reliably be used to make comparisons between different model classes.
There is no intrinsic incentive for these models to focus on, for example, global structure.
Some of these issues are alleviated by introducing inductive biases such as multi-scale [26; 27; 19; 16] or by modeling the dominant bit planes in an image [13; 12].
In this paper we use ideas from lossy compression to relieve the generative model from modeling negligible information.
Indeed, techniques such as JPEG BID30 have shown that it is often possible to remove more than 80% of the data without noticeably changing the perceived image quality.
As proposed by BID28, we compress images into a discrete latent space by vector-quantizing intermediate representations of an autoencoder.
These representations are over 30x smaller than the original image, but still allow the decoder to reconstruct the images with little distortion.
The prior over these discrete representations can be modeled with a state-of-the-art PixelCNN [27; 28] with self-attention BID29, called PixelSnail BID2.
When sampling from this prior, the decoded images also exhibit the same high quality and coherence as the reconstructions (see FIG0).
Furthermore, the training and sampling of this generative model over the discrete latent space is also 30x faster than when directly applied to the pixels, allowing us to train on much higher resolution images.
Finally, the encoder and decoder used in this work retain the simplicity and speed of the original VQ-VAE, which means that the proposed method is an attractive solution for situations in which fast, low-overhead encoding and decoding of large images are required.
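Since the vector-quantization step is central to this approach, a brief sketch of that operation may help: each encoder output vector is snapped to its nearest codebook entry, and the straight-through trick copies gradients through the discrete choice. The codebook size, dimensions, and commitment-cost weight below are illustrative values, not the paper's settings.

```python
import torch

def vector_quantize(z_e, codebook, commitment_cost=0.25):
    """z_e: (N, D) encoder outputs; codebook: (K, D) embeddings.
    Returns quantized vectors (with straight-through gradients), indices, and the VQ loss."""
    d = torch.cdist(z_e, codebook)                 # (N, K) pairwise distances
    idx = d.argmin(dim=1)                          # nearest codebook entry per vector
    z_q = codebook[idx]
    # Codebook loss pulls embeddings toward encoder outputs; commitment loss does the reverse.
    loss = ((z_q - z_e.detach()) ** 2).mean() + commitment_cost * ((z_e - z_q.detach()) ** 2).mean()
    z_q = z_e + (z_q - z_e).detach()               # straight-through estimator
    return z_q, idx, loss

codebook = torch.randn(512, 64, requires_grad=True)    # K=512 entries of dimension 64
z_e = torch.randn(1024, 64, requires_grad=True)         # flattened latent map
z_q, idx, vq_loss = vector_quantize(z_e, codebook)
vq_loss.backward()
print(z_q.shape, idx.shape, float(vq_loss))
```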
We propose a simple method for generating diverse high resolution images using VQ-VAE, combining a vector quantized neural representation learning technique inspired by ideas from lossy compression with powerful autoregressive models as priors.
Our encoder and decoder architectures are kept simple and light-weight as in the original VQ-VAE, with the only difference that we propose using hierarchical multi-scale latent maps for larger images.
The improvements seen in the quality of the samples are largely due to the architectural advances in the PixelCNN style priors that more accurately estimate the distribution over the latent space.
In particular, using self-attention seems to be a crucial component for accurately capturing the structure and geometry of objects encoded in the top-level latent map.
We also observe that the quality of our samples is correlated with the improvements in the negative log-likelihood of the model in the latent space, where small gains in likelihood often translate to dramatic improvements in sample quality.
The fidelity of our best class conditional samples are competitive with the state of the art Generative Adversarial Networks, while we see dramatically broader diversity in several classes, contrasting our method against the known limitations of GANs.
We believe our experiments vindicate maximum likelihood in the latent space as a simple and effective objective for learning large scale generative models that do not suffer from the shortcomings of adversarial training. | scale and enhance VQ-VAE with powerful priors to generate near realistic images. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:841 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We present a graph neural network assisted Monte Carlo Tree Search approach for the classical traveling salesman problem (TSP).
We adopt a greedy algorithm framework to construct the optimal solution to TSP by adding the nodes successively.
A graph neural network (GNN) is trained to capture the local and global graph structure and give the prior probability of selecting each vertex every step.
The prior probability provides a heuristics for MCTS, and the MCTS output is an improved probability for selecting the successive vertex, as it is the feedback information by fusing the prior with the scouting procedure.
Experimental results on TSP up to 100 nodes demonstrate that the proposed method obtains shorter tours than other learning-based methods.
The Traveling Salesman Problem (TSP) is a classical combinatorial optimization problem with many practical applications in real life, such as planning, manufacturing, and genetics (Applegate et al., 2006b).
The goal of TSP is to find the shortest route that visits each city once and ends in the origin city, which is well-known as an NP-hard problem (Papadimitriou, 1977) .
In the literature, approximation algorithms were proposed to solve TSP (Lawler et al., 1986; Goodrich & Tamassia, 2015) .
In particular, many heuristic search algorithms were made to find a satisfactory solution within a reasonable time.
However, the performance of heuristic algorithms depends on handcrafted heuristics to guide the search procedure to find competitive tours efficiently, and the design of heuristics usually requires substantial expertise of the problem (Johnson & McGeoch, 1997; Dorigo & Gambardella, 1997) .
Recent advances in deep learning provide a powerful way of learning effective representations from data, leading to breakthroughs in many fields such as speech recognition (Lecun et al., 2015) .
Efforts of the deep learning approach to tackling TSP has been made under the supervised learning and reinforcement learning frameworks.
Vinyals et al. (Vinyals et al., 2015) introduced a pointer network based on the Recurrent Neural Network (RNN) to model the stochastic policy that assigns high probabilities to short tours given an input set of coordinates of vertices.
Dai et al. (Dai et al., 2017) tackled the difficulty of designing heuristics by Deep Q-Network (DQN) based on structure2vec (Dai et al., 2016b) , and a TSP solution was constructed incrementally by the learned greedy policy.
Most recently, Kool et al. (Kool et al., 2019) used Transformer-Pointer Network (Vaswani et al., 2017) to learn heuristics efficiently and got close to the optimal TSP solution for up to 100 vertices.
These efforts made it possible to solve TSP by an end-to-end heuristic algorithm without special expert skills and complicated feature design.
In this paper, we present a new approach to solving TSP.
Our approach combines the deep neural network with the Monte Carlo Tree Search (MCTS), so that takes advantage of the powerful feature representation and scouting exploration.
A graph neural network (GNN) is trained to capture the local and global graph structure and predict the prior probability, for each vertex, of whether this vertex belongs to the partial tour.
Besides node features, we integrate edge information into each update-layer in order to extract features efficiently from the problem whose solution relies on the edge weight.
Similar to the learned heuristic approaches above, we could greedily select the vertex with the highest prior probability, yet the algorithm may fall into a local optimum because it has only one shot to compute the optimal tour and never goes back to reverse a decision.
To overcome this problem, we introduce a graph neural network assisted Monte Carlo Tree Search (GNN-MCTS) to make the decision more reliable by a large number of scouting simulations.
The trained GNN is used to guide the MCTS procedure, which effectively reduces the complexity of the search space, and MCTS provides a more reliable policy that avoids getting stuck in a local optimum.
Experimental results on TSP up to 100 vertices demonstrate that the proposed method obtains shorter tours than other learning-based methods.
The remainder of the paper is organized as follows: After reviewing related work in Section 2, we briefly give a preliminary introduction to TSP in Section 3. Our approach is formulated in Section 4. Experimental results are given in Section 5, followed by the conclusion in Section 6.
We proposed a graph neural network assisted Monte Carlo Tree Search (GNN-MCTS) for the classical traveling salesman problem.
The core idea of our approach lies in converting the TSP into a tree search problem.
To capture the local and global graph structure, we train a graph neural network (GNN) which integrates node feature and edge weight into the feature update process.
Instead of using the prior probability output by GNN in a greedy way, we designed a GNN-MCTS to provide scouting simulation so that the algorithm could avoid being stuck into the local optimum.
The experimental results show that the proposed approach can obtain shorter tours than other learning-based methods.
We see the presented work as a step towards a new family of solvers for NP-hard problems that leverage both deep learning and classic heuristics.
We will release code to support future progress in this direction. | A Graph Neural Network Assisted Monte Carlo Tree Search Approach to Traveling Salesman Problem | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:842 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Graph neural networks have recently achieved great successes in predicting quantum mechanical properties of molecules.
These models represent a molecule as a graph using only the distance between atoms (nodes) and not the spatial direction from one atom to another.
However, directional information plays a central role in empirical potentials for molecules, e.g. in angular potentials.
To alleviate this limitation we propose directional message passing, in which we embed the messages passed between atoms instead of the atoms themselves.
Each message is associated with a direction in coordinate space.
These directional message embeddings are rotationally equivariant since the associated directions rotate with the molecule.
We propose a message passing scheme analogous to belief propagation, which uses the directional information by transforming messages based on the angle between them.
Additionally, we use spherical Bessel functions to construct a theoretically well-founded, orthogonal radial basis that achieves better performance than the currently prevalent Gaussian radial basis functions while using more than 4x fewer parameters.
We leverage these innovations to construct the directional message passing neural network (DimeNet).
DimeNet outperforms previous GNNs on average by 77% on MD17 and by 41% on QM9.
In recent years scientists have started leveraging machine learning to reduce the computation time required for predicting molecular properties from a matter of hours and days to mere milliseconds.
With the advent of graph neural networks (GNNs) this approach has recently experienced a small revolution, since they do not require any form of manual feature engineering and significantly outperform previous models .
GNNs model the complex interactions between atoms by embedding each atom in a high-dimensional space and updating these embeddings by passing messages between atoms.
By predicting the potential energy these models effectively learn an empirical potential function.
Classically, these functions have been modeled as the sum of four parts (Leach, 2001):
E = E_bonds + E_angle + E_torsion + E_non-bonded,    (1)
where E_bonds models the dependency on bond lengths, E_angle on the angles between bonds, E_torsion on bond rotations, i.e. the dihedral angle between two planes defined by pairs of bonds, and E_non-bonded models interactions between unconnected atoms, e.g. via electrostatic or van der Waals interactions.
The update messages in GNNs, however, only depend on the previous atom embeddings and the pairwise distances between atoms -not on directional information such as bond angles and rotations.
Thus, GNNs lack the second and third terms of this equation and can only model them via complex higher-order interactions of messages.
Extending GNNs to model them directly is not straightforward since GNNs solely rely on pairwise distances, which ensures their invariance to translation, rotation, and inversion of the molecule, which are important physical requirements.
In this paper, we propose to resolve this restriction by using embeddings associated with the directions to neighboring atoms, i.e. by embedding atoms as a set of messages.
These directional message embeddings are equivariant with respect to the above transformations since the directions move with the molecule.
Hence, they preserve the relative directional information between neighboring atoms.
We propose to let message embeddings interact based on the distance between atoms and the angle between directions.
Both distances and angles are invariant to translation, rotation, and inversion of the molecule, as required.
Additionally, we show that the distance and angle can be jointly represented in a principled and effective manner by using spherical Bessel functions and spherical harmonics.
We leverage these innovations to construct the directional message passing neural network (DimeNet).
DimeNet can learn both molecular properties and atomic forces.
It is twice continuously differentiable and solely based on the atom types and coordinates, which are essential properties for performing molecular dynamics simulations.
DimeNet outperforms previous GNNs on average by 76 % on MD17 and by 31 % on QM9.
Our paper's main contributions are:
1. Directional message passing, which allows GNNs to incorporate directional information by connecting recent advances in the fields of equivariance and graph neural networks as well as ideas from belief propagation and empirical potential functions such as Eq. 1.
2. Theoretically principled orthogonal basis representations based on spherical Bessel functions and spherical harmonics.
Bessel functions achieve better performance than Gaussian radial basis functions while reducing the radial basis dimensionality by 4x or more.
3. The Directional Message Passing Neural Network (DimeNet): A novel GNN that leverages these innovations to set the new state of the art for molecular predictions and is suitable both for predicting molecular properties and for molecular dynamics simulations.
In this work we have introduced directional message passing, a more powerful and expressive interaction scheme for molecular predictions.
Directional message passing enables graph neural networks to leverage directional information in addition to the interatomic distances that are used by normal GNNs.
We have shown that interatomic distances can be represented in a principled and effective manner using spherical Bessel functions.
We have furthermore shown that this representation can be extended to directional information by leveraging 2D spherical Fourier-Bessel basis functions.
We have leveraged these innovations to construct DimeNet, a GNN suitable both for predicting molecular properties and for use in molecular dynamics simulations.
We have demonstrated DimeNet's performance on QM9 and MD17 and shown that our contributions are the essential ingredients that enable DimeNet's state-of-the-art performance.
DimeNet directly models the first two terms in Eq. 1, which are known as the important "hard" degrees of freedom in molecules (Leach, 2001) .
Future work should aim at also incorporating the third and fourth terms of this equation.
This could improve predictions even further and enable the application to molecules much larger than those used in common benchmarks like QM9.
Figure 6 : A standard non-directional GNN cannot distinguish between a hexagonal (left) and two triangular molecules (right) with the same bond lengths, since the neighborhood of each atom is exactly the same.
An example of this would be Cyclohexane and two Cyclopropane molecules with slightly stretched bonds, when the GNN either uses the molecular graph or a cutoff distance of c ≤ 2.5 Å.
Directional message passing solves this problem by considering the direction of each bond. | Directional message passing incorporates spatial directional information to improve graph neural networks. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:843 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Training an agent to solve control tasks directly from high-dimensional images with model-free reinforcement learning (RL) has proven difficult.
The agent needs to learn a latent representation together with a control policy to perform the task.
Fitting a high-capacity encoder using a scarce reward signal is not only extremely sample inefficient, but also prone to suboptimal convergence.
Two ways to improve sample efficiency are to learn a good feature representation and use off-policy algorithms.
We dissect various approaches of learning good latent features, and conclude that the image reconstruction loss is the essential ingredient that enables efficient and stable representation learning in image-based RL.
Following these findings, we devise an off-policy actor-critic algorithm with an auxiliary decoder that trains end-to-end and matches state-of-the-art performance across both model-free and model-based algorithms on many challenging control tasks.
We release our code to encourage future research on image-based RL.
Cameras are a convenient and inexpensive way to acquire state information, especially in complex, unstructured environments, where effective control requires access to the proprioceptive state of the underlying dynamics.
Thus, having effective RL approaches that can utilize pixels as input would potentially enable solutions for a wide range of real world problems.
The challenge is to efficiently learn a mapping from pixels to an appropriate representation for control using only a sparse reward signal.
Although deep convolutional encoders can learn good representations (upon which a policy can be trained), they require large amounts of training data.
As existing reinforcement learning approaches already have poor sample complexity, this makes direct use of pixel-based inputs prohibitively slow.
For example, model-free methods on Atari (Bellemare et al., 2013) and DeepMind Control (DMC) (Tassa et al., 2018) take tens of millions of steps (Mnih et al., 2013; Barth-Maron et al., 2018) , which is impractical in many applications, especially robotics.
A natural solution is to add an auxiliary task with an unsupervised objective to improve sample efficiency.
The simplest option is an autoencoder with a pixel reconstruction objective.
Prior work has attempted to learn state representations from pixels with autoencoders, utilizing a two-step training procedure, where the representation is first trained via the autoencoder, and then either with a policy learned on top of the fixed representation (Lange & Riedmiller, 2010; Munk et al., 2016; Higgins et al., 2017b; Zhang et al., 2018; Nair et al., 2018) , or with planning (Mattner et al., 2012; Finn et al., 2015) .
This allows for additional stability in optimization by circumventing dueling training objectives but leads to suboptimal policies.
Other work utilizes end-to-end model-free learning with an auxiliary reconstruction signal in an on-policy manner (Jaderberg et al., 2017) .
We revisit the concept of adding an autoencoder to model-free RL approaches, but with a focus on off-policy algorithms.
We perform a sequence of careful experiments to understand why previous approaches did not work well.
We found that a pixel reconstruction loss is vital for learning a good representation, specifically when trained end-to-end.
Based on these findings, we propose a simple autoencoder-based off-policy method that can be trained end-to-end.
Our method is the first modelfree off-policy algorithm to successfully train simultaneously both the latent state representation and policy in a stable and sample-efficient manner.
Figure: continuous control tasks from the DeepMind Control Suite (Tassa et al., 2018) used in our experiments.
Each task offers a unique set of challenges, including complex dynamics, sparse rewards, hard exploration, and more.
Refer to Appendix A for more information.
Of course, some recent state-of-the-art model-based RL methods (Hafner et al., 2018; Lee et al., 2019) have demonstrated superior sample efficiency to leading model-free approaches on pixel tasks from (Tassa et al., 2018) .
But we find that our model-free, off-policy, autoencoder-based approach is able to match their performance, closing the gap between model-based and model-free approaches in image-based RL, despite being a far simpler method that does not require a world model.
This paper makes three main contributions:
(i) a demonstration that adding a simple auxiliary reconstruction loss to a model-free off-policy RL algorithm achieves comparable results to state-of-the-art model-based methods on the suite of continuous control tasks from Tassa et al. (2018) ;
(ii) an understanding of the issues involved with combining autoencoders with model-free RL in the off-policy setting that guides our algorithm; and
(iii) an open-source PyTorch implementation of our simple method for researchers and practitioners to use as a strong baseline that may easily be built upon.
We have presented the first end-to-end, off-policy, model-free RL algorithm for pixel observations with only reconstruction loss as an auxiliary task.
It is competitive with state-of-the-art model-based methods, but much simpler, robust, and without requiring learning a dynamics model.
We show through ablations the superiority of end-to-end learning over previous methods that use a two-step training procedure with separated gradients, the necessity of a pixel reconstruction loss over reconstruction to lower-dimensional "correct" representations, and demonstrations of the representation power and generalization ability of our learned representation.
We find that deterministic models outperform β-VAEs (Higgins et al., 2017a) , likely due to the other introduced instabilities, such as bootstrapping, off-policy data, and end-to-end training with auxiliary losses.
We hypothesize that deterministic models that perform better even in stochastic environments should be chosen over stochastic ones with the potential to learn probability distributions, and argue that determinism has the benefit of added interpretability, through handling of simpler distributions.
In the Appendix we provide results across all experiments on the full suite of 6 tasks chosen from DMC (Appendix A), and the full set of hyperparameters used in Appendix B. There are also additional experiments on autoencoder capacity (Appendix E), a look at the optimality of the learned latent representation (Appendix H), the importance of action repeat (Appendix I), and a set of benchmarks on learning from proprioceptive observation (Appendix J).
Finally, we opensource our codebase for the community to spur future research in image-based RL. | We design a simple and efficient model-free off-policy method for image-based reinforcement learning that matches the state-of-the-art model-based methods in sample efficiency | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:844 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Large deep neural networks require huge memory to run and their running speed is sometimes too slow for real applications.
Therefore, reducing network size while keeping accuracy is crucial for practical applications.
We present a novel neural network operator, chopout, with which neural networks are trained, even in a single training process, so that truncated sub-networks perform as well as possible.
Chopout is easy to implement and integrate into most types of existing neural networks.
Furthermore, it enables reducing the size of networks and latent representations even after training, just by truncating layers.
We show its effectiveness through several experiments. | We present a novel simple operator, chopout, with which neural networks are trained, even in a single training process, so that truncated sub-networks perform as well as possible. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:845 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Generative Adversarial Networks (GANs) have been shown to produce realistically looking synthetic images with remarkable success, yet their performance seems less impressive when the training set is highly diverse.
In order to provide a better fit to the target data distribution when the dataset includes many different classes, we propose a variant of the basic GAN model, a Multi-Modal Gaussian-Mixture GAN (GM-GAN), where the probability distribution over the latent space is a mixture of Gaussians.
We also propose a supervised variant which is capable of conditional sample synthesis.
In order to evaluate the model's performance, we propose a new scoring method which separately takes into account two (typically conflicting) measures - diversity vs. quality of the generated data.
Through a series of experiments, using both synthetic and real-world datasets, we quantitatively show that GM-GANs outperform baselines, both when evaluated using the commonly used Inception Score, and when evaluated using our own alternative scoring method.
In addition, we qualitatively demonstrate how the unsupervised variant of GM-GAN tends to map latent vectors sampled from different Gaussians in the latent space to samples of different classes in the data space.
We show how this phenomenon can be exploited for the task of unsupervised clustering, and provide quantitative evaluation showing the superiority of our method for the unsupervised clustering of image datasets.
Finally, we demonstrate a feature which further sets our model apart from other GAN models: the option to control the quality-diversity trade-off by altering, post-training, the probability distribution of the latent space.
This allows one to sample higher quality and lower diversity samples, or vice versa, according to one's needs.
Generative models have long been an important and active field of research in machine-learning.
Generative Adversarial Networks BID6 include a family of methods for learning generative models where the computational approach is based on game theory.
The goal of a GAN is to learn a Generator (G) capable of generating samples from the data distribution (p X ), by converting latent vectors from a lower-dimension latent space (Z) to samples in a higher-dimension data space (X ).
Usually, latent vectors are sampled from Z using the uniform or the normal distribution. In order to train G, a Discriminator (D) is trained to distinguish real training samples from fake samples generated by G. Thus D returns a value D(x) ∈ [0, 1] which can be interpreted as the probability that the input sample (x) is a real sample from the data distribution.
In this configuration, G is trained to obstruct D by generating samples which better resemble the real training samples, while D is continuously trained to tell apart real from fake samples.
Crucially, G has no direct access to real samples from the training set, as it learns solely through its interaction with D. Both D and G are implemented by deep differentiable networks, typically consisting of multiple convolutional and fully-connected layers.
They may be alternately trained using Stochastic Gradient Descent. In the short period of time since the introduction of the GAN model, many different enhancement methods and training variants have been suggested to improve their performance (see brief review below).
Despite these efforts, often a large proportion of the generated samples is, arguably, not satisfactorily realistic.
In some cases the generated sample does not resemble any of the real samples from the training set, and human observers find it difficult to classify synthetically generated samples to one of the classes which compose the training set (see illustration in FIG0).
Figure 1: Images generated by different GANs trained on MNIST (top row), CelebA (middle row) and STL-10 (bottom row). Red squares mark images of, arguably, low quality (best seen in color).
This problem worsens with the increased complexity of the training set, and specifically when the training set is characterized by large inter-class and intra-class diversity. In this work we focus on this problem, aiming to improve the performance of GANs when the training dataset has large inter-class and intra-class diversity.
Related Work. In an attempt to improve the performance of the original GAN model, many variants and extensions have been proposed in the past few years. These include architectural changes to G and D as in BID26, modifications to the loss function as in BID20; BID7, or the introduction of supervision into the training setting as in BID22; BID24. Another branch of related work, which is perhaps more closely related to our work, involves the learning of a meaningfully structured latent space. Thus Info-GAN decomposes the input noise into an incompressible source and a "latent code", Adversarial Auto-Encoders BID19 employ GANs to perform variational inference, and BID16 combine a Variational Auto-Encoder with a Generative Adversarial Network (see Appendix A for a more comprehensive description).
Our Approach. Although modifications to the structure of the latent space have been investigated before as described above, the significance of the probability distribution used for sampling latent vectors was rarely investigated. A common practice today is to use a standard normal (e.g. N(0, I)) or uniform (e.g. U[0, 1]) probability distribution when sampling latent vectors from the latent space. We wish to challenge this common practice, and investigate the beneficial effects of modifying the distribution used to sample latent vectors in accordance with properties of the target dataset. Specifically, many datasets, especially those of natural images, are quite diverse, with high inter-class and intra-class variability. At the same time, the representations of these datasets usually span high-dimensional spaces, which naturally makes them very sparse. Intuitively, this implies that the underlying data distribution, which we try to learn using a GAN, is also sparse, i.e. it mostly consists of low-density areas with relatively few areas of high density. We propose to incorporate this prior knowledge into the model, by sampling latent vectors using a multi-modal probability distribution which better matches these characteristics of the data space distribution. It is important to emphasize that this architectural modification is orthogonal to, and can be used in conjunction with, other architectural improvements such as those reviewed above (see for instance FIG6 in Appendix D). Supervision can be incorporated into this model by adding a correspondence (not necessarily injective) between labels and mixture components.
The rest of this paper is organized as follows: In Section 2 we describe the family of GM-GAN models. In Section 3 we offer an alternative method which focuses on measuring the trade-off between sample quality and diversity of generative models. In Section 4 we empirically evaluate our proposed model using various diverse datasets, showing that GM-GANs outperform the corresponding baseline methods with uni-modal distribution in the latent space. In Section 5 we describe a method for clustering datasets using
GM-GANs, and provide qualitative and quantitative evaluation using various datasets of real images. | Multi-modal Gaussian distribution of latent space in GAN models improves performance and allows to trade-off quality vs. diversity | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:846 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Distributed optimization is essential for training large models on large datasets.
Multiple approaches have been proposed to reduce the communication overhead in distributed training, such as synchronizing only after performing multiple local SGD steps, and decentralized methods (e.g., using gossip algorithms) to decouple communications among workers.
Although these methods run faster than AllReduce-based methods, which use blocking communication before every update, the resulting models may be less accurate after the same number of updates.
Inspired by the BMUF method of Chen & Huo (2016), we propose a slow momentum (SloMo) framework, where workers periodically synchronize and perform a momentum update, after multiple iterations of a base optimization algorithm.
Experiments on image classification and machine translation tasks demonstrate that SloMo consistently yields improvements in optimization and generalization performance relative to the base optimizer, even when the additional overhead is amortized over many updates so that the SloMo runtime is on par with that of the base optimizer.
We provide theoretical convergence guarantees showing that SloMo converges to a stationary point of smooth non-convex losses.
Since BMUF is a particular instance of the SloMo framework, our results also correspond to the first theoretical convergence guarantees for BMUF. | SlowMo improves the optimization and generalization performance of communication-efficient decentralized algorithms without sacrificing speed. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:847 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Structural planning is important for producing long sentences, which is a missing part in current language generation models.
In this work, we add a planning phase in neural machine translation to control the coarse structure of output sentences.
The model first generates some planner codes, then predicts real output words conditioned on them.
The codes are learned to capture the coarse structure of the target sentence.
In order to learn the codes, we design an end-to-end neural network with a discretization bottleneck, which predicts the simplified part-of-speech tags of target sentences.
Experiments show that the translation performance is generally improved by planning ahead.
We also find that translations with different structures can be obtained by manipulating the planner codes.
When humans speak, it is difficult to ensure grammatical or logical correctness without any form of planning.
Linguists have found evidence through speech errors or particular behaviors that indicate speakers are planning ahead BID16.
Such planning can happen at the discourse or sentence level, and sometimes we may notice it through inner speech. In contrast to humans, a neural machine translation (NMT) model does not have a planning phase when it is asked to generate a sentence.
Although we can argue that the planning is done in the hidden layers, such structural information remains uncertain in the continuous vectors until the concrete words are sampled.
In tasks such as machine translation, a source sentence can have multiple valid translations with different syntactic structures.
As a consequence, in each step of generation, the model is unaware of the "big picture" of the sentence to produce, resulting in uncertainty of word prediction.
In this research, we try to let the model plan the coarse structure of the output sentence before decoding real words.
As illustrated in FIG0 , in our proposed framework, we insert some planner codes into the beginning of the output sentences.
The sentence structure of the translation is governed by the codes. An NMT model takes an input sentence X and produces a translation Y.
Let S_Y denote the syntactic structure of the translation.
Indeed, the input sentence already provides rich information about the target-side structure S_Y. For example, given the Spanish sentence in FIG0, we can easily know that the translation will have a noun, a pronoun and a verb.
Such obvious structural information does not have uncertainty, and thus does not require planning. In this example, the uncertain part is the order of the noun and the pronoun.
Thus, we want to learn a set of planner codes C_Y to disambiguate such uncertain information about the sentence structure. By conditioning on the codes, we can potentially increase the effectiveness of beam search as the search space is properly regulated.
In this work, we use simplified POS tags to annotate the structure S_Y. We learn the planner codes by putting a discretization bottleneck in an end-to-end network that reconstructs S_Y with both X and C_Y. The codes are merged
with the target sentences in the training data. Thus, no modification
to the NMT model is required. Experiments show the
translation performance is generally improved with structural planning. More interestingly,
we can control the structure of output sentences by manipulating the planner codes.
Instead of learning discrete codes, we can also directly predict the structural annotations (e.g. POS tags), then translate based on the predicted structure.
However, as the simplified POS tags are also long sequences, the error of predicting the tags will be propagated to word generation.
In our experiments, doing so degrades the performance by around 8 BLEU points on IWSLT dataset.
In this paper, we add a planning phase in neural machine translation, which generates some planner codes to control the structure of the output sentence.
To learn the codes, we design an end-to-end neural network with a discretization bottleneck to predict the simplified POS tags of target sentences.
Experiments show that the proposed method generally improves the translation performance.
We also confirm the effect of the planner codes, by being able to sample translations with drastically different structures using different planner codes. The planning phase helps the decoding algorithm by removing the uncertainty of the sentence structure.
The framework described in this paper can be extended to plan other latent factors, such as the sentiment or topic of the sentence. | Plan the syntactic structure of translation using codes | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:848 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Interpolation of data in deep neural networks has become a subject of significant research interest.
We prove that over-parameterized single layer fully connected autoencoders do not merely interpolate, but rather, memorize training data: they produce outputs in (a non-linear version of) the span of the training examples.
In contrast to fully connected autoencoders, we prove that depth is necessary for memorization in convolutional autoencoders.
Moreover, we observe that adding nonlinearity to deep convolutional autoencoders results in a stronger form of memorization: instead of outputting points in the span of the training images, deep convolutional autoencoders tend to output individual training images.
Since convolutional autoencoder components are building blocks of deep convolutional networks, we envision that our findings will shed light on the important question of the inductive bias in over-parameterized deep networks.
As deep convolutional neural networks (CNNs) become ubiquitous in computer vision thanks to their strong performance on a range of tasks (Goodfellow et al., 2016), recent work has begun to analyze the role of interpolation (perfectly fitting training data) in such networks (BID0; Zhang et al., 2017).
These works show that deep overparametrized networks can interpolate training data even when the labels are random.
For an overparameterized model, there are typically infinitely many interpolating solutions.
Thus it is important to characterize the inductive bias of an algorithm, i.e., the properties of the specific solution chosen by the training procedure.
In this paper we study autoencoders (Goodfellow et al., 2016) , i.e. maps ψ : R d → R d that are trained to satisfy DISPLAYFORM0 Autoencoders are typically trained by solving arg min DISPLAYFORM1 by gradient descent over a parametrized function space Ψ.There are many interpolating solutions to the autoencoding problem in the overparametrized setting.
We characterize the inductive bias as memorization when the autoencoder output is within the span of the training data and strong memorization when the output is close to one of the training examples for almost any input.Studying memorization in the context of autoencoders is relevant since (1) components of convolutional autoencoders are building blocks of many CNNs; (2) layerwise pre-training using autoencoders is a standard technique to initialize individual layers of CNNs to improve training (Belilovsky et al., 2019; Bengio et al., 2007; Erhan et al., 2010) ; and (3) autoencoder architectures are used in many image-to-image tasks such as image segmentation, image impainting, etc. (Ulyanov et al., 2017) .
While the results in this paper hold generally for autoencoders, we concentrate on image data, since this allows identifying memorization by visual inspection of the input and output.To illustrate the memorization phenomenon, consider linear single layer fully connected autoencoders.
This autoencoding problem can be reduced to linear regression (see Appendix A).
It is well-known that solving overparametrized linear regression by gradient descent initialized at zero converges to the minimum norm solution (see, e.g., Theorem 6.1 in (Engl et al., 1996) ).
This minimum norm solution translated to the autoencoding setting corresponds to memorization of the training data: after training the autoencoder, any input image is mapped to an image that lies in the span of the training set. In this paper, we prove that the memorization property extends to nonlinear single layer fully connected autoencoders. We proceed to show that memorization extends to deep (but not shallow) convolutional autoencoders.
As a striking illustration of this phenomenon consider FIG0 .
After training a U-Net architecture (Ronneberger et al., 2015) , which is commonly used in image-to-image tasks (Ulyanov et al., 2017) , on a single training image, any input image is mapped to the training image.
Related ideas were concurrently explored for autoencoders trained on a single example in (Zhang et al., 2019).
The main contributions of this paper are as follows. Building on the connection to linear regression, we prove that single layer fully connected nonlinear autoencoders produce outputs in the "nonlinear" span (see Definition 2) of the training data.
Interestingly, we show in Section 3 that in contrast to fully connected autoencoders, shallow convolutional autoencoders do not memorize training data, even when adding filters to increase the number of parameters.
In Section 4, we observe that our memorization results for linear CNNs carry over to nonlinear CNNs. Further, nonlinear
CNNs demonstrate a strong form of memorization: the trained network outputs individual training images rather than just combinations of training images. We end with a short
discussion in Section 5. Appendices E, F, G,
and H provide additional details concerning the effect of downsampling, early stopping, and initialization on memorization in linear and nonlinear convolutional autoencoders.
This paper identified the mechanism behind memorization in autoencoders.
While it is well-known that linear regression converges to a minimum norm solution when initialized at zero, we tied this phenomenon to memorization in non-linear single layer fully connected autoencoders, showing that they produce output in the nonlinear span of the training examples.
Furthermore, we showed that convolutional autoencoders behave quite differently since not every overparameterized convolutional autoencoder memorizes.
Indeed, we showed that overparameterization by adding depth or downsampling is necessary and empirically sufficient for memorization in the convolutional setting, while overparameterization by extending the number of filters in a layer does not lead to memorization. Interestingly, we observed empirically that the phenomenon of memorization is pronounced in the non-linear setting, where nearly arbitrary input images are mapped to output images that are visually identifiable as one of the training images rather than a linear combination thereof as in the linear setting.
While the exact mechanism for this strong form of memorization in the non-linear setting still needs to be understood, this phenomenon is reminiscent of FastICA in Independent Component Analysis (Hyvärinen & Oja, 1997) or more general non-linear eigenproblems (Belkin et al., 2018b), where every "eigenvector" (corresponding to training examples in our setting) of certain iterative maps has its own basin of attraction.
We conjecture that increasing the depth may play the role of increasing the number of iterations in those methods. Since the use of deep networks with near zero initialization is the current standard for image classification tasks, we expect that our memorization results also carry over to these application domains.
We note that memorization is a particular form of interpolation (zero training loss) and interpolation has been demonstrated to be capable of generalizing to test data in neural networks and a range of other methods (Zhang et al., 2017; Belkin et al., 2018a) .
Our work could provide a mechanism to link overparameterization and memorization with generalization properties observed in deep convolutional networks.
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:849 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
The tremendous success of deep neural networks has motivated the need to better understand the fundamental properties of these networks, but many of the theoretical results proposed have only been for shallow networks.
In this paper, we study an important primitive for understanding the meaningful input space of a deep network: span recovery.
For $k<n$, let $\mathbf{A} \in \mathbb{R}^{k \times n}$ be the innermost weight matrix of an arbitrary feed forward neural network $M: \mathbb{R}^n \to \mathbb{R}$, so $M(x)$ can be written as $M(x) = \sigma(\mathbf{A} x)$, for some network $\sigma: \mathbb{R}^k \to \mathbb{R}$.
The goal is then to recover the row span of $\mathbf{A}$ given only oracle access to the value of $M(x)$. We show that if $M$ is a multi-layered network with ReLU activation functions, then partial recovery is possible: namely, we can provably recover $k/2$ linearly independent vectors in the row span of $\mathbf{A}$ using poly$(n)$ non-adaptive queries to $M(x)$. Furthermore, if $M$ has differentiable activation functions, we demonstrate that \textit{full} span recovery is possible even when the output is first passed through a sign or $0/1$ thresholding function; in this case our algorithm is adaptive.
Empirically, we confirm that full span recovery is not always possible, but only for unrealistically thin layers.
For reasonably wide networks, we obtain full span recovery on both random networks and networks trained on MNIST data.
Furthermore, we demonstrate the utility of span recovery as an attack by inducing neural networks to misclassify data obfuscated by controlled random noise as sensical inputs.
Consider the general framework in which we are given an unknown function f : R n → R, and we want to learn properties about this function given only access to the value f (x) for different inputs x.
There are many contexts where this framework is applicable, such as blackbox optimization in which we are learning to optimize f (x) (Djolonga et al., 2013) , PAC learning in which we are learning to approximate f (x) (Denis, 1998) , adversarial attacks in which we are trying to find adversarial inputs to f (x) (Szegedy et al., 2013) , or structure recovery in which we are learning the structure of f (x).
For example in the case when f (x) is a neural network, one might want to recover the underlying weights or architecture (Arora et al., 2014; .
In this work, we consider the setting when f(x) = M(x) is a neural network that admits a latent low-dimensional structure, namely M(x) = σ(Ax) where A ∈ R^{k×n} is a rank k matrix for some k < n, and σ : R^k → R is some neural network.
In this setting, we focus primarily on the goal of recovering the row-span of the weight matrix A. We remark that we can assume that A is full-rank as our results extend to the case when A is not full-rank.
Span recovery of general functions f(x) = g(Ax), where g is arbitrary, has been studied in some contexts, and is used to gain important information about the underlying function f.
By learning Span(A), we in essence are capturing the relevant subspace of the input to f ; namely, f behaves identically on x as it does on the projection of x onto the row-span of A. In statistics, this is known as effective dimension reduction or the multi-index model Li (1991) ; Xia et al. (2002) .
Another important motivation for span recovery is for designing adversarial attacks.
Given the span of A, we compute the kernel of A, which can be used to fool the function into behaving incorrectly on inputs which are perturbed by vectors in the kernel.
Specifically, if x is a legitimate input correctly classified by f and y is a large random vector in the kernel of A, then x + y will be indistinguishable from noise but we will have f(x) = f(x + y).
Several works have considered the problem from an approximation-theoretic standpoint, where the goal is to output a hypothesis function f̂ which approximates f well on a bounded domain.
For instance, in the case that A ∈ R^n is a rank 1 matrix and g(Ax) is a smooth function with bounded derivatives, Cohen et al. (2012) gives an adaptive algorithm to approximate f.
Their results also give an approximation Â to A, under the assumption that A is a stochastic vector (A_i ≥ 0 for each i and Σ_i A_i = 1).
Extending this result to more general rank k matrices A ∈ R^{k×n}, Tyagi & Cevher (2014) and Fornasier et al. (2012) give algorithms with polynomial sample complexity to find approximations f̂ to twice differentiable functions f.
However, their results do not provide any guarantee that the original matrix A | We provably recover the span of a deep multi-layered neural network with latent structure and empirically apply efficient span recovery algorithms to attack networks by obfuscating inputs. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:850 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
In this paper, we study the implicit regularization of the gradient descent algorithm in homogeneous neural networks, including fully-connected and convolutional neural networks with ReLU or LeakyReLU activations.
In particular, we study the gradient descent or gradient flow (i.e., gradient descent with infinitesimal step size) optimizing the logistic loss or cross-entropy loss of any homogeneous model (possibly non-smooth), and show that if the training loss decreases below a certain threshold, then we can define a smoothed version of the normalized margin which increases over time.
We also formulate a natural constrained optimization problem related to margin maximization, and prove that both the normalized margin and its smoothed version converge to the objective value at a KKT point of the optimization problem.
Our results generalize the previous results for logistic regression with one-layer or multi-layer linear networks, and provide more quantitative convergence results with weaker assumptions than previous results for homogeneous smooth neural networks.
We conduct several experiments to justify our theoretical finding on MNIST and CIFAR-10 datasets.
Finally, as margin is closely related to robustness, we discuss potential benefits of training longer for improving the robustness of the model.
A major open question in deep learning is why gradient descent or its variants, are biased towards solutions with good generalization performance on the test set.
To achieve a better understanding, previous works have studied the implicit bias of gradient descent in different settings.
One simple but insightful setting is linear logistic regression on linearly separable data.
In this setting, the model is parameterized by a weight vector w, and the class prediction for any data point x is determined by the sign of w⊤x.
Therefore, only the direction w/‖w‖_2 is important for making prediction.
Soudry et al. (2018a; b); Ji and Telgarsky (2018); Nacson et al. (2018) investigated this problem and proved that the direction of w converges to the direction that maximizes the L_2-margin while the norm of w diverges to +∞, if we train w with (stochastic) gradient descent on logistic loss.
Interestingly, this convergent direction is the same as that of any regularization path: any sequence of weight vectors {w_t} such that every w_t is a global minimum of the L_2-regularized loss L(w) + λ_t‖w‖_2^2 with λ_t → 0 (Rosset et al., 2004).
Indeed, the trajectory of gradient descent is also pointwise close to a regularization path (Suggala et al., 2018) .
The aforementioned linear logistic regression can be viewed as a single-layer neural network.
A natural and important question is to what extent gradient descent has similar implicit bias for modern deep neural networks.
For theoretical analysis, a natural candidate is to consider homogeneous neural networks.
Here a neural network Φ is said to be (positively) homogeneous if there is a number L > 0 (called the order) such that the network output Φ(θ; x), where θ stands for the parameter and x stands for the input, satisfies the following:
∀c > 0 : Φ(cθ; x) = c^L Φ(θ; x) for all θ and x. (1)
It is important to note that many neural networks are homogeneous (Neyshabur et al., 2015; Du et al., 2018).
For example, deep fully-connected neural networks or deep CNNs with ReLU or LeakyReLU activations can be made homogeneous if we remove all the bias terms, and the order L is exactly equal to the number of layers.
In (Wei et al., 2018) , it is shown that the regularization path does converge to the max-margin direction for homogeneous neural networks with cross-entropy or logistic loss.
This result suggests that gradient descent or gradient flow may also converge to the max-margin direction by assuming homogeneity, and this is indeed true for some sub-classes of homogeneous neural networks.
For gradient flow, this convergent direction is proven for linear fully-connected networks (Ji and Telgarsky, 2019a) .
For gradient descent on linear fully-connected and convolutional networks, (Gunasekar et al., 2018b ) formulate a constrained optimization problem related to margin maximization and prove that gradient descent converges to the direction of a KKT point or even the max-margin direction, under various assumptions including the convergence of loss and gradient directions.
In an independent work, (Nacson et al., 2019a) generalize the result in (Gunasekar et al., 2018b) to smooth homogeneous models (we will discuss this work in more details in Section 2).
In this paper, we analyze the dynamics of gradient flow/descent of homogeneous neural networks under a minimal set of assumptions.
The main technical contribution of our work is to prove rigorously that for gradient flow/descent, the normalized margin is increasing and converges to a KKT point of a natural max-margin problem.
Our results lead to some natural further questions:
• Can we generalize our results for gradient descent on smooth neural networks to nonsmooth ones?
In the smooth case, we can lower bound the decrement of training loss by the gradient norm squared, multiplied by a factor related to learning rate.
However, in the non-smooth case, no such inequality is known in the optimization literature, and it is unclear what kind of natural assumption can make it holds.
• Can we make more structural assumptions on the neural network to prove stronger results?
In this work, we use a minimal set of assumptions to show that the convergent direction of parameters is a KKT point.
A potential research direction is to identify more key properties of modern neural networks and show that the normalized margin at convergence is locally or globally optimal (in terms of optimizing (P)).
• Can we extend our results to neural networks with bias terms?
In our experiments, the normalized margin of the CNN with bias also increases during training despite that its output is non-homogeneous.
It is very interesting (and technically challenging) to provide a rigorous proof for this fact. | We study the implicit bias of gradient descent and prove under a minimal set of assumptions that the parameter direction of homogeneous models converges to KKT points of a natural margin maximization problem. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:851 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Long-term video prediction is highly challenging since it entails simultaneously capturing spatial and temporal information across a long range of image frames. Standard recurrent models are ineffective since they are prone to error propagation and cannot effectively capture higher-order correlations.
A potential solution is to extend to higher-order spatio-temporal recurrent models.
However, such a model requires a large number of parameters and operations, making it intractable to learn in practice and prone to overfitting.
In this work, we propose convolutional tensor-train LSTM (Conv-TT-LSTM), which learns higher-order Convolutional LSTM (ConvLSTM) efficiently using convolutional tensor-train decomposition (CTTD).
Our proposed model naturally incorporates higher-order spatio-temporal information at a small cost of memory and computation by using efficient low-rank tensor representations.
We evaluate our model on Moving-MNIST and KTH datasets and show improvements over standard ConvLSTM and better/comparable results to other ConvLSTM-based approaches, but with much fewer parameters.
Understanding dynamics of videos and performing long-term predictions of the future is a highly challenging problem.
It entails learning complex representation of real-world environment without external supervision.
This arises in a wide range of applications, including autonomous driving, robot control , or other visual perception tasks like action recognition or object tracking (Alahi et al., 2016) .
However, long-term video prediction remains an open problem due to high complexity of the video contents.
Therefore, prior works mostly focus on next or first few frames prediction (Lotter et al., 2016; Finn et al., 2016; Byeon et al., 2018) .
Many recent video models use Convolutional LSTM (ConvLSTM) as a basic block (Xingjian et al., 2015) , where spatio-temporal information is encoded as a tensor explicitly in each cell.
In ConvLSTM networks, each cell is a first-order recurrent model, where the hidden state is updated based on its immediate previous step.
Therefore, they cannot easily capture higher-order temporal correlations needed for long-term prediction.
Moreover, they are highly prone to error propagation.
Various approaches have been proposed to augment ConvLSTM, either by modifying networks to explicitly modeling motion (Finn et al., 2016) , or by integrating spatio-temporal interaction in ConvLSTM cells (Wang et al., 2017; 2018a) .
These approaches are often incapable of capturing long-term dependencies and produce blurry predictions.
Another direction to augment ConvLSTM is to incorporate a higher-order RNNs (Soltani & Jiang, 2016) inside each LSTM cell, where its hidden state is updated using multiple past steps.
However, a higher-order model for high-dimensional data (e.g. video) requires a huge number of model parameters, and the computation grows exponentially with the order of the RNNs.
A principled approach to address the curse of dimensionality is tensor decomposition, where a higher-order tensor is compressed into smaller core tensors (Anandkumar et al., 2014) .
Tensor representations are powerful since they retain rich expressivity even with a small number of parameters.
In this work, we propose a novel convolutional tensor decomposition, which allows for compact higher-order ConvLSTM.
Contributions.
We propose Convolutional Tensor-Train LSTM (Conv-TT-LSTM), a modification of ConvLSTM, to build a higher-order spatio-temporal model.
(1) We introduce Convolutional Tensor-Train Decomposition (CTTD) that factorizes a large convolutional kernel into a chain of smaller core tensors (Figure 1a illustrates the convolutional tensor-train of Eqs. (5) and (6)).
(2)-(3) We integrate CTTD into ConvLSTM to obtain Conv-TT-LSTM in two forms, a fixed window (FW) version (Eqs. (11a) and (10); Figure 1b) and a sliding window (SW) version (Eqs. (11b) and (10); Figure 1c), and we found that the SW version performs better than the FW one.
(4) We found that training higher-order tensor models is not straightforward due to gradient instability.
We present several approaches to overcome this such as good learning schedules and gradient clipping.
(5) In the experiments, we show our proposed Conv-TT-LSTM consistently produces sharp prediction over a long period of time for both Moving-MNIST-2 and KTH action datasets.
Conv-TT-LSTM outperforms the state-of-the-art PredRNN++ (Wang et al., 2018a) in LPIPS (Zhang et al., 2018) by 0.050 on the Moving-MNIST-2 and 0.071 on the KTH action dataset, with 5.6 times fewer parameters.
Thus, we obtain best of both worlds: better long-term prediction and model compression.
In this paper, we proposed convolutional tensor-train decomposition to factorize a large convolutional kernel into a set of smaller core tensors.
We applied this technique to efficiently construct convolutional tensor-train LSTM (Conv-TT-LSTM), a high-order spatio-temporal recurrent model whose parameters are represented in tensor-train format.
We empirically demonstrated that our proposed Conv-TT-LSTM outperforms standard ConvLSTM and produces better or comparable results compared to other state-of-the-art models with fewer parameters.
Utilizing the proposed model for high-resolution videos is still challenging due to gradient vanishing or explosion.
Future direction will include investigating other training strategies or a model design to ease the training process.
In this section, we prove the sequential algorithms in Eq. (3) for tensor-train decomposition (1) and Eq. (6) for convolutional tensor-train decomposition (4), both by induction.
Proof of Eq. (3). For simplicity, we denote the standard tensor-train decomposition in Eq. (1) in shorthand form; Eq. (2) can then be rewritten as Eq. (12), since R_0 = 1. | we propose convolutional tensor-train LSTM, which learns higher-order Convolutional LSTM efficiently using convolutional tensor-train decomposition. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:852 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
While deep neural networks have shown outstanding results in a wide range of applications,
learning from a very limited number of examples is still a challenging
task.
Despite the difficulties of few-shot learning, metric-learning techniques have shown the potential of neural networks for this task.
While these methods perform reasonably well, their results are still not fully satisfactory.
In this work, the idea of metric learning is extended with the working mechanism of Support Vector Machines (SVMs), which are well known for their generalization capabilities on small datasets.
Furthermore, this paper presents an end-to-end learning framework for training
adaptive kernel SVMs, which eliminates the problem of choosing a correct kernel
and good features for SVMs.
Next, the one-shot learning problem is redefined
for audio signals.
Then the model was tested on a vision task (using the Omniglot dataset) and a speech task (using the TIMIT dataset).
On the Omniglot dataset, the algorithm improved accuracy from 98.1% to 98.5% on the one-shot classification task and from 98.9% to 99.3% on the few-shot classification task.
Deep learning has shown the ability to achieve outstanding results for real-world problems in various areas such as image, audio and natural language processing BID18 .
However, these networks require large datasets, so model fitting demands significant computational resources.
On the other hand, there are techniques for learning on small datasets, such as data augmentation and special regularization methods, but these methods' accuracy is far from desirable on a very limited dataset.
Moreover, the training process is slow because of the many weight-update iterations required by the parametric nature of the model. Humans, by contrast, are capable of learning a concept from only a few examples, or even from a single one.
This learning characteristic differs greatly from the learning curve of deep neural networks.
This observation leads us to the one-shot learning task BID6, which consists of learning each class from only one example.
Nevertheless, a single example is not always enough for humans to understand a new concept.
In view of this, a generalization of the one-shot learning task exists as well: few-shot (or k-shot) learning, where the algorithm learns from exactly k samples per class. Deep learning approaches data-poor problems by transfer learning BID2: the parameters are optimized on a closely related data-rich problem and the model is then fine-tuned on the given data.
In contrast, the one-shot learning problem is extremely data-poor, yet it calls for a similar approach to transfer learning: in order to learn a good representation, the model is trained on similar data whose classes are distinct from those in the one-shot dataset.
In the next step, standard machine learning tools are used on the learned features to classify the one-shot samples.
In fact, BID26 claimed that parameterless models perform best, but they concentrated only on the k-nearest neighbors algorithm.
Considering this observation, this work applies Support Vector Machines BID0, which can be regarded as parameterless models.
This paper reviews prior work related to k-shot learning in the following section.
Then the proposed model, called Siamese kernel SVM, is introduced together with a brief summary of the well-known methods it builds on.
In Section 4 the experimental setup is described for both a vision and an auditory task, where a minor refinement of the problem is required.
In this work, the Siamese kernel SVM was introduced, which achieves state-of-the-art few-shot learning accuracy across multiple domains.
The key point of this model is combining the generalization capabilities of Support Vector Machines with the one-shot learning abilities of Siamese networks, which improves the combined model's results on the k-shot learning task.
The main observation of this work is that learning a representation for another model is much easier when the feature extractor is trained end-to-end as part of that model.
In addition, parameterless models achieve the best results on the previously defined problem, which makes SVMs an adequate choice for the task.
This paper also introduced the concept of k-sec learning, which can be used for audio and video recognition tasks, and it gave a baseline for this task on the TIMIT dataset.
The author hopes that defining the k-sec learning task will encourage others to measure one-shot learning models' accuracy across various domains. | The proposed method is an end-to-end neural SVM, which is optimized for few-shot learning. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:853 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Most existing 3D CNN structures for video representation learning are clip-based methods, and do not consider video-level temporal evolution of spatio-temporal features.
In this paper, we propose Video-level 4D Convolutional Neural Networks, namely V4D, to model the evolution of long-range spatio-temporal representation with 4D convolutions, as well as preserving 3D spatio-temporal representations with residual connections.
We further introduce the training and inference methods for the proposed V4D.
Extensive experiments are conducted on three video recognition benchmarks, where V4D achieves excellent results, surpassing recent 3D CNNs by a large margin.
3D convolutional neural networks (3D CNNs) and their variants (Ji et al., 2010; Tran et al., 2015; Carreira & Zisserman, 2017; Qiu et al., 2017; Wang et al., 2018b) provide a simple extension of their 2D counterparts for video representation learning.
However, due to practical issues such as memory consumption and computational cost, these models are mainly used for clip-level feature learning instead of training from the whole video.
In this sense, during training, the clip-based methods randomly sample a short clip (e.g., 32 frames) from the video for representation learning.
During testing, they uniformly sample several clips from the whole video in a sliding window manner and calculate the prediction scores for each clip independently.
Finally the prediction scores of all clips are simply averaged to yield the video-level prediction.
Although they achieve very competitive accuracy, these clip-based models ignore the video-level structure and long-range spatio-temporal dependency during training, as they only sample a small portion of the entire video.
In fact, it can sometimes be very hard to recognize the action class from only a partial observation.
Meanwhile, simply averaging the prediction scores of all clips could also be sub-optimal during testing.
To overcome this issue, Temporal Segment Network (TSN) uniformly samples multiple clips from the entire video and uses their average score to guide back-propagation during training.
Thus TSN is a video-level representation learning framework.
However, the inter-clip interaction and video-level fusion in TSN are only performed at a very late stage, which fails to capture finer temporal structures.
In this paper, we propose a general and flexible framework for video-level representation learning, called V4D.
As shown in Figure 1, to model long-range dependency in a more efficient and principled way, V4D is composed of two critical designs: (1) a holistic sampling strategy and (2) 4D convolutional interaction.
We first introduce a video-level sampling strategy by uniformly sampling a sequence of short-term units covering the holistic video.
Then we model long-range spatio-temporal dependency by designing a unique 4D residual block.
Specifically, we present a 4D convolutional operation to capture inter-clip interaction, which could enhance the representation power of the original clip-level 3D CNNs.
The 4D residual blocks can be easily integrated into existing 3D CNNs to perform long-range modeling earlier and more hierarchically than TSN.
We also design a specific video-level inference algorithm for V4D.
Specifically, we verify the effectiveness of V4D on three video action recognition benchmarks, Mini-Kinetics (Xie et al., 2018) , Kinetics-400 (Carreira & Zisserman, 2017) and Something-Something-V1 (Goyal et al., 2017) .
V4D structures achieve very competitive performance on these benchmarks and obtain evident performance improvement over their 3D counterparts.
In this section, we will show that the proposed V4D can be considered as a 4D generalization of a number of recent widely-applied methods, which may partially explain why V4D works practically well on learning meaningful video-level representation.
Temporal Segment Network.
Our V4D is closely related to Temporal Segment Network (TSN).
Although originally designed for 2D CNN, TSN can be directly applied to 3D CNN to model video-level representation.
It also employs a video-level sampling strategy with each action unit named "segment".
During training, each segment is calculated individually and the prediction scores after the fully-connected layer are then averaged.
Since the fully-connected layer is a linear classifier, it is mathematically identical to calculating the average before the fully-connected layer (similar to our global average pooling) or after the fully-connected layer (similar to TSN).
Thus our V4D can be considered as 3D CNN + TSN if all parameters in 4D Blocks are assigned zero.
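The linearity argument can be spelled out in one line (our notation): if W is the weight matrix of the final fully-connected layer, b its bias, and z_u the feature of segment u, then
$$
\frac{1}{U}\sum_{u=1}^{U} \big(W z_u + b\big) \;=\; W\Big(\frac{1}{U}\sum_{u=1}^{U} z_u\Big) + b,
$$
so averaging segment scores after the classifier (as in TSN) and averaging features before it (as with global average pooling) yield identical logits; the two only diverge once a per-segment nonlinearity is inserted between the feature and the classifier.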
Dilated Temporal Convolution.
One special form of 4D convolution kernel, k × 1 × 1 × 1, is closely related to Temporal Dilated Convolution (Lea et al., 2016) .
The input tensor V can be considered as a (C, U × T, H, W ) tensor when all action units are concatenated along the temporal dimension.
In this case, the k × 1 × 1 × 1 4D convolution can be considered as a dilated 3D convolution kernel of k × 1 × 1 with a dilation of T frames.
Note that the k × 1 × 1 × 1 kernel is just the simplest form of our 4D convolutions, while our V4D architectures utilize more complex kernels and thus can be more meaningful for learning stronger video representation.
Furthermore, our 4D Blocks utilize residual connections, ensuring that both long-term and short-term representation can be learned jointly.
Simply applying the dilated convolution might discard the short-term fine-grained features.
We have introduced new Video-level 4D Convolutional Neural Networks, namely V4D, to learn strong temporal evolution of long-range spatio-temporal representation, as well as retaining 3D features with residual connections.
In addition, we further introduce the training and inference methods for our V4D.
Experiments were conducted on three video recognition benchmarks, where our V4D achieved state-of-the-art results.
| A novel 4D CNN structure for video-level representation learning, surpassing recent 3D CNNs. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:854 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We study the problem of learning and optimizing through physical simulations via differentiable programming.
We present DiffSim, a new differentiable programming language tailored for building high-performance differentiable physical simulations.
We demonstrate the performance and productivity of our language in gradient-based learning and optimization tasks on 10 different physical simulators.
For example, a differentiable elastic object simulator written in our language is 4.6x shorter than the hand-engineered CUDA version yet runs as fast, and is 188x faster than TensorFlow.
Using our differentiable programs, neural network controllers are typically optimized within only tens of iterations.
Finally, we share the lessons learned from our experience developing these simulators, that is, differentiating physical simulators does not always yield useful gradients of the physical system being simulated.
We systematically study the underlying reasons and propose solutions to improve gradient quality.
Figure 1: Left: Our language allows us to seamlessly integrate a neural network (NN) controller and a physical simulation module, and update the weights of the controller or the initial state parameterization (blue).
Our simulations typically have 512 ∼ 2048 time steps, and each time step has up to one thousand parallel operations.
Right: 10 differentiable simulators built with DiffSim.
Differentiable physical simulators are effective components in machine learning systems.
For example, de Avila Belbute-Peres et al. (2018a) and Hu et al. (2019b) have shown that controller optimization with differentiable simulators converges one to four orders of magnitude faster than model-free reinforcement learning algorithms.
The presence of differentiable physical simulators in the inner loop of these applications makes their performance vitally important.
Unfortunately, using existing tools it is difficult to implement these simulators with high performance.
We present DiffSim, a new differentiable programming language for high performance physical simulations on both CPU and GPU.
It is based on the Taichi programming language (Hu et al., 2019a) .
The DiffSim automatic differentiation system is designed to suit key language features required by physical simulation, yet often missing in existing differentiable programming tools, as detailed below: Megakernels Our language uses a "megakernel" approach, allowing the programmer to naturally fuse multiple stages of computation into a single kernel, which is later differentiated using source code transformations and just-in-time compilation.
Compared to the linear algebra operators in TensorFlow (Abadi et al., 2016) and PyTorch (Paszke et al., 2017) , DiffSim kernels have higher arithmetic intensity and are therefore more efficient for physical simulation tasks.
Imperative Parallel Programming In contrast to functional array programming languages that are popular in modern deep learning (Bergstra et al., 2010; Abadi et al., 2016; Li et al., 2018b) , most traditional physical simulation programs are written in imperative languages such as Fortran and C++.
DiffSim likewise adopts an imperative approach.
The language provides parallel loops and control flows (such as "if" statements), which are widely used constructs in physical simulations: they simplify common tasks such as handling collisions, evaluating boundary conditions, and building iterative solvers.
Using an imperative style makes it easier to port existing physical simulation code to DiffSim.
Flexible Indexing Existing parallel differentiable programming systems provide element-wise operations on arrays of the same shape; common simulation access patterns, such as partial updates of a few entries of a global array, can only be expressed with unintuitive scatter/gather operations in these existing systems, which are not only inefficient but also hard to develop and maintain.
On the other hand, in DiffSim, the programmer directly manipulates array elements via arbitrary indexing, thus allowing partial updates of global arrays and making these common simulation patterns naturally expressible.
The explicit indexing syntax also makes it easy for the compiler to perform access optimizations (Hu et al., 2019a) .
These three requirements motivated us to design a tailored two-scale automatic differentiation system, which makes DiffSim especially suitable for developing complex and high-performance differentiable physical simulators, possibly with neural network controllers (Fig. 1, left).
Using our language, we are able to quickly implement and automatically differentiate 10 physical simulators, covering rigid bodies, deformable objects, and fluids (Fig. 1, right).
A comprehensive comparison between DiffSim and other differentiable programming tools is in Appendix A.
We have presented DiffSim, a new differentiable programming language designed specifically for building high-performance differentiable physical simulators.
Motivated by the need for supporting megakernels, imperative programming, and flexible indexing, we developed a tailored two-scale automatic differentiation system.
We used DiffSim to build 10 simulators and integrated them into deep neural networks, which proved the performance and productivity of DiffSim over existing systems.
We hope our programming language can greatly lower the barrier of future research on differentiable physical simulation in the machine learning and robotics communities. | We study the problem of learning and optimizing through physical simulations via differentiable programming, using our proposed DiffSim programming language and compiler. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:855 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Local explanation frameworks aim to rationalize particular decisions made by a black-box prediction model.
Existing techniques are often restricted to a specific type of predictor or based on input saliency, which may be undesirably sensitive to factors unrelated to the model's decision making process.
We instead propose sufficient input subsets that identify minimal subsets of features whose observed values alone suffice for the same decision to be reached, even if all other input feature values are missing.
General principles that globally govern a model's decision-making can also be revealed by searching for clusters of such input patterns across many data points.
Our approach is conceptually straightforward, entirely model-agnostic, simply implemented using instance-wise backward selection, and able to produce more concise rationales than existing techniques.
We demonstrate the utility of our interpretation method on neural network models trained on text and image data.
The rise of neural networks and nonparametric methods in machine learning (ML) has driven significant improvements in prediction capabilities, while simultaneously earning the field a reputation of producing complex black-box models.
Vital applications, which could benefit most from improved prediction, are often deemed too sensitive for opaque learning systems.
Consider the widespread use of ML for screening people, including models that deny defendants' bail [1] or reject loan applicants [2] .
It is imperative that such decisions can be interpretably rationalized.
Interpretability is also crucial in scientific applications, where it is hoped that general principles may be extracted from accurate predictive models [3, 4, 5].
One simple explanation for why a particular black-box decision is reached may be obtained via a sparse subset of the input features whose values form the basis for the model's decision - a rationale.
For text (or image) data, a rationale might consist of a subset of positions in the document (or image) together with the words (or pixel-values) occurring at these positions (see FIG0).
To ensure interpretations remain fully faithful to an arbitrary model, our rationales do not attempt to summarize the (potentially complex) operations carried out within the model, and instead merely point to the relevant information it uses to arrive at a decision [6].
For high-dimensional inputs, sparsity of the rationale is imperative for greater interpretability.
Here, we propose a local explanation framework to produce rationales for a learned model that has been trained to map inputs x ∈ X via some arbitrary learned function f : X → R.
Unlike many other interpretability techniques, our approach is not restricted to vector-valued data and does not require gradients of f.
Rather, each input example is solely presumed to have a set of indexable features x = [x_1, ..., x_p], where each x_i ∈ R^d for i ∈ [p] = {1, ..., p}.
We allow for features that are unordered (set-valued input) and whose number p may vary from input to input.
A rationale corresponds to a sparse subset of these indices S ⊆ [p] together with the specific values of the features in this subset.
To understand why a certain decision was made for a given input example x, we propose a particular rationale called a sufficient input subset (SIS).
Each SIS consists of a minimal input pattern present in x that alone suffices for f to produce the same decision, even if provided no other information about the rest of x.
Presuming the decision is based on f(x) exceeding some pre-specified threshold τ ∈ R, we specifically seek a minimal-cardinality subset S of the input features such that f(x_S) ≥ τ.
Throughout, we use x_S ∈ X to denote a modified input example in which all information about the values of features outside subset S has been masked, with features in S remaining at their original values.
Thus, each SIS characterizes a particular standalone input pattern that drives the model toward this decision, providing sufficient justification for this choice from the model's perspective, even without any information on the values of the other features in x.
In classification settings, f might represent the predicted probability of class C, where we decide to assign the input to class C if f(x) ≥ τ, chosen based on precision/recall considerations.
Each SIS in such an application corresponds to a small input pattern that on its own is highly indicative of class C, according to our model.
Note that by suitably defining f and τ with respect to the predictor outputs, any particular decision for input x can be precisely identified with the occurrence of f(x) ≥ τ, where higher values of f are associated with greater confidence in this decision.
For a given input x where f(x) ≥ τ, this work presents a simple method to find a complete collection of sufficient input subsets, each satisfying f(x_S) ≥ τ, such that there exists no additional SIS outside of this collection.
Each SIS may be understood as a disjoint piece of evidence that would lead the model to the same decision, and why this decision was reached for x can be unequivocally attributed to the SIS-collection.
Furthermore, global insight on the general principles underlying the model's decision-making process may be gleaned by clustering the types of SIS extracted across different data points (see FIG4 and TAB0).
Such insights allow us to compare models based not only on their accuracy, but also on human-determined relevance of the concepts they target.
Our method's simplicity facilitates its utilization by non-experts who may know very little about the models they wish to interrogate.
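As a rough illustration of the instance-wise backward-selection idea described above, the following sketch uses generic placeholders (a black-box scoring function f, a mask_value standing in for "masked" features, and a threshold tau); it shows the greedy shape of the procedure, not the authors' exact algorithm:

```python
import numpy as np

def masked(x, keep, mask_value=0.0):
    """Return a copy of the 1-D feature array x with every feature outside `keep` masked."""
    x_s = np.full(x.shape, mask_value, dtype=float)
    idx = list(keep)
    x_s[idx] = x[idx]
    return x_s

def find_sis(f, x, tau, mask_value=0.0):
    """Greedy backward selection for one sufficient input subset (SIS).

    Repeatedly drops the feature whose removal keeps the score f highest,
    as long as the masked input still satisfies f(x_S) >= tau.  Returns the
    surviving feature indices, or None if even the full input misses tau.
    """
    keep = set(range(len(x)))
    if f(masked(x, keep, mask_value)) < tau:
        return None
    while True:
        best_drop, best_score = None, -np.inf
        for i in keep:
            score = f(masked(x, keep - {i}, mask_value))
            if score >= tau and score > best_score:
                best_drop, best_score = i, score
        if best_drop is None:       # no feature can be removed without falling below tau
            return keep             # keep is now a (locally) minimal sufficient subset
        keep.remove(best_drop)
```

Running this repeatedly, each time additionally masking the features of the subsets found so far, is one simple way to collect the complete SIS-collection mentioned above.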
This work introduced the idea of interpreting black-box decisions on the basis of sufficient input subsets -minimal input patterns that alone provide sufficient evidence to justify a particular decision.
Our methodology is easy to understand for non-experts, applicable to all ML models without any additional training steps, and remains fully faithful to the underlying model without making approximations.
While we focus on local explanations of a single decision, clustering the SIS-patterns extracted from many data points reveals insights about a model's general decision-making process.
Given multiple models of comparable accuracy, SIS-clustering can uncover critical operating differences, such as which model is more susceptible to spurious training data correlations or will generalize worse to counterfactual inputs that lie outside the data distribution. | We present a method for interpreting black-box models by using instance-wise backward selection to identify minimal subsets of features that alone suffice to justify a particular decision made by the model. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:856 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
In this paper, we tackle the problem of detecting samples that are not drawn from the training distribution, i.e., out-of-distribution (OOD) samples, in classification.
Many previous studies have attempted to solve this problem by regarding samples with low classification confidence as OOD examples using deep neural networks (DNNs).
However, on difficult datasets or models with low classification ability, these methods incorrectly regard in-distribution samples close to the decision boundary as OOD samples.
This problem arises because their approaches use only the features close to the output layer and disregard the uncertainty of the features.
Therefore, we propose a method that extracts the uncertainties of features in each layer of DNNs using a reparameterization trick and combines them.
In experiments, our method outperforms the existing methods by a large margin, achieving state-of-the-art detection performance on several datasets and classification models.
For example, our method increases the AUROC score of prior work (83.8%) to 99.8% in DenseNet on the CIFAR-100 and Tiny-ImageNet datasets.
Deep neural networks (DNNs) have achieved high performance in many classification tasks such as image classification (Krizhevsky et al., 2012; Simonyan & Zisserman, 2014) , object detection (Lin et al., 2017; Redmon & Farhadi, 2018) , and speech recognition Hannun et al., 2014) .
However, DNNs tend to make high confidence predictions even for samples that are not drawn from the training distribution, i.e., out-of-distribution (OOD) samples (Hendrycks & Gimpel, 2016) .
Such errors can be harmful to medical diagnosis and automated driving.
Because it is not generally possible to control the test data distribution in real-world applications, OOD samples are inevitably included in this distribution.
Therefore, detecting OOD samples is important for ensuring the safety of an artificial intelligence system (Amodei et al., 2016) .
There have been many previous studies (Hendrycks & Gimpel, 2016; Liang et al., 2017; Lee et al., 2017; DeVries & Taylor, 2018; Lee et al., 2018; Hendrycks et al., 2018) that have attempted to solve this problem by regarding samples that are difficult to classify or samples with low classification confidence as OOD examples using DNNs.
Their approaches work well and they are computationally efficient.
The limitation of these studies is that, when using difficult datasets or models with low classification ability, the confidence of inputs will be low, even if the inputs are in-distribution samples.
Therefore, these methods incorrectly regard such in-distribution samples as OOD samples, which results in their poor detection performance (Malinin & Gales, 2018) , as shown in Figure 1 .
One cause of the abovementioned problem is that their approaches use only the features close to the output layer and the features are strongly related to the classification accuracy.
Therefore, we use not only the features close to the output layer but also the features close to the input layer.
We hypothesize that the uncertainties of the features close to the input layer are the uncertainties of the feature extraction and are effective for detecting OOD samples.
For example, when using convolutional neural networks (CNNs), the filters of the convolutional layer close to the input layer extract features such as edges that are useful for in-distribution classification.
In other words, in-distribution samples possess more features that convolutional filters react to than OOD samples do.
Therefore, the uncertainties of the features will be larger when the inputs are in-distribution samples.
Another cause of the abovementioned problem is that their approaches disregard the uncertainty of the features close to the output layer.
Figure 1: Comparison of the existing method (Baseline, Hendrycks & Gimpel, 2016: maximum softmax probability) and the proposed method (UFEL, ours: degree of uncertainty).
We visualized scatter plots of the outputs of the penultimate layer of a CNN that can estimate the uncertainties of latent features using the SVHN dataset (Netzer et al., 2011).
We used only classes 0, 1, and 2 for the training data.
Classes 0, 1, 2, and OOD, indicated by red, yellow, blue, and black, respectively, were used for the validation data.
We plot the contour of the maximum output of the softmax layer of the model.
Left: Because the image of "204" includes the digits "2" and "0," the maximum value of the softmax output decreases because the model does not know to which class the image belongs.
Right: The sizes of points in the scatter plots indicate the value of the combined uncertainties of features.
We can classify the image of "204" as an in-distribution image according to the value of the combined uncertainties.
We hypothesize that the uncertainties of the latent features close to the output layer are the uncertainties of classification and are also effective for detecting OOD samples.
For example, in-distribution samples are embedded in the feature space close to the output layer to classify samples.
In contrast, OOD samples have no fixed regions for embedding.
Therefore, the uncertainties of the features of OOD samples will be larger than those of in-distribution samples.
Based on the hypotheses, we propose a method that extracts the Uncertainties of Features in Each Layer (UFEL) and combines them for detecting OOD samples.
Each uncertainty is easily estimated after training the discriminative model by computing the mean and the variance of the features using a reparameterization trick, as in the variational autoencoder (Kingma & Welling, 2013) and the variational information bottleneck (Alemi et al., 2016).
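Concretely, the reparameterization trick referred to here takes the standard form
$$
z \;=\; \mu(x) + \sigma(x) \odot \epsilon, \qquad \epsilon \sim \mathcal{N}(0, I),
$$
where \mu(x) and \sigma(x) are the predicted mean and standard deviation of a layer's features; a simple scalar uncertainty for that layer can then be read off from \sigma(x) (e.g., its average over dimensions). The formula is the standard VAE-style estimator; how exactly UFEL summarizes and combines the per-layer \sigma values is an assumption on our part, not a detail stated in this excerpt.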
Our proposal is agnostic to the model architecture and can be easily combined with any regular architecture with minimum modifications.
We visualize the maximum values of output probability and the combined uncertainties of the latent features in the feature space of the penultimate layer in Figure 1 .
The combined uncertainties of the features discriminate the in-distribution and OOD images that are difficult to classify.
For example, although the images that are surrounded by the red line are in-distribution samples, they have low maximum softmax probabilities and could be regarded as OOD samples in prior work.
Meanwhile, their uncertainties are smaller than those of OOD samples and they are regarded as in-distribution samples in our method.
In experiments, we validate the hypothesis demonstrating that each uncertainty is effective for detecting OOD examples.
We also demonstrate that UFEL obtains state-of-the-art performance on several datasets, including CIFAR-100, which is difficult to classify, and with models of low classification ability, such as LeNet5.
Moreover, UFEL is robust to hyperparameters such as the number of in-distribution classes and the validation dataset. | We propose a method that extracts the uncertainties of features in each layer of DNNs and combines them for detecting OOD samples when solving classification tasks. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:857 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
In this paper, we explore new approaches to combining information encoded within the learned representations of autoencoders.
We explore models that are capable of combining the attributes of multiple inputs such that a resynthesised output is trained to fool an adversarial discriminator for real versus synthesised data.
Furthermore, we explore the use of such an architecture in the context of semi-supervised learning, where we learn a mixing function whose objective is to produce interpolations of hidden states, or masked combinations of latent representations that are consistent with a conditioned class label.
We show quantitative and qualitative evidence that such a formulation is an interesting avenue of research.
The autoencoder is a fundamental building block in unsupervised learning.
Autoencoders are trained to reconstruct their inputs after being processed by two neural networks: an encoder which encodes the input to a high-level representation or bottleneck, and a decoder which performs the reconstruction using the representation as input.
One primary goal of the autoencoder is to learn representations of the input data which are useful BID1 , which may help in downstream tasks such as classification BID27 BID9 or reinforcement learning BID20 BID5 .
The representations of autoencoders can be encouraged to contain more 'useful' information by restricting the size of the bottleneck, through the use of input noise (e.g., in denoising autoencoders, BID23), through regularisation of the encoder function BID17, or by introducing a prior BID11.
Another goal is in learning interpretable representations BID3 BID10 .
In unsupervised learning, learning often involves qualitative objectives on the representation itself, such as disentanglement of latent variables BID12 or maximisation of mutual information BID3 BID0 BID8.
Mixup BID26 and manifold mixup BID21 are regularisation techniques that encourage deep neural networks to behave linearly between two data samples.
These methods artificially augment the training set by producing random convex combinations between pairs of examples and their corresponding labels and training the network on these combinations.
This has the effect of creating smoother decision boundaries, which can have a positive effect on generalisation performance.
In BID21, the random convex combinations are computed in the hidden space of the network.
This procedure can be viewed as using the high-level representation of the network to produce novel training examples, and it provides improvements over strong baselines in supervised learning.
Furthermore, BID22 propose a simple and efficient method for semi-supervised classification based on random convex combinations between unlabeled samples and their predicted labels.
In this paper we explore the use of a wider class of mixing functions for unsupervised learning, mixing in the bottleneck layer of an autoencoder.
These mixing functions range from continuous interpolations between latent vectors, such as in BID21, to binary masking operations, to even a deep neural network which learns the mixing operation.
In order to ensure that the output of the decoder given the mixed representation resembles the data distribution at the pixel level, we leverage adversarial learning BID4: we train a discriminator to distinguish between decoded mixed and unmixed representations.
This technique affords a model the ability to simulate novel data points (such as those corresponding to combinations of annotations not present in the training set).
Furthermore, we explore our approach in the context of semi-supervised learning, where we learn a mixing function whose objective is to produce interpolations of hidden states consistent with a conditioned class label.
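For concreteness, input-space mixup and the two kinds of bottleneck mixing discussed above can be sketched as follows (our notation; the exact distributions and parameterizations used in the paper may differ):
$$
\tilde{x} = \lambda x_i + (1-\lambda) x_j, \quad \tilde{y} = \lambda y_i + (1-\lambda) y_j, \quad \lambda \sim \mathrm{Beta}(\alpha, \alpha) \qquad \text{(mixup, BID26)}
$$
$$
z_{\mathrm{mix}} = \lambda\, z_1 + (1-\lambda)\, z_2 \qquad \text{or} \qquad z_{\mathrm{mix}} = m \odot z_1 + (1-m) \odot z_2, \;\; m_k \sim \mathrm{Bernoulli}(p),
$$
where z_1 and z_2 are the bottleneck codes of two inputs; the decoded output for z_mix is what the adversarial discriminator pushes towards the real-data distribution.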
In this paper, we proposed the adversarial mixup resynthesiser and showed that it can be used to produce realistic-looking combinations of examples by performing mixing in the bottleneck of an autoencoder.
We proposed several mixing functions, including one based on sampling from a uniform distribution and another based on a Bernoulli distribution.
Furthermore, we presented a semi-supervised version of the Bernoulli variant in which one can leverage class labels to learn a mixing function that determines which parts of the latent code should be mixed to produce an image consistent with a desired class label.
While our technique can be used to leverage an autoencoder as a generative model, we conjecture that our technique may have positive effects on the latent representation and therefore downstream tasks, though this is yet to be substantiated.
Future work will involve more comparisons to existing literature and experiments to determine the effects of mixing on the latent space itself and downstream tasks. | We leverage deterministic autoencoders as generative models by proposing mixing functions which combine hidden states from pairs of images. These mixes are made to look realistic through an adversarial framework. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:858 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We outline the problem of concept drifts for time series data.
In this work, we analyze the temporal inconsistency of streaming wireless signals in the context of device-free passive indoor localization.
We show that data obtained from WiFi channel state information (CSI) can be used to train a robust system capable of performing room level localization.
One of the most challenging issues for such a system is the movement of input data distribution to an unexplored space over time, which leads to an unwanted shift in the learned boundaries of the output space.
In this work, we propose a phase and magnitude augmented feature space along with a standardization technique that is little affected by drifts.
We show that this robust representation of the data yields better learning accuracy and requires fewer retraining rounds.
Concept drift is one of the most common problems that degrades the predictive performance of passive WiFi-based localization systems.
In most of the predictive models it is assumed that a static relationship between input and output exits.
Thus in the context of machine learning, there is a mapping function f (x) = y, where the algorithm tries to estimate the underlying relationship between the input x and the output y.
The presence of concept drift means that the accuracy of the predictive models that is trained from historical data degrades over time due to evolving nature of the data.
Hence, predictive models often need to be retrained frequently with a new set of labelled data, which might be expensive to obtain.
These pattern changes can be categorized, based on their transition speed from one state to another, into abrupt or gradual drifts BID1.
The problem of concept drift in WiFi-based localization systems was first mentioned in BID2, which presents a technology that utilizes only off-the-shelf WiFi-enabled devices such as access points, laptops, and smart TVs for passive sensing in the environment of interest.
The authors have applied an online semi-supervised approach to automatically detect gradual shifts in the feature space and propose an adaptive learning strategy to regain the prediction accuracy.
We aim to address the same problem without making any assumption about the drift type.
In this work, we illustrate that both sudden and gradual drifts can occur in the streaming WiFi data from time to time, and they often hinder the performance of the trained models when tested on the measurements.
The majority of existing WiFi-based indoor localization systems are device-based, where the user's location is determined by a WiFi-enabled target device that needs to be carried by the subject all the time BID9.
Practical challenges of device-based approaches impose some restrictions, and therefore a device-free, passive solution is a promising line of research for both academia and industry.
For example, (Wang et al., 2015a; b; BID5) are some of the existing works where device-free passive WiFi localization is used along with deep learning.
However, most of these researches and their experiments were performed in a very controlled environment and within a limited time frames.
On the other hand, the effect of concept drift mostly appears over time due to real-world conditions such as natural WiFi channel or bandwidth switches, or when certain exogenous factor such as temperature and humidity changes.
Therefore, the existing methods do not address them explicitly and the experimental results does not reflect the performance of the model taken from measurements that are a few days apart.
In this paper, we use the idea of feature augmentation in order to include both phase and magnitude of the CSI data.
To the best of our knowledge this is the first work that exploits both the phase and magnitude of the CSI in order to construct a feature space that is less affected by drifts.
We show that once such a feature space has been constructed,we can use classical machine learning algorithms in order to create a more robust model.
In the next sections, we discuss nature of the WiFi CSI data being obtained and how drifts cause a shift in the feature space.
In Section 3 we discuss our methods including the phase and the magnitude sanitization procedure.
In Section ??
we present the training strategy for off line training and online prediction.
Finally in Section 5, we conclude our paper and present discussions on future work.
We have presented a comprehensive study in order to handle drifts for WiFi CSI data.
We focused on the challenges presented by drifts for the application of indoor localization and proposed a combined feature space that is robust to drifts.
We then incorporated this augmented feature space and provided a detailed analysis of the performance of different learning algorithms.
Although we mainly focus on offline training, our work also addresses robust online prediction in the presence of drifts.
Such a stable feature space means that we do not have to model the abrupt and gradual drifts and retrain our models each time one occurs.
Our proposed feature space also allows for applying deep convolutional neural networks, which have so far been applied to either the phase or the magnitude information, but not both.
The proposed feature space can be projected into an RGB image, where vital information can be captured using a convolutional layer; we keep this for future work.
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:859 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
While it has not yet been proven, empirical evidence suggests that model generalization is related to local properties of the optima which can be described via the Hessian.
We connect model generalization with the local property of a solution under the PAC-Bayes paradigm.
In particular, we prove that model generalization ability is related to the Hessian, the higher-order "smoothness" terms characterized by the Lipschitz constant of the Hessian, and the scales of the parameters.
Guided by the proof, we propose a metric to score the generalization capability of the model, as well as an algorithm that optimizes the perturbed model accordingly.
Deep models have proven to work well in applications such as computer vision BID18 BID8 BID14 , speech recognition , and natural language processing BID35 BID6 BID25 .
Many deep models have millions of parameters, which is more than the number of training samples, but the models still generalize well BID11.
On the other hand, classical learning theory suggests the model generalization capability is closely related to the "complexity" of the hypothesis space, usually measured in terms of the number of parameters, Rademacher complexity or VC-dimension.
This seems to contradict the empirical observation that over-parameterized models generalize well on the test data.
Indeed, even if the hypothesis space is complex, the final solution learned from a given training set may still be simple.
This suggests the generalization capability of the model is also related to the properties of the solution.
BID15 and BID1 empirically observe that the generalization ability of a model is related to the spectrum of the Hessian matrix ∇²L(w*) evaluated at the solution, and that large eigenvalues of ∇²L(w*) often lead to poor model generalization.
Also, BID15, BID1 and BID31 introduce several different metrics to measure the "sharpness" of the solution, and demonstrate the connection between the sharpness metric and generalization empirically.
BID2 later points out that most of the Hessian-based sharpness measures are problematic and cannot be applied directly to explain generalization.
In particular, they show that the geometry of the parameters in RELU-MLP can be modified drastically by re-parameterization.
Another line of work originates from Bayesian analysis.
Mackay (1995) first introduced Taylor expansion to approximate the (log) posterior, and considered the second-order term, characterized by the Hessian of the loss function, as a way of evaluating the model simplicity, or "Occam factor".
Recently BID34 use this factor to penalize sharp minima and determine the optimal batch size.
BID4 connect the PAC-Bayes bound and the Bayesian marginal likelihood when the loss is (bounded) negative log-likelihood, which leads to an alternative perspective on Occam's razor.
BID19, and more recently BID7 BID28 BID29, use the PAC-Bayes bound to analyze the generalization behavior of deep models.
Since the PAC-Bayes bound holds uniformly for all "posteriors", it also holds for some particular "posterior", for example, the solution parameter perturbed with noise.
(Figure: The sharp minimum, even though it approximates the true label better, has some complex structures in its predicted labels, while the flat minimum seems to produce a simpler classification boundary.)
We connect the smoothness of the solution with the model generalization in the PAC-Bayes framework.
We prove that the generalization power of a model is related to the Hessian and the smoothness of the solution, the scales of the parameters, as well as the number of training samples.
In particular, we prove that the best perturbation level scales roughly as the inverse of the square root of the Hessian, which mostly cancels out scaling effect in the re-parameterization suggested by BID2 .
To the best of our knowledge, this is the first work that rigorously integrates the Hessian into a model generalization bound.
It also roughly explains the effect of re-parameterization on generalization.
Based on our generalization bound, we propose a new metric to test the model generalization and a new perturbation algorithm that adjusts the perturbation levels according to the Hessian.
Finally, we empirically demonstrate the effect of our algorithm is similar to a regularizer in its ability to attain better performance on unseen data. | a theory connecting Hessian of the solution and the generalization power of the model | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:86 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Federated learning improves data privacy and efficiency in machine learning performed over networks of distributed devices, such as mobile phones, IoT and wearable devices, etc.
Yet models trained with federated learning can still fail to generalize to new devices due to the problem of domain shift.
Domain shift occurs when the labeled data collected by source nodes statistically differs from the target node's unlabeled data.
In this work, we present a principled approach to the problem of federated domain adaptation, which aims to align the representations learned among the different nodes with the data distribution of the target node.
Our approach extends adversarial adaptation techniques to the constraints of the federated setting.
In addition, we devise a dynamic attention mechanism and leverage feature disentanglement to enhance knowledge transfer.
Empirically, we perform extensive experiments on several image and text classification tasks and show promising results under unsupervised federated domain adaptation setting.
Data generated by networks of mobile and IoT devices poses unique challenges for training machine learning models.
Due to the growing storage/computational power of these devices and concerns about data privacy, it is increasingly attractive to keep data and computation locally on the device (Smith et al., 2017) .
Federated Learning (FL) (Mohassel & Rindal, 2018; Bonawitz et al., 2017; Mohassel & Zhang, 2017) provides a privacy-preserving mechanism to leverage such decentralized data and computation resources to train machine learning models.
The main idea behind federated learning is to have each node learn on its own local data and not share either the data or the model parameters.
While federated learning promises better privacy and efficiency, existing methods ignore the fact that the data on each node are collected in a non-i.i.d manner, leading to domain shift between nodes (Quionero-Candela et al., 2009) .
For example, one device may take photos mostly indoors, while another mostly outdoors.
In this paper, we address the problem of transferring knowledge from the decentralized nodes to a new node with a different data domain, without requiring any additional supervision from the user.
We define this novel problem Unsupervised Federated Domain Adaptation (UFDA), as illustrated in Figure 1 (a).
There is a large body of existing work on unsupervised domain adaptation (Long et al., 2015; Ganin & Lempitsky, 2015; Tzeng et al., 2017; Gong et al., 2012; Long et al., 2018) , but the federated setting presents several additional challenges.
First, the data are stored locally and cannot be shared, which hampers mainstream domain adaptation methods as they need to access both the labeled source and unlabeled target data (Tzeng et al., 2014; Long et al., 2017; Ghifary et al., 2016; Sun & Saenko, 2016; Ganin & Lempitsky, 2015; Tzeng et al., 2017) .
Second, the model parameters are trained separately for each node and converge at different speeds, while also offering different contributions to the target node depending on how close the two domains are.
Finally, the knowledge learned from source nodes is highly entangled (Bengio et al., 2013) , which can possibly lead to negative transfer (Pan & Yang, 2010) .
In this paper, we propose a solution to the above problems called Federated Adversarial Domain Adaptation (FADA) which aims to tackle domain shift in a federated learning system through adversarial techniques.
Our approach preserves data privacy by training one model per source node and updating the target model with the aggregation of source gradients, but does so in a way that reduces domain shift.
First, we analyze the federated domain adaptation problem from a theoretical perspective and provide a generalization bound.
Inspired by our theoretical results, we propose an efficient adaptation algorithm based on adversarial adaptation and representation disentanglement applied to the federated setting.
(Figure 1: (a) We propose an approach for the UFDA setting, where data are not shareable between different domains. In our approach, models are trained separately on each source domain and their gradients are aggregated with a dynamic attention mechanism to update the target model. (b) Our FADA model learns to extract domain-invariant features using adversarial domain alignment (red lines) and a feature disentangler (blue lines).)
We also devise a dynamic attention model to cope with the varying convergence rates in the federated learning system.
We conduct extensive experiments on real-world datasets, including image recognition and natural language tasks.
Compared to baseline methods, we improve adaptation performance on all tasks, demonstrating the effectiveness of our devised model.
In this paper, we first proposed a novel unsupervised federated domain adaptation (UFDA) problem and derived a theoretical generalization bound for UFDA.
Inspired by the theoretical results, we proposed a novel model called Federated Adversarial Domain Adaptation (FADA) to transfer the knowledge learned from distributed source domains to an unlabeled target domain with a novel dynamic attention schema.
Empirically, we showed that feature disentanglement boosts the performance of FADA in UFDA tasks.
An extensive empirical evaluation on UFDA vision and linguistic benchmarks demonstrated the efficacy of FADA against several domain adaptation baselines. | we present a principled approach to the problem of federated domain adaptation, which aims to align the representations learned among the different nodes with the data distribution of the target node. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:860 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
A capsule is a group of neurons whose outputs represent different properties of the same entity.
Each layer in a capsule network contains many capsules.
We describe a version of capsules in which each capsule has a logistic unit to represent the presence of an entity and a 4x4 matrix which could learn to represent the relationship between that entity and the viewer (the pose).
A capsule in one layer votes for the pose matrix of many different capsules in the layer above by multiplying its own pose matrix by trainable viewpoint-invariant transformation matrices that could learn to represent part-whole relationships.
Each of these votes is weighted by an assignment coefficient.
These coefficients are iteratively updated for each image using the Expectation-Maximization algorithm such that the output of each capsule is routed to a capsule in the layer above that receives a cluster of similar votes.
The transformation matrices are trained discriminatively by backpropagating through the unrolled iterations of EM between each pair of adjacent capsule layers.
On the smallNORB benchmark, capsules reduce the number of test errors by 45\% compared to the state-of-the-art.
Capsules also show far more resistance to white box adversarial attacks than our baseline convolutional neural network.
Convolutional neural nets are based on the simple fact that a vision system needs to use the same knowledge at all locations in the image.
This is achieved by tying the weights of feature detectors so that features learned at one location are available at other locations.
Convolutional capsules extend the sharing of knowledge across locations to include knowledge about the part-whole relationships that characterize a familiar shape.
Viewpoint changes have complicated effects on pixel intensities but simple, linear effects on the pose matrix that represents the relationship between an object or object-part and the viewer.
The aim of capsules is to make good use of this underlying linearity, both for dealing with viewpoint variations and for improving segmentation decisions.
Capsules use high-dimensional coincidence filtering: a familiar object can be detected by looking for agreement between votes for its pose matrix.
These votes come from parts that have already been detected.
A part produces a vote by multiplying its own pose matrix by a learned transformation matrix that represents the viewpoint invariant relationship between the part and the whole.
As the viewpoint changes, the pose matrices of the parts and the whole will change in a coordinated way so that any agreement between votes from different parts will persist.
Finding tight clusters of high-dimensional votes that agree in a mist of irrelevant votes is one way of solving the problem of assigning parts to wholes.
This is non-trivial because we cannot grid the high-dimensional pose space in the way the low-dimensional translation space is gridded to facilitate convolutions.
To solve this challenge, we use a fast iterative process called "routing-by-agreement" that updates the probability with which a part is assigned to a whole based on the proximity of the vote coming from that part to the votes coming from other parts that are assigned to that whole.
This is a powerful segmentation principle that allows knowledge of familiar shapes to derive segmentation, rather than just using low-level cues such as proximity or agreement in color or velocity.
An important difference between capsules and standard neural nets is that the activation of a capsule is based on a comparison between multiple incoming pose predictions whereas in a standard neural net it is based on a comparison between a single incoming activity vector and a learned weight vector.
Building on the work of BID21 , we have proposed a new type of capsule system in which each capsule has a logistic unit to represent the presence of an entity and a 4x4 pose matrix to represent the pose of that entity.
We also introduced a new iterative routing procedure between capsule layers, based on the EM algorithm, which allows the output of each lower-level capsule to be routed to a capsule in the layer above in such a way that active capsules receive a cluster of similar pose votes.
This new system achieves significantly better accuracy on the smallNORB data set than the state-of-the-art CNN, reducing the number of errors by 45%.
We have also shown it to be significantly more robust to white box adversarial attacks than a baseline CNN.
SmallNORB is an ideal data-set for developing new shape-recognition models precisely because it lacks many of the additional features of images in the wild.
Now that our capsules model works well on NORB, we plan to implement an efficient version to test much larger models on much larger data-sets such as ImageNet. | Capsule networks with learned pose matrices and EM routing improves state of the art classification on smallNORB, improves generalizability to new view points, and white box adversarial robustness. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:861 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We study a general formulation of program synthesis called syntax-guided synthesis(SyGuS) that concerns synthesizing a program that follows a given grammar and satisfies a given logical specification.
Both the logical specification and the grammar have complex structures and can vary from task to task, posing significant challenges for learning across different tasks.
Furthermore, training data is often unavailable for domain specific synthesis tasks.
To address these challenges, we propose a meta-learning framework that learns a transferable policy from only weak supervision.
Our framework consists of three components:
1) an encoder, which embeds both the logical specification and grammar at the same time using a graph neural network;
2) a grammar adaptive policy network which enables learning a transferable policy; and
3) a reinforcement learning algorithm that jointly trains the embedding and adaptive policy.
We evaluate the framework on 214 cryptographic circuit synthesis tasks.
It solves 141 of them in the out-of-box solver setting, significantly outperforming a similar search-based approach but without learning, which solves only 31.
The result is comparable to two state-of-the-art classical synthesis engines, which solve 129 and 153 respectively.
In the meta-solver setting, the framework can efficiently adapt to unseen tasks and achieves speedup ranging from 2x up to 100x.
Program synthesis concerns automatically generating a program that satisfies desired functional requirements.
Promising results have been demonstrated by applying this approach to problems in diverse domains, such as spreadsheet data manipulation for end-users BID21 , intelligent tutoring for students , and code auto-completion for programmers BID19 , among many others.In a common formulation posed by BID3 called syntax-guided synthesis (SyGuS), the program synthesizer takes as input a logical formula φ and a grammar G, and produces as output a program in G that satisfies φ.
In this formulation, φ constitutes a semantic specification that describes the desired functional requirements, and G is a syntactic specification that constrains the space of possible programs.The SyGuS formulation has been targeted by a variety of program synthesizers based on discrete techniques such as constraint solving BID36 , enumerative search BID5 , and stochastic search BID37 .
A key limitation of these synthesizers is that they do not bias their search towards likely programs.
This in turn hinders their efficiency and limits the kinds of programs they are able to synthesize.
It is well known that likely programs have predictable patterns BID23 BID1 .
As a result, recent works have leveraged neural networks for program synthesis.
However, they are limited in two aspects.
First, they do not target general SyGuS tasks; more specifically:
• They assume a fixed grammar (i.e., syntactic specification G) across tasks.
For example, BID39 learn loop invariants for program verification, but the grammar of loop invariants is fixed across different programs to be verified.
• The functional requirements (i.e., semantic specification φ) are omitted, in applications that concern identifying semantically similar programs BID34 BID0 BID2 , or presumed to be input-output examples BID33 BID7 BID16 BID11 BID13 BID43 BID42 BID35 .
In contrast, the SyGuS formulation allows the grammar G to vary across tasks, thereby affording flexibility to enforce different syntactic requirements in each task.
It also allows specifying functional requirements in a manner more general than input-output examples, by allowing the semantic specification φ to be a logical formula (e.g., f (x) = 2x instead of f (1) = 2 ∧ f (3) = 6).
As a result, the general SyGuS setting necessitates the ability to capture common patterns across different specifications and grammars.
A second limitation of existing approaches is that they rely on strong supervision on the generated program BID33 BID7 BID11 .
However, in SyGuS tasks, ground truth programs f are not readily available; instead, a checker is provided that verifies whether f satisfies φ.
In this paper, we propose a framework that is general in that it makes few assumptions on specific grammars or constraints, and has meta-learning capability that can be utilized in solving unseen tasks more efficiently.
The key contributions we make are (1) a joint graph representation of both syntactic and semantic constraints in each task that is learned by a graph neural network model; (2) a grammar adaptive policy network that generalizes across different grammars and guides the search for the desired program; and (3) a reinforcement learning training method that enables learning a transferable representation and policy with weak supervision.
We demonstrate our meta-learning framework on a challenging and practical instance of the SyGuS problem that concerns synthesizing cryptographic circuits that are provably free of side-channel attacks BID17 .
In our experiments, we first compare the framework in an out-of-box solver setting against a similar search-based approach and two state-of-the-art classical solvers developed in the formal methods community.
Then we demonstrate its capability as a meta-solver that can efficiently adapt to unseen tasks, and compare it to the out-of-box version.
We proposed a framework to learn a transferable representation and strategy in solving a general formulation of program synthesis, i.e. syntax-guided synthesis (SyGuS).
Compared to previous work on neural synthesis, our framework is capable of handling tasks where
1) the grammar and semantic specification vary from task to task, and
2) the supervision is weak.
Specifically, we introduced a graph neural network that can learn a joint representation over different pairs of syntactic and semantic specifications; we implemented a grammar adaptive network that enables program generation to be conditioned on the specific task; and finally, we proposed a meta-learning method based on the Advantage Actor-Critic (A2C) framework.
We compared our framework empirically against one baseline following a similar search fashion and two classical synthesis engines.
Under the outof-box solver setting with limited computational resources and without any prior knowledge from training, our framework is able to solve 141 of 214 tasks, significantly outperforming the baseline ESymbolic by 110.
In terms of the absolute number of solved tasks, the performance is comparable to two state-of-the-art solvers, CVC4 and EUSolver, which solve 129 and 153 respectively.
However, the two state-of-the-art solvers failed on 4 tasks solved by our framework.
When trained as a meta-solver, our framework is capable of accelerating the solving process by 2× to 100×. | We propose a meta-learning framework that learns a transferable policy from only weak supervision to solve synthesis tasks with different logical specifications and grammars. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:862 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Simultaneous machine translation models start generating a target sequence before they have encoded or read the source sequence.
Recent approaches for this task either apply a fixed policy on the Transformer, or a learnable monotonic attention on a weaker recurrent neural network based structure.
In this paper, we propose a new attention mechanism, Monotonic Multihead Attention (MMA), which introduces the monotonic attention mechanism to multihead attention.
We also introduce two novel interpretable approaches for latency control that are specifically designed for multiple attentions.
We apply MMA to the simultaneous machine translation task and demonstrate better latency-quality tradeoffs compared to MILk, the previous state-of-the-art approach.
Code will be released upon publication.
Simultaneous machine translation adds the capability of a live interpreter to machine translation: a simultaneous machine translation model starts generating a translation before it has finished reading the entire source sentence.
Such models are useful in any situation where translation needs to be done in real time.
For example, simultaneous models can translate live video captions or facilitate conversations between people speaking different languages.
In a usual neural machine translation model, the encoder first reads the entire sentence, and then the decoder writes the target sentence.
On the other hand, a simultaneous neural machine translation model alternates between reading the input and writing the output using either a fixed or learned policy.
Monotonic attention mechanisms fall into the learned policy category.
Recent work exploring monotonic attention variants for simultaneous translation includes: hard monotonic attention (Raffel et al., 2017) , monotonic chunkwise attention (MoChA) and monotonic infinite lookback attention (MILk) (Arivazhagan et al., 2019) .
MILk in particular has shown better quality / latency trade-offs than fixed policy approaches, such as wait-k (Ma et al., 2019) or wait-if-* (Cho & Esipova, 2016) policies.
MILk also outperforms hard monotonic attention and MoChA; while the other two monotonic attention mechanisms only consider a fixed reading window, MILk computes a softmax attention over all previous encoder states, which may be the key to its improved latency-quality tradeoffs.
These monotonic attention approaches also provide a closed form expression for the expected alignment between source and target tokens.
However, monotonic attention-based models, including the state-of-the-art MILk, were built on top of RNN-based models.
RNN-based models have been outperformed by the recent state-of-the-art Transformer model (Vaswani et al., 2017) , which features multiple encoder-decoder attention layers and multihead attention at each layer.
We thus propose monotonic multihead attention (MMA), which combines the strengths of multilayer multihead attention and monotonic attention.
We propose two variants, Hard MMA (MMA-H) and Infinite Lookback MMA (MMA-IL).
MMA-H is designed with streaming systems in mind where the attention span must be limited.
MMA-IL emphasizes the quality of the translation system.
We also propose two novel latency regularization methods.
The first encourages the model to be faster by directly minimizing the average latency.
The second encourages the attention heads to maintain similar positions, preventing the latency from being dominated by a single or a few heads.
The main contributions of this paper are: (1) A novel monotonic attention mechanism, monotonic multihead attention, which enables the Transformer model to perform online decoding.
This model leverages the power of the Transformer and the efficiency of monotonic attention.
(2) Better latency-quality tradeoffs compared to the MILk model, the previous state-of-the-art, on two standard translation benchmarks, IWSLT15 English-Vietnamese (En-Vi) and WMT15 German-English (De-En).
(3) Analyses on how our model is able to control the attention span and on the relationship between the speed of a head and the layer it belongs to.
We motivate the design of our model with an ablation study on the number of decoder layers and the number of decoder heads.
In this paper, we propose two variants of the monotonic multihead attention model for simultaneous machine translation.
By introducing two new targeted loss terms which allow us to control both latency and attention span, we are able to leverage the power of the Transformer architecture to achieve better quality-latency trade-offs than the previous state-of-the-art model.
We also present detailed ablation studies demonstrating the efficacy and rationale of our approach.
By introducing these stronger simultaneous sequence-to-sequence models, we hope to facilitate important applications, such as high-quality real-time interpretation between human speakers. | Make the transformer streamable with monotonic attention. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:863 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
This paper presents a novel two-step approach for the fundamental problem of learning an optimal map from one distribution to another.
First, we learn an optimal transport (OT) plan, which can be thought as a one-to-many map between the two distributions.
To that end, we propose a stochastic dual approach of regularized OT, and show empirically that it scales better than a recent related approach when the amount of samples is very large.
Second, we estimate a Monge map as a deep neural network learned by approximating the barycentric projection of the previously-obtained OT plan.
This parameterization allows generalization of the mapping outside the support of the input measure.
We prove two theoretical stability results of regularized OT which show that our estimations converge to the OT and Monge map between the underlying continuous measures.
We showcase our proposed approach on two applications: domain adaptation and generative modeling.
Mapping one distribution to another. Given two random variables X and Y taking values in X and Y respectively, the problem of finding a map f such that f (X) and Y have the same distribution, denoted f (X) ∼ Y henceforth, finds applications in many areas.
For instance, in domain adaptation, given a source dataset and a target dataset with different distributions, the use of a mapping to align the source and target distributions is a natural formulation BID22 since theory has shown that generalization depends on the similarity between the two distributions BID2 .
Current state-of-the-art methods for computing generative models such as generative adversarial networks BID21 , generative moments matching networks BID26 or variational auto encoders BID24 ) also rely on finding f such that f (X) ∼ Y .
In this setting, the latent variable X is often chosen as a continuous random variable, such as a Gaussian distribution, and Y is a discrete distribution of real data, e.g. the ImageNet dataset.
By learning a map f , sampling from the generative model boils down to simply drawing a sample from X and then applying f to that sample.
Mapping with optimality. Among the potentially many maps f verifying f (X) ∼ Y , it may be of interest to find a map which satisfies some optimality criterion.
Given a cost of moving mass from one point to another, one would naturally look for a map which minimizes the total cost of transporting the mass from X to Y .
This is the original formulation of Monge (1781) , which initiated the development of the optimal transport (OT) theory.
Such optimal maps can be useful in numerous applications such as color transfer BID17 , shape matching BID46 , data assimilation BID37 , or Bayesian inference BID31 .
In small dimension and for some specific costs, multi-scale approaches BID28 or dynamic formulations BID16 BID3 BID44 can be used to compute optimal maps, but these approaches become intractable in higher dimension as they are based on space discretization.
Furthermore, maps verifying f (X) ∼ Y might not exist, for instance when X is a constant but not Y .
Still, one would like to find optimal maps between distributions at least approximately.
The modern approach to OT relaxes the Monge problem by optimizing over plans, i.e. distributions over the product space X × Y, rather than maps, casting the OT problem as a linear program which is always feasible and easier to solve.
However, even with specialized algorithms such as the network simplex, solving that linear program takes O(n^3 log n) time, where n is the size of the discrete distribution (measure) support.
Large-scale OT. Recently, BID14 showed that introducing entropic regularization into the OT problem turns its dual into an easier optimization problem which can be solved using the Sinkhorn algorithm.
However, the Sinkhorn algorithm does not scale well to measures supported on a large number of samples, since each of its iterations has an O(n^2) complexity.
In addition, the Sinkhorn algorithm cannot handle continuous probability measures.
To address these issues, two recent works proposed to optimize variations of the dual OT problem through stochastic gradient methods.
BID20 proposed to optimize a "semi-dual" objective function.
However, their approach still requires O(n) operations per iteration and hence only scales moderately w.r.t. the size of the input measures.
BID1 proposed a formulation that is specific to the so-called 1-Wasserstein distance (unregularized OT using the Euclidean distance as a cost function).
This formulation has a simpler dual form with a single variable which can be parameterized as a neural network.
This approach scales better to very large datasets and handles continuous measures, enabling the use of OT as a loss for learning a generative model.
However, a drawback of that formulation is that the dual variable has to satisfy the non-trivial constraint of being a Lipschitz function.
As a workaround, BID1 proposed to use weight clipping between updates of the neural network parameters.
However, this makes unclear whether the learned generative model is truly optimized in an OT sense.
Besides these limitations, these works only focus on the computation of the OT objective and do not address the problem of finding an optimal map between two distributions.
We proposed two original algorithms that allow for
i) large-scale computation of regularized optimal transport
ii) learning an optimal map that moves one probability distribution onto another (the so-called Monge map).
To our knowledge, our approach introduces the first tractable algorithms for computing both the regularized OT objective and optimal maps in large-scale or continuous settings.
We believe that these two contributions enable a wider use of optimal transport strategies in machine learning applications.
Notably, we have shown how it can be used in an unsupervised domain adaptation setting, or in generative modeling, where a Monge map acts directly as a generator.
Our consistency results show that our approach is theoretically well-grounded.
An interesting direction for future work is to investigate the corresponding convergence rates of the empirical regularized optimal plans.
We believe this is a very complex problem since technical proofs regarding convergence rates of the empirical OT objective used e.g. in BID45 BID6 BID18 do not extend simply to the optimal transport plans.
... that we have π n = (id, T n )#µ n .
This also implies π n = T n so that (id, π n )#µ n = (id, T n )#µ n .
Hence, the second term in the right-hand side of (18) converges to 0 as a result of the stability of optimal transport BID47 [Theorem 5.20].
Now, we show that the first term also converges to 0 for ε n converging sufficiently fast to 0.
By definition of the pushforward operator, DISPLAYFORM0 g(x, T n (x)) dµ n (x) (19), and we can bound DISPLAYFORM1, where Y n = (y 1 , · · · , y n ) t and K g is the Lipschitz constant of g.
The first inequality follows from g being Lipschitz.
The next equality follows from the discrete closed form of the barycentric projection.
The last inequality is obtained through Cauchy-Schwarz.
We can now use the same arguments as in the previous proof.
A convergence result by BID10 shows that there exist positive constants (w.r.t. ε n ) M cn,µn,νn and λ cn,µn,νn such that (21) holds, where c n = (c(x 1 , y 1 ), · · · , c(x n , y n )).
The subscript indices indicate the dependences of each constant.
Hence, we see that choosing any (ε n ) such that (21) tends to 0 provides the result.
In particular, we can take ε n = λ cn,µn,νn ln(n 2 ||Y n || 1/2 R n×d ,2 M cn,µn,νn ) which suffices to have the convergence of (15) to 0 for Lipschitz functions g ∈ C l (R d × R d ).
This proves the weak convergence of (id,π εn n )#µ n to (id, f )#µ. | Learning optimal mapping with deepNN between distributions along with theoretical guarantees. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:864 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Learning effective text representations is a key foundation for numerous machine learning and NLP applications.
While the celebrated Word2Vec technique yields semantically rich word representations, it is less clear whether sentence or document representations should be built upon word representations or from scratch.
Recent work has demonstrated that a distance measure between documents called \emph{Word Mover's Distance} (WMD) that aligns semantically similar words, yields unprecedented KNN classification accuracy.
However, WMD is very expensive to compute, and is harder to apply beyond simple KNN than feature embeddings.
In this paper, we propose the \emph{Word Mover's Embedding } (WME), a novel approach to building an unsupervised document (sentence) embedding from pre-trained word embeddings.
Our technique extends the theory of \emph{Random Features} to show convergence of the inner product between WMEs to a positive-definite kernel that can be interpreted as a soft version of (inverse) WMD.
The proposed embedding is more efficient and flexible than WMD in many situations.
As an example, WME with a simple linear classifier reduces the computational cost of WMD-based KNN \emph{from cubic to linear} in document length and \emph{from quadratic to linear} in number of samples, while simultaneously improving accuracy.
In experiments on 9 benchmark text classification datasets and 22 textual similarity tasks, the proposed technique consistently matches or outperforms state-of-the-art techniques, with significantly higher accuracy on problems of short length. | A novel approach to building an unsupervised document (sentence) embeddings from pre-trained word embeddings | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:865 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We present a new approach to assessing the robustness of neural networks based on estimating the proportion of inputs for which a property is violated.
Specifically, we estimate the probability of the event that the property is violated under an input model.
Our approach critically varies from the formal verification framework in that when the property can be violated, it provides an informative notion of how robust the network is, rather than just the conventional assertion that the network is not verifiable.
Furthermore, it provides an ability to scale to larger networks than formal verification approaches.
Though the framework still provides a formal guarantee of satisfiability whenever it successfully finds one or more violations, these advantages do come at the cost of only providing a statistical estimate of unsatisfiability whenever no violation is found.
Key to the practical success of our approach is an adaptation of multi-level splitting, a Monte Carlo approach for estimating the probability of rare events, to our statistical robustness framework.
We demonstrate that our approach is able to emulate formal verification procedures on benchmark problems, while scaling to larger networks and providing reliable additional information in the form of accurate estimates of the violation probability.
The robustness of deep neural networks must be guaranteed in mission-critical applications where their failure could have severe real-world implications.
This motivates the study of neural network verification, in which one wishes to assert whether certain inputs in a given subdomain of the network might lead to important properties being violated BID29 BID1 .
For example, in a classification task, one might want to ensure that small perturbations of the inputs do not lead to incorrect class labels being predicted BID23 BID8 .
The classic approach to such verification has focused on answering the binary question of whether there exist any counterexamples that violate the property of interest.
We argue that this approach has two major drawbacks.
Firstly, it provides no notion of how robust a network is whenever a counterexample can be found.
Secondly, it creates a computational problem whenever no counterexamples exist, as formally verifying this can be very costly and does not currently scale to the size of networks used in many applications.
To give a demonstrative example, consider a neural network for classifying objects in the path of an autonomous vehicle.
It will almost certainly be infeasible to train such a network that is perfectly robust to misclassification.
Furthermore, because the network will most likely need to be of significant size to be effective, it is unlikely to be tractable to formally verify the network is perfectly robust, even if such a network exists.
Despite this, it is still critically important to assess the robustness of the network, so that manufacturers can decide whether it is safe to deploy.
To address the shortfalls of the classic approach, we develop a new measure of intrinsic robustness of neural networks based on the probability that a property is violated under an input distribution model.
Our measure is based on two key insights.
The first is that for many, if not most, applications, full formal verification is neither necessary nor realistically achievable, such that one actually desires a notion of how robust a network is to a set of inputs, not just a binary answer as to whether it is robust or not.
The second is that most practical applications have some acceptable level of risk, such that it is sufficient to show that the probability of a violation is below a certain threshold, rather than confirm that this probability is exactly zero.
By providing a probability of violation, our approach is able to address the needs of applications such as our autonomous vehicle example.
If the network is not perfectly robust, it provides an explicit measure of exactly how robust the network is.
If the network is perfectly robust, it is still able to tractably assert that a violation event is "probably-unsatisfiable".
That is, it is able to statistically conclude that the violation probability is below some tolerance threshold to true zero, even for large networks for which formal verification would not be possible.
Calculating the probability of violation is still itself a computationally challenging task, corresponding to estimating the value of an intractable integral.
In particular, in most cases, violations of the target property constitute (potentially extremely) rare events.
Consequently, the simple approach of constructing a direct Monte Carlo estimate by sampling from the input model and evaluating the property will be expensive and only viable when the event is relatively common.
To address this, we adapt an algorithm from the Monte Carlo literature, adaptive multi-level splitting (AMLS) BID9 BID18 , to our network verification setting.
AMLS is explicitly designed for prediction of rare events and our adaptation means that we are able to reliably estimate the probability of violation, even when the true value is extremely small.
Our resulting framework is easy to implement, scales linearly in the cost of the forward operation of the neural network, and is agnostic both to the network architecture and input model.
Assumptions such as piecewise linearity, Lipschitz continuity, or a specific network form are not required.
Furthermore, it produces a diversity of samples which violate the property as a side-product.
To summarize, our main contributions are:
• Reframing neural network verification as the estimation of the probability of a violation, thereby providing a more informative robustness metric for non-verifiable networks;
• Adaptation of the AMLS method to our verification framework to allow the tractable estimation of our metric for large networks and rare events;
• Validation of our approach on several models and datasets from the literature.
We have introduced a new measure for the intrinsic robustness of a neural network, and have validated its utility on several datasets from the formal verification and deep learning literatures.
Our approach was able to exactly emulate formal verification approaches for satisfiable properties and provide high confidence, accurate predictions for properties which were not.
The two key advantages it provides over previous approaches are:
a) providing an explicit and intuitive measure for how robust networks are to satisfiable properties; and
b) providing improved scaling over classical approaches for identifying unsatisfiable properties.
Despite providing a more informative measure of how robust a neural network is, our approach may not be appropriate in all circumstances.
In situations where there is an explicit and effective adversary, instead of inputs being generated by chance, we may care more about how far away the single closest counterexample is to the input, rather than the general prevalence of counterexamples.
Here our method may fail to find counterexamples because they reside on a subset with probability less than P min ; the counterexamples may even reside on a subset of the input space with measure zero with respect to the input distribution.
On the other hand, there are many practical scenarios, such as those discussed in the introduction, where either it is unrealistic for there to be no counterexamples close to the input, the network (or input space) is too large to realistically permit formal verification, or where potential counterexamples are generated by chance rather than by an adversary.
We believe that for these scenarios our approach offers significant advantages to formal verification approaches.Going forward, one way the efficiency of our approach could be improved further is by using a more efficient base MCMC kernel in our AMLS estimator, that is, replace line 12 in Algorithm 1 with a more efficient base inference scheme.
The current MH scheme was chosen on the basis of simplicity and the fact it already gave effective empirical performance.
However, using more advanced inference approaches, such as gradient-based approaches like Langevin Monte Carlo (LMC) BID21 or Hamiltonian Monte Carlo (Neal, 2011) , could provide significant speedups by improving the mixing of the Markov chains, thereby reducing the number of required MCMC transitions.
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:866 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Recent pretrained transformer-based language models have set state-of-the-art performances on various NLP datasets.
However, despite their great progress, they suffer from various structural and syntactic biases.
In this work, we investigate the lexical overlap bias, e.g., the model classifies two sentences that have a high lexical overlap as entailing regardless of their underlying meaning.
To improve the robustness, we enrich input sentences of the training data with their automatically detected predicate-argument structures.
This enhanced representation allows the transformer-based models to learn different attention patterns by focusing on and recognizing the major semantically and syntactically important parts of the sentences.
We evaluate our solution for the tasks of natural language inference and grounded commonsense inference using the BERT, RoBERTa, and XLNET models.
We evaluate the models' understanding of syntactic variations, antonym relations, and named entities in the presence of lexical overlap.
Our results show that the incorporation of predicate-argument structures during fine-tuning considerably improves the robustness, e.g., about 20pp on discriminating different named entities, while it incurs no additional cost at the test time and does not require changing the model or the training procedure.
Transformer-based language models like BERT (Devlin et al., 2019) , XLNET (Yang et al., 2019) , and RoBERTa (Liu et al., 2019) achieved state-of-the-art performances on various NLP datasets including those of natural language inference (NLI) (Condoravdi et al., 2003; Dagan et al., 2006) , and grounded commonsense reasoning (GCI) (Zellers et al., 2018) .
Natural language inference is the task of determining whether the hypothesis entails, contradicts, or is neutral to the given premise.
1 Natural language inference is the task of determining whether the hypothesis entails, contradicts, or is neutral to the given premise.
Grounded commonsense reasoning, as it is defined by the SWAG dataset (Zellers et al., 2018) , is the task of reasoning about what is happening and predict what might come next given a premise that is a partial description about a situation.
Despite their great progress on individual datasets, pretrained language models suffer from various biases, including lexical overlap (McCoy et al., 2019b) .
For instance, given the premise "Neil Armstrong was the first man who landed on the Moon", the model may recognize the sentence "Moon was the first man who landed on the Neil Armstrong" as an entailing hypothesis or a plausible ending because it has a high lexical overlap with the premise.
In this paper, we enhance the text of the input sentences of the training data, which is used for fine-tuning the pretrained language model on the target task, with automatically detected predicateargument structures.
Predicate-argument structures identify who did what to whom for each sentence.
The motivation of using predicate-argument structures is to provide a higher-level abstraction over different surface realizations of the same underlying meaning.
As a result, they can help the model to focus on the more important parts of the sentence and abstract away from the less relevant details.
We show that adding this information during fine-tuning considerably improves the robustness of the examined models against various adversarial settings including those that evaluate models' understanding of syntactic variations, antonym relations, and named entities in the presence of high lexical overlap.
Our solution imposes no additional cost over the linguistic-agnostic counterpart at the test time since it does not require predicateargument structures for the test data.
Besides, compared to existing methods for handling the lexical overlap bias Clark et al., 2019; Mahabadi and Henderson, 2019) , it does not require introducing new models or training procedures and the model's complexity remains unchanged.
The contributions of this work are as follows:
1. We provide three adversarial evaluation sets for the SWAG dataset to evaluate the lexical overlap bias.
These adversarial test sets evaluate the model's understanding of syntactic variation, antonym relation, and named entities.
The performance of all the examined models drops substantially on these datasets.
We will release the datasets to encourage the community to develop models that better capture the semantics of the task instead of relying on surface features.
2. We propose a simple solution for improving the robustness against the lexical overlap bias by adding predicate-argument structures to the fine-tuning data.
Our solution results in no additional cost during the test time, it does not require oracle predicate-argument structures, and it also does not require any changes in the model or the training procedure.
We will release the augmented training data for MultiNLI and SWAG training data.
The findings of this work include:
• While lexical overlap is a known bias for NLI, we show that models that are fine-tuned on SWAG are more prone to this bias.
• The RoBERTa model performs the best on all adversarial test sets and is therefore more robust against the lexical overlap bias.
• Among the examined evaluation settings, discriminating different named entities in the presence of high lexical overlap is the most challenging.
The best accuracy, i.e., the accuracy of the RoBERTa-large model fine-tuned with augmented training data, is 59%.
• Previous work showed that pretrained transformer-based language models capture various linguistic phenomena, e.g., POS tags, syntax, named entities, and predicate-argument structures, without explicit supervision (Hewitt and Manning, 2019; Tenney et al., 2019) .
Yet, our work shows that explicit incorporation of such information is beneficial for improving robustness.
In this paper, we propose a solution to improve the robustness of the state-of-the-art NLP models, i.e., BERT, XLNET, and RoBERTa, against the lexical overlap bias.
We improve the model robustness by extending the input sentences with their corresponding predicate-argument structures.
The addition of these structures helps the transformer model to better recognize the major semantically and syntactically important parts of the sentences and learn more informative attention patterns accordingly.
Our finding, regarding the benefit of explicit incorporation of predicate-argument structures, is despite the fact that transformer-based models already capture various linguistic phenomena, including predicate-argument structures (Tenney et al., 2019) .
Our proposed solution (1) results in considerable improvements in the robustness, e.g., 20pp in accuracy, (2) incurs no additional cost during the test time, (3) does not require any change in the model or the training procedure, and (4) works with noisy predicate-argument structures.
We evaluate the effectiveness of our solution on the task of natural language inference and grounded commonsense reasoning.
However, since our solution only includes enhancing the training examples, it is not limited to a specific task and it is applicable to other tasks and datasets that suffer from this bias, e.g., paraphrase identification (Zhang et al., 2019) , and question answering (Jia and Liang, 2017) .
We will release the new adversarial evaluation sets for the lexical overlap bias as well as the augmented training data for the MultiNLI and SWAG datasets upon publication.
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:867 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We propose a method for quantifying uncertainty in neural network regression models when the targets are real values on a $d$-dimensional simplex, such as probabilities.
We show that each target can be modeled as a sample from a Dirichlet distribution, where the parameters of the Dirichlet are provided by the output of a neural network, and that the combined model can be trained using the gradient of the data likelihood.
This approach provides interpretable predictions in the form of multidimensional distributions, rather than point estimates, from which one can obtain confidence intervals or quantify risk in decision making.
Furthermore, we show that the same approach can be used to model targets in the form of empirical counts as samples from the Dirichlet-multinomial compound distribution.
In experiments, we verify that our approach provides these benefits without harming the performance of the point estimate predictions on two diverse applications: (1) distilling deep convolutional networks trained on CIFAR-100, and (2) predicting the location of particle collisions in the XENON1T Dark Matter detector.
Artificial neural networks are typically trained by maximizing the conditional likelihood of output targets given input features.
Each target is modeled as a sample from a distribution p(y|x) parameterized by the output activity of the neural network, where the choice of parametric distribution is implied by the choice of objective function.
Thus, the support of the probability distribution should match the target space, but in practice, this is often not the case.
Today, the vast majority of neural network output layers implicitly model the targets as samples from one of four distributions: a binomial, a categorical, a Gaussian, or a Laplacian distribution - respectively corresponding to the binomial cross-entropy loss, multi-class cross-entropy loss, mean squared error, and mean absolute error.
These distributions are commonly used even when the target space does not match the support, because the gradient calculations for these distributions are simple (and easy to compute) when paired with the appropriate output layer activation functions.
These distributions dominate to such a degree that few alternatives are even available in most common deep learning software packages such as Keras BID3 and PyTorch BID15 .
Alternatives do exist - using neural networks to parameterize more complex distributions is not new.
The standard regression approach can be generalized to a heteroskedastic Gaussian output layer BID14 BID18 , where the neural network predicts both a mean and a variance for each target.
Multi-modal distributions can be modeled with a mixture density BID1 .
And more recently, the Gamma output layer was proposed to model targets in R >0 BID13 .
In principle, any parametric distribution with well-defined gradients could serve as a probabilistic prediction at the output of a neural network model.
The approach proposed here is simpler than the one taken by Conditional Variational Autoencoders (CVAEs) BID10 BID16 .
While CVAEs can, in theory, model arbitrary high-dimensional conditional distributions, computing the exact conditional likelihood of a target requires marginalizing over intermediate representations, making exact gradient calculations intractable.
Thus, training a CVAE requires approximating the gradients through sampling.
In this work we show that restricting the output to a particular class of distributions, namely the Dirichlet or Dirichlet-multinomial compound distributions, enables a calculation of the exact likelihood of the targets and the exact gradients.
Interpreting the output of a neural network classifier as a probability distribution has obvious benefits.
One can derive different point estimates, define confidence intervals, or integrate over possible outcomes - a necessity for managing risk in decision making.
Potentially, it could also lead to better learning - matching the output support to the target space essentially constrains the learning problem by incorporating outside knowledge.
Allowing the network to output "uninformative" distributions - e.g. a uniform distribution over the support - could make training faster by allowing the network to focus on the easiest training examples first - a self-guided form of curriculum learning.
In the present work, we derive gradients for the Beta distribution, Dirichlet distribution, and Dirichlet-multinomial compound distribution.
We then propose activation functions that stabilize numerical optimization with stochastic gradient descent.
Finally, we demonstrate through experiments that this approach can be used to model three common types of targets: (1) targets over the multivariate simplex, (2) real-valued scalar targets with lower and upper bounds, and (3) nonnegative integer-valued counts (samples from the Dirichlet-multinomial compound distribution).
The experiments demonstrate that our approach provides interpretable predictions with learned uncertainty, without decreasing the performance of the point estimates.
In most artificial neural network models, supervised learning corresponds to maximizing the NLL of the training set targets conditioned on the inputs.
In this interpretation, each neural network prediction is a distribution over possible target values.
While the vast majority of neural network classifiers in use today rely on a small set of distributions -the binomial distribution, the categorical distribution, the Gaussian distribution, or the Laplacian distribution -there are many situations for which none of these distributions are appropriate.
Figure 5: A deep Dirichlet-multinomial autoencoder was used to learn a two-dimensional embedding of simulated samples from 100-dimensional multinomials.
The 10 different clusters are readily apparent in the embedding of the validation set examples.
The samples shown are colored by their true cluster identity.
Here we propose the use of the Beta distribution, Dirichlet distribution, and the Dirichlet-multinomial compound distribution as outputs of neural networks.
We show that a neural network can parameterize these distributions and the entire model can be trained using gradient descent on the NLL of the training data targets.
This provides a particularly elegant approach to modelling certain types of network targets.
The Beta and Dirichlet provide a better way to model targets that lie on a simplex, such as probabilities or real values that lie on a bounded interval, and the Dirichlet-multinomial enables us to model vectors of counts using the elegant mathematical properties of the Dirichlet.
The predicted distributions have the correct support, so we can use them in decision making and for confidence intervals.
Moreover, we have demonstrated through experiments that the expectation over the Dirichlet serves as a good point estimate, with a mean squared error that is similar to optimizing the MSE directly. | Neural network regression should use Dirichlet output distribution when targets are probabilities in order to quantify uncertainty of predictions. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:868 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Deep neural networks have achieved outstanding performance in many real-world applications at the expense of huge computational resources.
The DenseNet, one of the recently proposed neural network architectures, has achieved state-of-the-art performance in many visual tasks.
However, it has great redundancy due to the dense connections of the internal structure, which leads to high computational costs in training such dense networks.
To address this issue, we design a reinforcement learning framework to search for efficient DenseNet architectures with layer-wise pruning (LWP) for different tasks, while retaining the original advantages of DenseNet, such as feature reuse, short paths, etc.
In this framework, an agent evaluates the importance of each connection between any two block layers, and prunes the redundant connections.
In addition, a novel reward-shaping trick is introduced to make DenseNet reach a better trade-off between accuracy and floating point operations (FLOPs).
Our experiments show that DenseNet with LWP is more compact and efficient than existing alternatives. | Learning to Search Efficient DenseNet with Layer-wise Pruning | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:869 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Unsupervised learning is about capturing dependencies between variables and is driven by the contrast between the probable vs improbable configurations of these variables, often either via a generative model which only samples probable ones or with an energy function (unnormalized log-density) which is low for probable ones and high for improbable ones.
Here we consider learning both an energy function and an efficient approximate sampling mechanism for the corresponding distribution.
Whereas the critic (or discriminator) in generative adversarial networks (GANs) learns to separate data and generator samples, introducing an entropy maximization regularizer on the generator can turn the interpretation of the critic into an energy function, which separates the training distribution from everything else, and thus can be used for tasks like anomaly or novelty detection.
This paper is motivated by the older idea of sampling in latent space rather than data space because running a Monte-Carlo Markov Chain (MCMC) in latent space has been found to be easier and more efficient, and because a GAN-like generator can convert latent space samples to data space samples.
For this purpose, we show how a Markov chain can be run in latent space whose samples can be mapped to data space, producing better samples.
These samples are also used for the negative phase gradient required to estimate the log-likelihood gradient of the data space energy function.
To maximize entropy at the output of the generator, we take advantage of recently introduced neural estimators of mutual information.
We find that in addition to producing a useful scoring function for anomaly detection, the resulting approach produces sharp samples (like GANs) while covering the modes well, leading to high Inception and Fréchet scores.
The early work on deep learning relied on unsupervised learning BID13 BID2 BID17 to train energy-based models BID18, in particular Restricted Boltzmann Machines, or RBMs.
However, it turned out that training energy-based models without an analytic form for the normalization constant is very difficult, because of the challenge of estimating the gradient of the partition function, also known as the negative phase part of the log-likelihood gradient (described in more detail below, Sec. 2).
Several algorithms were proposed for this purpose, such as Contrastive Divergence BID12 and Stochastic Maximum Likelihood BID28 BID26, relying on Monte-Carlo Markov Chains (MCMC) to iteratively sample from the energy-based model.
However, because they appear to suffer from either high bias or high variance (due to long mixing times), training of RBMs and other Boltzmann machines has not remained competitive after the introduction of variational auto-encoders BID16 and generative adversarial networks, or GANs.
In this paper, we revisit the question of training energy-based models, taking advantage of recent advances in GAN-related research, and propose a novel approach to training energy functions and sampling from them, called EnGAN.
The main inspiration for the proposed solution is the earlier observation BID4 made on stacks of auto-encoders that sampling in latent space (and then applying a decoder to map back to data space) led to faster mixing and more efficient sampling.
The authors observed that whereas the data manifold is generally very complex and curved, the corresponding distribution in latent space tends to be much simpler and flatter.
This was verified visually by interpolating in latent space and projecting back to data space through the decoder, observing that the resulting samples look like data samples (i.e., the latent space manifold is approximately convex, with most points interpolated between examples encoded in latent space also having high probability).
We propose a related approach, EnGAN, which also provides two energy functions, one in data space and one in latent space.
A key ingredient of the proposed approach is the need to regularize the generator (playing the role of the decoder in auto-encoders, but with no need for an encoder) so as to increase its entropy.
This is needed to make sure to produce negative examples that can kill off spurious minima of the energy function.
This need was first identified by BID15, who showed that in order for an approximate sampler to match the density associated with an energy function, a compromise must be reached between sampling low-energy configurations and obtaining a high-entropy distribution.
However, estimating and maximizing the entropy of a complex high-dimensional distribution is not trivial, and we take advantage for this purpose of very recently proposed GAN-based approaches for maximizing mutual information BID1 BID24, since the mutual information between the input and the output of the generator is equal to the entropy at the output of the generator.
In this context, the main contributions of this paper are the following:
• proposing EnGAN, a general architecture, sampling and training framework for energy functions, taking advantage of an estimator of mutual information between latent variables and generator output and approximating the negative phase samples with MCMC in latent space,
• showing that the resulting energy function can be successfully used for anomaly detection, improving on recently published results with energy-based models,
• showing that EnGAN produces sharp images - with competitive Inception and Fréchet scores - which also cover modes better than standard GANs and WGAN-GPs, while not suffering from the common blurriness issue of many maximum-likelihood generative models.
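A rough sketch of the latent-space sampling idea described above is given below: composing a data-space energy with the generator yields a latent-space energy that a short Langevin-style chain can explore. The generator/energy interfaces, step sizes, and chain length are placeholders rather than the paper's actual settings.

```python
import torch

def latent_langevin_samples(generator, energy, z_init, steps=50, step_size=0.01, noise_scale=0.01):
    """Run a crude Langevin chain on E(G(z)) and map the final z back to data space."""
    z = z_init.clone().requires_grad_(True)
    for _ in range(steps):
        e = energy(generator(z)).sum()
        grad, = torch.autograd.grad(e, z)
        with torch.no_grad():
            z = z - 0.5 * step_size * grad + noise_scale * torch.randn_like(z)
        z.requires_grad_(True)
    # these samples would serve as the negative-phase examples for the energy update
    return generator(z).detach()
```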
We proposed EnGAN, an energy-based generative model that produces energy estimates using an energy model and a generator that produces fast approximate samples.
This takes advantage of novel methods to maximize the entropy at the output of the generator using a GAN-like technique.
We have shown that our energy model learns good energy estimates using visualizations in toy 2D data and through performance in unsupervised anomaly detection.
We have also shown that our generator produces samples of high perceptual quality by measuring Inception and Fréchet scores, and shown that EnGAN is robust to the respective weaknesses of GAN models (mode dropping) and maximum-likelihood energy-based models (spurious modes).
We found that running an MCMC in latent space rather than in data space (by composing the generator and the data-space energy to obtain a latent-space energy) works substantially better than running the MCMC in data space.
| We introduced entropy maximization to GANs, leading to a reinterpretation of the critic as an energy function. | {"domains": ["artificial_intelligence"], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization"} | scitldr_aic:train:87 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We consider the setting of an agent with a fixed body interacting with an unknown and uncertain external world.
We show that models trained to predict proprioceptive information about the agent's body come to represent objects in the external world.
In spite of being trained with only internally available signals, these dynamic body models come to represent external objects through the necessity of predicting their effects on the agent's own body.
That is, the model learns holistic persistent representations of objects in the world, even though the only training signals are body signals.
Our dynamics model is able to successfully predict distributions over 132 sensor readings over 100 steps into the future and we demonstrate that even when the body is no longer in contact with an object, the latent variables of the dynamics model continue to represent its shape.
We show that active data collection by maximizing the entropy of predictions about the body---touch sensors, proprioception and vestibular information---leads to learning of dynamic models that show superior performance when used for control.
We also collect data from a real robotic hand and show that the same models can be used to answer questions about properties of objects in the real world.
Videos with qualitative results of our models are available at https://goo.gl/mZuqAV.
Situation awareness is the perception of the elements in the environment within a volume of time and space, and the comprehension of their meaning, and the projection of their status in the near future.
- Endsley (1987)
As artificial intelligence moves off of the server and out into the world at large; be this the virtual world, in the form of simulated walkers, climbers and other creatures BID26, or the real world in the form of virtual assistants, self driving vehicles BID6, and household robots BID29; we are increasingly faced with the need to build systems that understand and reason about the world around them.
When building systems like this it is natural to think of the physical world as breaking into two parts.
The first part is the platform, the part we design and build, and therefore know quite a lot about; and the second part is everything else, which comprises all the strange and exciting situations that the platform might encounter.
As designers, we have very little control over the external part of the world, and the variety of situations that might arise are too numerous to anticipate in advance.
Additionally, while the state of the platform is readily accessible (e.g. through deployment of integrated sensors), the state of the external world is generally not available to the system.
The platform hosts any sensors and actuators that are part of the system, and importantly it can be relied on to be the same across the wide variety situations where the system might be deployed.
A virtual assistant can rely on having access to the camera and microphone on your smart phone, and the control system for a self driving car can assume it is controlling a specific make and model of vehicle, and that it has access to any specialized hardware installed by the manufacturer.
These consistency assumptions hold regardless of what is happening in the external world.
This same partitioning of the world occurs naturally for living creatures as well.
(Figure 1: Illustration of a preprogrammed grasp and release cycle of a single episode of the MPL hand. The target block is only perceivable to the agent through the constraints it imposes on the movement of the hand. Note that the shape of the object is correctly predicted even when the hand is not in contact with it. That is, the hand neural network sensory model has learned persistent representations of the external world, which enable it to be aware of object properties even when not touching the objects.)
As a human being your platform is your body; it maintains a constant size and shape throughout your life (or at least these change vastly slower than the world around you), and you can hopefully rely on the fact that no matter what demands tomorrow might make of you, you will face them with the same number of fingers and toes.
This story of partitioning the world into the self and the other, that exchange information through the body, suggests an approach to building models for reasoning about the world.
If the body is a consistent vehicle through which an agent interacts with the world and proprioceptive and tactile senses live at the boundary of the body, then predictive models of these senses should result in models that represent external objects, in order to accurately predict their future effects on the body.
This is the approach we take in this paper.
We consider two robotic hand bodies, one in simulation and one in reality.
The hands are induced to grasp a variety of target objects (see Figure 1 for an example) and we build forward models of their proprioceptive signals.
The target objects are perceivable only through the constraints they place on the movement of the body, and we show that this information is sufficient for the dynamics models to form holistic, persistent representations of the targets.
We also show that we can use the learned dynamics models for planning, and that we can elicit behaviors from the planner that depend on external objects, in spite of those objects not being included in the observations directly (see Figure 7).
Our simulated body is a model of the hand of the Johns Hopkins Modular Prosthetic Limb BID30, realized in MuJoCo BID58.
The model is actuated by 13 motors, each capable of exerting a bidirectional force on a single joint.
The model is also instrumented with a series of sensors measuring angles and torques of the joints, as well as pressure sensors measuring contact forces at several locations across its surface.
There are also inertial measurement units located at the end of each finger which measure translational and rotational accelerations.
In total there are 132 sensor measurements whose values we predict using our dynamics model.
Our real body is the Shadow Dexterous Hand, which is a real robotic hand with 20 degrees of freedom of control.
This allows us to show that our ideas apply not only in simulation, but succeed in the real world as well.
The Shadow Hand is instrumented with sensors measuring the tension of the tendons driving the fingers, and also has pressure sensors on the pad of each fingertip that measure contact forces with objects in the world.
We apply the same techniques used on the simulated model to data collected from this real platform and use the resulting model to make predictions about states of external objects in the real world.
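As a concrete, if simplified, picture of the kind of proprioceptive dynamics model described here, the sketch below predicts a diagonal Gaussian over the next sensor vector from past sensors and motor commands; the layer sizes are assumptions, and the actual model predicts richer distributions over much longer horizons.

```python
import torch
import torch.nn as nn

class SensorDynamicsModel(nn.Module):
    """Recurrent model mapping (sensor, action) histories to a distribution over next sensors."""
    def __init__(self, n_sensors=132, n_actions=13, hidden=256):
        super().__init__()
        self.rnn = nn.LSTM(n_sensors + n_actions, hidden, batch_first=True)
        self.mu = nn.Linear(hidden, n_sensors)
        self.log_sigma = nn.Linear(hidden, n_sensors)

    def forward(self, sensors, actions):
        h, _ = self.rnn(torch.cat([sensors, actions], dim=-1))
        return torch.distributions.Normal(self.mu(h), self.log_sigma(h).exp())

# Training maximizes log_prob of the next-step sensor readings; active data collection can
# then rank candidate action sequences by the entropy of the predicted distributions.
```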
In this paper we showed that learning a forward predictive model of proprioception we obtain models that can be used to answer questions and reason about objects in the external world.
We demonstrated this in simulation with a series of diagnostic tasks where we use the model features to identify properties of external objects, and also with a control task where we show that we can plan in the model to achieve objectives that were not seen during training.
We also showed that the same principles we applied to our simulated models are also successful in reality.
We collected data from a real robotic platform and used the same modelling techniques to predict the orientation of a grasped block.
Appendix A, Deriving the Rényi entropy of a mixture of Gaussians: the last step can be computed with Mathematica, and is also given in Bromiley (2003).
| We train predictive models on proprioceptive information and show they represent properties of external objects. | {"domains": ["artificial_intelligence"], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization"} | scitldr_aic:train:870 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Inspired by the success of generative adversarial networks (GANs) in image domains, we introduce a novel hierarchical architecture for learning characteristic topological features from a single arbitrary input graph via GANs.
The hierarchical architecture consisting of multiple GANs preserves both local and global topological features, and automatically partitions the input graph into representative stages for feature learning.
The stages facilitate reconstruction and can be used as indicators of the importance of the associated topological structures.
Experiments show that our method produces subgraphs retaining a wide range of topological features, even in early reconstruction stages.
This paper contains original research on combining the use of GANs and graph topological analysis.
Graphs have great versatility, able to represent complex systems with diverse relationships between objects and data.
With the rise of social networking, and the importance of relational properties to the "big data" phenomenon, it has become increasingly important to develop ways to automatically identify key structures present in graph data.
Identification of such structures is crucial in understanding how a social network forms, or in making predictions about future network behavior.
To this end, a large number of graph analysis methods have been proposed to analyze network topology at the node BID57, community BID41 BID55, and global levels BID64.
Each level of analysis is greatly influenced by network topology, and thus far algorithms cannot be adapted to work effectively for arbitrary network structures.
Modularity-based community detection BID65 works well for networks with separate clusters, whereas edge-based methods BID37 are suited to dense networks.
Similarly, when performing graph sampling, Random Walk (RW) is suitable for sampling paths BID51, whereas Forest Fire (FF) is useful for sampling clusters BID52.
When it comes to graph generation, Watts-Strogatz (WS) graph models BID62 can generate graphs with small-world features, whereas Barabási-Albert (BA) graph models BID31 simulate super hubs and regular nodes according to the scale-free features of the network.
However, real-world networks typically have multiple topological features.
Considering real-world networks also introduces another issue that traditional graph analysis methods struggle with: having a mere single instance of a graph (e.g. the transaction graph for a particular bank), making it difficult to identify the key topological properties in the first place.
In particular, we are interested in both "local topological features" (such as the presence of subgraph structures like triangles) and "global topological features" such as degree distribution.
Instead of directly analyzing the entire topology of a graph, GTI first divides the graph into several hierarchical layers.
A hierarchical view of a graph can split the graph by local and global topological features, leading to a better understanding of the graph BID63.
As different layers have different topological features, GTI uses separate GANs to learn each layer and the associated features.
By leveraging GANs' renowned feature identification BID42 on each layer, GTI has the ability to automatically capture arbitrary topological features from a single input graph.
Figure 1 demonstrates how GTI can learn to reproduce an input graph where a single GAN cannot.
(Figure 1: How GTI recovers the original graph while naive GAN methods do not: the DCGAN output looks like a complete graph, whereas GTI can capture the super-hub structure of node 3 and node 2.)
In addition to learning topological features from the input graph, the GTI method defines a reconstruction process for reproducing the original graph via a series of reconstruction stages (the number of which is automatically learned during training).
As stages are ranked in order of their contribution to the full topology of the original graph, early stages can be used as an indicator of the most important topological features.
Our focus in this initial work is on the method itself and demonstrating our ability to learn these important features quickly (via demonstrating the retention of identifiable structures and comparisons to graph sampling methods).
This paper leveraged the success of GANs in (unsupervised) image generation to tackle a fundamental challenge in graph topology analysis: a model-agnostic approach for learning graph topological features.
By using a GAN for each hierarchical layer of the graph, our method allowed us to reconstruct diverse input graphs very well, as well as preserving both local and global topological features when generating similar (but smaller) graphs.
In addition, our method identifies important features through the definition of the reconstruction stages.
A clear direction of future research is in extending the model-agnostic approach to allow the input graph to be directed and weighted, and with edge attributes.
| A GAN based method to learn important topological features of an arbitrary input graph. | {"domains": ["artificial_intelligence"], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization"} | scitldr_aic:train:871 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
In Chinese societies, superstition is of paramount importance, and vehicle license plates with desirable numbers can fetch very high prices in auctions.
Unlike other valuable items, license plates are not allocated an estimated price before auction.
I propose that the task of predicting plate prices can be viewed as a natural language processing (NLP) task, as the value depends on the meaning of each individual character on the plate and its semantics.
I construct a deep recurrent neural network (RNN) to predict the prices of vehicle license plates in Hong Kong, based on the characters on a plate.
I demonstrate the importance of having a deep network and of retraining.
Evaluated on 13 years of historical auction prices, the deep RNN's predictions can explain over 80 percent of price variations, outperforming previous models by a significant margin.
I also demonstrate how the model can be extended to become a search engine for plates and to provide estimates of the expected price distribution.
Chinese societies place great importance on numerological superstition.
Numbers such as 8 (representing prosperity) and 9 (longevity) are often used solely because of the desirable qualities they represent.
For example, the Beijing Olympic opening ceremony occurred on 2008/8/8 at 8 p.m., the Bank of China (Hong Kong) opened on 1988/8/8, and the Hong Kong dollar is linked to the U.S. dollar at a rate of around 7.8.
License plates represent a very public display of numbers that people can own, and can therefore unsurprisingly fetch an enormous amount of money.
Governments have not overlooked this, and plates of value are often auctioned off to generate public revenue.
Unlike the auctioning of other valuable items, however, license plates generally do not come with a price estimate, which has been shown to be a significant factor affecting the sale price BID2 BID23 .
The large number of character combinations and of plates per auction makes it difficult to provide reasonable estimates.
This study proposes that the task of predicting a license plate's price based on its characters can be viewed as a natural language processing (NLP) task.
Whereas in the West numbers can be desirable (such as 7) or undesirable (such as 13) in their own right for various reasons, in Chinese societies numbers derive their superstitious value from the characters they rhyme with.
As the Chinese language is logosyllabic and analytic, combinations of numbers can stand for sound-alike phrases.
Combinations of numbers that rhyme with phrases that have positive connotations are thus desirable.
For example, "168," which rhythms with "all the way to prosperity" in Chinese, is the URL of a major Chinese business portal (http://www.168.com).
Looking at the historical data analyzed in this study, license plates with the number 168 fetched an average price of US$10,094 and as much as $113,462 in one instance.
Combinations of numbers that rhyme with phrases possessing negative connotations are equally undesirable.
Plates with the number 888 are generally highly sought after, selling for an average of $4,105 in the data, but adding a 5 (rhymes with "no") in front drastically lowers the average to $342.
As these examples demonstrate, the value of a certain combination of characters depends on both the meaning of each individual character and the broader semantics.
The task at hand is thus closely related to sentiment analysis and machine translation, both of which have advanced significantly in recent years.
Using a deep recurrent neural network (RNN), I demonstrate that a good estimate of a license plate's price can be obtained.
The predictions from this study's deep RNN were significantly more accurate than previous attempts to model license plate prices, and are able to explain over 80 percent of price variations.
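A minimal sketch of a character-level RNN regressor of this kind is shown below; the embedding and hidden sizes, and the use of a single scalar output for the (log-)price, are assumptions rather than the study's exact configuration.

```python
import torch
import torch.nn as nn

class PlatePriceRNN(nn.Module):
    """Read the characters of a plate and regress a price from the final hidden state."""
    def __init__(self, vocab_size, emb_dim=32, hidden=128, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.LSTM(emb_dim, hidden, num_layers=num_layers, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, chars):              # chars: (batch, seq_len) integer-coded characters
        h, _ = self.rnn(self.embed(chars))
        return self.out(h[:, -1]).squeeze(-1)
```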
There are two immediate applications of the findings in this paper: first, an accurate prediction model facilitates arbitrage, allowing one to detect underpriced plates that can potentially fetch for a higher price in the active second-hand market.
Second, the feature vectors extracted from the last recurrent layer of the model can be used to construct a search engine for historical plate prices.
Among other uses, the search engine can provide highly-informative justification for the predicted price of any given plate.
In a more general sense, this study demonstrates the value of deep networks and NLP in making accurate price predictions, which is of practical importance in many industries and has led to a huge volume of research.
As detailed in the following review, studies to date have mostly relied on small, shallow networks.
The use of text data is also rare, despite the large amount of business text data available.
By demonstrating how a deep network can be trained to predict prices from sequential data, this study provides an approach that may improve prediction accuracy in many industrial applications.
| Predicting auction price of vehicle license plates in Hong Kong with deep recurrent neural network, based on the characters on the plates. | {"domains": ["artificial_intelligence"], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization"} | scitldr_aic:train:872 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We present a new latent model of natural images that can be learned on large-scale datasets.
The learning process provides a latent embedding for every image in the training dataset, as well as a deep convolutional network that maps the latent space to the image space.
After training, the new model provides a strong and universal image prior for a variety of image restoration tasks such as large-hole inpainting, superresolution, and colorization.
To model high-resolution natural images, our approach uses latent spaces of very high dimensionality (one to two orders of magnitude higher than previous latent image models).
To tackle this high dimensionality, we use latent spaces with a special manifold structure (convolutional manifolds) parameterized by a ConvNet of a certain architecture.
In the experiments, we compare the learned latent models with latent models learned by autoencoders, advanced variants of generative adversarial networks, and a strong baseline system using simpler parameterization of the latent space.
Our model outperforms the competing approaches over a range of restoration tasks.
Learning good image priors is one of the core problems of computer vision and machine learning.
One promising approach to obtaining such priors is to learn a deep latent model, where the set of natural images is parameterized by a certain simple-structured set or probabilistic distribution, whereas the complexity of natural images is tackled by a deep ConvNet (often called a generator or a decoder) that maps from the latent space into the space of images.
The best known examples are generative adversarial networks (GANs) (Goodfellow et al., 2014) and autoencoders BID4.
Given a good deep latent model, virtually any image restoration task can be solved by finding a latent representation that best corresponds to the image evidence (e.g. the known pixels of an occluded image or a low-resolution image).
The attractiveness of such an approach is in the universality of the learned image prior.
Indeed, applying the model to a new restoration task can be performed by simply changing the likelihood objective.
The same latent model can therefore be reused for multiple tasks, and the learning process need not know the image degradation process in advance.
This is in contrast to task-specific approaches that usually train deep feed-forward ConvNets for individual tasks, and which have a limited ability to generalize across tasks (e.g. a feed-forward network trained for denoising cannot perform large-hole inpainting and vice versa).
At the moment, such image restoration approaches based on latent models are limited to low-resolution images.
E.g. BID16 showed how a latent model trained with GAN can be used to perform inpainting of tightly-cropped 64 × 64 face images.
Below, we show that such models trained with GANs cannot generalize to higher resolution (even though GAN-based systems are now able to obtain high-quality samples at high resolutions BID9).
We argue that it is the limited dimensionality of the latent space in GANs and other existing latent models that precludes them from spanning the space of high-resolution natural images.
To scale up latent modeling to high-resolution images, we consider latent models with tens of thousands of latent dimensions (as compared to a few hundred latent dimensions in existing works).
We show that training such latent models is possible using direct optimization BID1 and that it leads to good image priors that can be used across a broad variety of reconstruction tasks.
In previous models, the latent space has a simple structure such as a sphere or a box in a Euclidean space, or a full Euclidean space with a Gaussian prior.
Such a choice, however, is not viable in our case, as vectors with tens of thousands of dimensions cannot be easily used as inputs to a generator.
(Figure 1: Restorations using the same Latent Convolutional Model (images 2, 4, 6) for different image degradations (images 1, 3, 5). At training time, our approach builds a latent model of non-degraded images, and at test time the restoration process simply finds a latent representation that maximizes the likelihood of the corrupted image and outputs a corresponding non-degraded image as a restoration result.)
Therefore, we consider two alternative parameterizations of a latent space.
Firstly, as a baseline, we consider latent spaces parameterized by image stacks (three-dimensional tensors), which allows us to have "fully-convolutional" generators with a reasonable number of parameters.
Our full system uses a more sophisticated parameterization of the latent space, which we call a convolutional manifold, where the elements of the manifold correspond to the parameter vector of a separate ConvNet.
Such indirect parameterization of images and image stacks has recently been shown to impose a certain prior BID15, which is beneficial for restoration of natural images.
In our case, we show that a similar prior can be used with success to parameterize high-dimensional latent spaces.
To sum up, our contributions are as follows.
Firstly, we consider the training of deep latent image models with latent dimensionality that is much higher than in previous works, and demonstrate that the resulting models provide universal (w.r.t. restoration tasks) image priors.
Secondly, we suggest and investigate the convolutional parameterization for the latent spaces of such models, and show the benefits of such parameterization.
Our experiments are performed on the CelebA BID11 (128x128 resolution), SUN Bedrooms BID17 (256x256 resolution), and CelebA-HQ BID9 (1024x1024 resolution) datasets, and we demonstrate that the latent models, once trained, can be applied to large-hole inpainting, superresolution of very small images, and colorization tasks, outperforming other latent models in our comparisons.
To the best of our knowledge, we are the first to demonstrate how "direct" latent modeling of natural images without extra components can be used to solve image restoration problems at these resolutions (Figure 1).
Other related work.
Deep latent models follow a long line of works on latent image models that goes back at least to the eigenfaces approach BID14.
In terms of restoration, a competing and more popular approach is feed-forward networks trained for specific restoration tasks, which have seen rapid progress recently.
Our approach does not quite match the quality of e.g. BID6, which is designed and trained specifically for the inpainting task, or the quality of e.g. BID18, which is designed and trained specifically for the face superresolution task.
Yet the models trained within our approach (like other latent models) are universal, as they can handle degradations unanticipated at training time.
Our work is also related to pre-deep-learning ("shallow") methods that learn priors on (potentially overlapping) image patches using maximum likelihood-type objectives such as BID12 BID8 BID21.
The use of multiple layers in our method allows it to capture much longer correlations.
As a result, our method can be used successfully to handle restoration tasks that require exploiting these correlations, such as large-hole inpainting.
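The restoration process sketched in the figure caption above amounts to optimizing a latent code against the degraded observation. A minimal version, assuming a pre-trained generator and a known, differentiable degradation operator, might look like the following; the optimizer, step count, and quadratic data term are illustrative choices.

```python
import torch

def restore(generator, degrade, observed, z_shape, steps=500, lr=0.05):
    """Find the latent code whose generated image best explains the degraded observation."""
    z = torch.zeros(z_shape, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # quadratic data term, e.g. masked pixels for inpainting or downsampling for SR
        loss = ((degrade(generator(z)) - observed) ** 2).mean()
        loss.backward()
        opt.step()
    return generator(z).detach()
```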
The results in this work suggest that high-dimensional latent spaces are necessary to get good image reconstructions on desired hold-out sets.
Further, it shows that parametrizing these spaces using ConvNets imposes further structure on them that allow us to produce good image restorations from a wide variety of degradations and at relatively high resolutions.
More generally, this method can easily be extended to come up with more interesting parametrizations of the latent space, e.g. by interleaving the layers with image-specific and dataset-specific parameters.The proposed approach has several limitations.
First, when trained over very large datasets, the LCM model requires long time to be trained till convergence.
For instance, training an LCM on 150k samples of CelebA at 128 × 128 resolution takes about 14 GPU-days.
Note that the GLO model of the same latent dimensionality takes about 10 GPU-days.
On the other hand, the universality of the models means that they only need to be trained once for a certain image type, and can be applied to any degradations after that.
The second limitation is that both LCM and GLO model require storing their latent representations in memory, which for large datasets and large latent spaces may pose a problem.
Furthermore, we observe that even with the large latent dimensionalities that we use here, the models are not able to fit the training data perfectly and thus suffer from a degree of underfitting.
Our model also assumes that the (log)-likelihood corresponding to the degradation process can be modeled and can be differentiated.
Experiments suggest, however, that such modeling need not be very accurate; e.g. a simple quadratic log-likelihood can be used to restore JPEG-degraded images (Appendix H).
Finally, our model requires lengthy optimization in latent space, rather than a feedforward pass, at test time.
The number of iterations, however, can be drastically reduced using degradation-specific or universal feed-forward encoders from image space to the latent space, which may provide a reasonable starting point for the optimization.
| We present a new deep latent model of natural images that can be trained from unlabeled datasets and can be utilized to solve various image restoration tasks. | {"domains": ["artificial_intelligence"], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization"} | scitldr_aic:train:873 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Automatic Essay Scoring (AES) has been an active research area, as it can greatly reduce the workload of teachers and prevent subjectivity bias.
Most recent AES solutions apply deep neural network (DNN)-based models with regression, where the neural-network-based encoder learns an essay representation that helps differentiate among the essays and the corresponding essay score is inferred by a regressor.
Such DNN approach usually requires a lot of expert-rated essays as training data in order to learn a good essay representation for accurate scoring.
However, such data is usually expensive and thus is sparse.
Inspired by the observation that human usually scores an essay by comparing it with some references, we propose a Siamese framework called Referee Network (RefNet) which allows the model to compare the quality of two essays by capturing the relative features that can differentiate the essay pair.
The proposed framework can be applied as an extension to regression models as it can capture additional relative features on top of internal information.
Moreover, it intrinsically augments the data by pairing and is thus ideal for handling data sparsity.
Experiment shows that our framework can significantly improve the existing regression models and achieve acceptable performance even when the training data is greatly reduced.
Automatic Essay Scoring (AES) is the technique to automatically score an essay over some specific marking scale.
AES has been an eye-catching problem in machine learning due to its promising application in education.
It can free tremendous amount of repetitive labour, boosting the efficiency of educators.
Apart from automation, computers also prevail over human beings in consistency, thus eliminating subjectivity and improving fairness in scoring.
Attempts in AES started as early as Project Essay Grade (PEG) (Page, 1967; 2003) , when the most prevalent methods relied on hand-crafted features engineered by human experts.
Recent advances in neural networks bring new possibilities to AES.
Several related works leveraged neural networks and achieved decent results (Dong et al., 2017; Taghipour & Ng, 2016; Tay et al., 2018; Liu et al., 2019) .
As is shown in Figure 1 , these approaches generally follow the 'representation + regression' scheme where a neural network reads in the text embeddings and generates a high level representation that will be fed to some regression model for a score.
However, such model requires a large amount of expert-rated essays for training.
In reality, collecting such dataset is expensive.
Therefore, data sparsity remains a knotty problem to be solved.
Inspired by the observation that human raters usually score an essay by comparing it to a set of references, we propose to leverage the pairwise comparisons for scoring instead of regression.
The goal of the model is shifted from predicting the score directly to comparing two essays, and the final score will be determined by comparing new essays with known samples.
In order to achieve this, we designed a Siamese network called Referee Network (RefNet) and corresponding scoring algorithms.
RefNet is a framework so that it can use various representation encoders as backbones.
What's more, though this model is designed to capture mutual features, it can also benefit from essay internal information via transfer learning.
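A minimal sketch of such a Siamese comparator, and of scoring by comparison against labelled anchors, is given below. The comparator head and the simple rank-based scoring rule are illustrative stand-ins, not the paper's exact Majority Probability Voting algorithm.

```python
import torch
import torch.nn as nn

class RefereeNet(nn.Module):
    """Given two essay inputs, predict the probability that the first one is the better essay."""
    def __init__(self, encoder, feat_dim):
        super().__init__()
        self.encoder = encoder            # any backbone producing a fixed-size representation
        self.compare = nn.Sequential(nn.Linear(2 * feat_dim, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, essay_a, essay_b):
        fa, fb = self.encoder(essay_a), self.encoder(essay_b)
        return torch.sigmoid(self.compare(torch.cat([fa, fb], dim=-1))).squeeze(-1)

def score_by_comparison(model, essay, anchors, anchor_scores):
    """Rank the essay by how many labelled anchors it beats, then read off a score at that rank."""
    with torch.no_grad():
        wins = sum(int(model(essay, a).item() > 0.5) for a in anchors)
    ranked = sorted(anchor_scores)
    return ranked[min(wins, len(ranked) - 1)]
```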
Scoring essays by comparison has various benefits.
First, RefNet is incredibly strong in dealing with the data sparsity problem.
Essays are paired with each other to form the training data for RefNet, which significantly augments the data size.
Experiments show that our model achieves acceptable performance even when the training data is radically reduced, while regression models are subject to drastic performance degradation.
Second, unlike end-to-end black-box models, our system scores an essay by comparing it with a set of labeled anchors, providing a certain degree of transparency during the inference process.
Last but not least, with information from both internal and mutual perspectives, RefNet can have better insight into the quality of essays.
Our contributions can be summarized as follows:
• We designed the Referee Network (RefNet), a simple but effective model to compare two essays, and a Majority Probability Voting algorithm to infer the score from pairwise comparison results.
To the best of our knowledge, it is the first time a Siamese neutral network is used in AES.
• Our model intrinsically solves the problem of data sparsity.
It achieves acceptable performance even when the training data is greatly reduced, while regression models are impaired a lot.
Its efficacy in few-shot learning makes it an ideal solution for real applications where labelled data is usually limited.
• RefNet exploits a new realm of information, mutual relationship, by pairwise comparison.
With transfer learning, it also leverages internal features captured by regression.
Moreover, RefNet can be applied as an extension to various regression models and consistently improve the performance.
In this paper we present Referee Network, a framework for automatic essay scoring using pairwise comparisons.
We demonstrate that RefNet is adept at solving data sparsity problems.
It can retain the performance at a high level even when the training data is significantly reduced, which outperforms regression models by a significant margin.
We also show that RefNet can improve conventional regression models by leveraging the additional mutual information between representations.
With only vanilla backbones, our model is able to obtain state-of-the-art results.
Even if the essay representations are fixed and have mediocre quality, our model can still boost up the scoring accuracy.
Furthermore, the capacity of RefNet can go far beyond this context as it is an extendable framework that can be used with any kind of representation encoders.
Besides the simple backbones we tried in this paper, one can by all means utilize more complicated and better performing models as backbones.
In this way, the performance of AES systems can always be pushed to a new record.
| Automatically score essays on sparse data by comparing new essays with known samples with Referee Network. | {"domains": ["artificial_intelligence"], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization"} | scitldr_aic:train:874 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
One of the fundamental tasks in understanding genomics is the problem of predicting Transcription Factor Binding Sites (TFBSs).
With more than hundreds of Transcription Factors (TFs) as labels, genomic-sequence based TFBS prediction is a challenging multi-label classification task.
There are two major biological mechanisms for TF binding: (1) sequence-specific binding patterns on genomes known as “motifs” and (2) interactions among TFs known as co-binding effects.
In this paper, we propose a novel deep architecture, the Prototype Matching Network (PMN) to mimic the TF binding mechanisms.
Our PMN model automatically extracts prototypes (“motif”-like features) for each TF through a novel prototype-matching loss.
Borrowing ideas from few-shot matching models, we use the notion of support set of prototypes and an LSTM to learn how TFs interact and bind to genomic sequences.
On a reference TFBS dataset with 2.1 million genomic sequences, PMN significantly outperforms baselines and validates our design choices empirically.
To our knowledge, this is the first deep learning architecture that introduces prototype learning and considers TF-TF interactions for large scale TFBS prediction.
Not only is the proposed architecture accurate, but it also models the underlying biology.
Genomic sequences build the basis of a large body of research on understanding the biological processes in living organisms.
Enabling machines to read and comprehend genomes is a longstanding and unfulfilled goal of computational biology.
One of the fundamental tasks in understanding genomes is the problem of predicting Transcription Factor Binding Sites (TFBSs), which has attracted much attention over the years BID5.
Transcription Factors (TFs) are proteins which bind (i.e., attach) to DNA and control whether a gene is expressed or not.
Patterns of how different genes are expressed or not expressed control many important biological phenomena, including diseases such as cancer.
Therefore accurate models for identifying and describing the binding sites of TFs are essential in understanding cells.
Owing to the development of chromatin immunoprecipitation and massively parallel DNA sequencing (ChIP-seq) technologies BID26, maps of genome-wide binding sites are currently available for multiple TFs in a few cell types across human and mouse genomes via the ENCODE BID5 database.
However, ChIP-seq experiments are slow and expensive; they have not been performed for many important cell types or organisms.
Therefore, computational methods to identify TFBS accurately remain essential for understanding the functioning and evolution of genomes.
An important feature of TFs is that they typically bind to sequence-specific patterns on genomes, known as "motifs" BID25.
Motifs are essentially a blueprint, or a "prototype" which a TF searches for in order to bind.
However, motifs are only one part in determining whether or not a TF will bind to specific locations.
If a TF binds in the absence of its motif, or it does not bind in the presence of its motif, then it is likely there are some external causes such as an interaction with another TF, known as co-binding effects in biology BID46 .
This indicates that when designing a genomic-sequence based TFBS predictor, we should consider two modeling challenges: (1) how to automatically extract "motifs"-like features and (2) how to model the co-binding patterns and consider such patterns in predicting TFBSs.
In this paper, we address both by proposing a novel deep-learning model: the prototype matching network (PMN).
To address the first challenge of motif learning and matching, many bioinformatics studies tried to predict TFBSs by constructing motifs using position weight matrices (PWMs) which best represented the positive binding sites.
To test a sequence for binding, the sequence is compared against the PWMs to see if there is a close match BID37.
PWM-matching was later outperformed by convolutional neural network (CNN) and CNN-variant models that can learn PWM-like filters BID0.
Different from basic CNNs, our proposed PMN is inspired by the idea of "prototype-matching" BID44 BID14.
These studies refer to the CNN type of model as the "feature-matching" mode of pattern recognition.
While pure feature matching has proven effective, studies have shown a "prototype effect" where objects are likely recognized as a whole using a similarity measure from a blurred prototype representation, and prototypes do not necessarily match the object precisely BID44.
It is plausible that humans use a combination of feature matching and prototype matching where feature-matching is used to construct a prototype for testing unseen samples BID14.
For TFBS prediction, the underlying biology evidently favors computational models that can learn "prototypes" (i.e. effective motifs).
Although motifs are indirectly learned in convolutional layers, existing deep learning studies of TFBS (details in Section 3) have not considered the angle of "motif-matching" using a similarity measure.
We, instead, propose a novel prototype-matching loss to learn prototype embeddings automatically for each TF involved in the data.
None of the previous deep-learning studies for TFBS predictions have considered tackling the second challenge of including the co-binding effects among TFs in data modeling.
From a machine learning angle, genomic-sequence based TFBS prediction is a multi-label sequence classification task.
Rather than learning a prediction model for each TF (i.e., each label) predicting if the TF will bind or not on an input, a joint model is ideal for outputting how a genomic sequence input is attached by a set of TFs (i.e., labels).
The so-called "co-binding effects" connect deeply to how to model the dependency and combinations of TFs (labels).
Multi-label classification is receiving increasing attention in deep learning BID9 BID47 (detailed review in Section 3).
Modeling the multi-label formulation for TFBS is an extremely challenging task because the number of labels (TFs) is in the hundreds to thousands (e.g. 1,391 TFs in BID41).
The classic solution for multi-label classification using the powerset idea (i.e., the set of all subsets of the label set) is clearly not feasible BID40.
Possible prior information about TF-TF interactions is unknown or limited in the biology literature.
To tackle these obstacles, our proposed model PMN borrows ideas from the memory network and attention literature.
BID43 proposed a "matching network" model where they train a differentiable nearest neighbor model to find the closest matching image from a support set for a new unseen image.
They use a CNN to extract features and then match those features against the support set images.
We replace this support set of images with a learned support set of prototypes from the large-scale training set of TFBS prediction, and we use this support set to match against a new test sample.
The key difference is that our PMN model is not for few-shot learning and we seek to learn the support set (prototypes).
BID43 uses an attentionLSTM to model how a test sample matches to different items in the support set through softmax-based attention.
Differently, we use what we call a combinationLSTM to model how the embedding of a test sample matches to a combination of relevant prototypes.
Using multiple "hops", the combinationLSTM updates the embedding of the input sequence by searching for which TFs (prototypes) are more relevant in the label combination.
Instead of explicitly modeling interactions among labels, we try to use the combinationLSTM to mimic the underlying biology.
The combinationLSTM tries to learn prototype embeddings and represent high-order label combinations through a weighted sum of prototype embeddings.
This weighted summation can model many "co-binding effects" reported in the biology literature BID46 (details in Section 2).
In summary, we propose a novel PMN model by combining few-shot matching and prototype feature learning.
To our knowledge, this is the first deep learning architecture to model TF-TF interactions in an end-to-end model.
In addition, this is also the first paper to introduce large scale prototype learning using a deep learning architecture.
On a reference TFBS dataset with 2.1 million genomic sequences, PMN significantly outperforms the state-of-the-art TFBS prediction baselines.
We validate the learned prototypes through an existing database about TF-TF interactions.
The TF groups obtained by clustering prototype embeddings evidently capture the "cooperative effects" that have not been modeled by previous TFBS prediction works.
(Figure: On the left is an overview of the model. The input sequence x is encoded as x̂ using f (a 3-layer CNN). x̂ is then matched against the learned prototypes using the combinationLSTM for K "hops" so that it can update its output based on TF interactions for this input sequence. The final output ŷ is based on a concatenation of the final updated sequence vector h_K from the LSTM and the final read vector r_K from the matching. On the right is a closer look at the internal aspects of the combinationLSTM.)
The main contributions of our model are:
• We propose a novel model by combining few-shot matching with large-scale prototype feature learning.
• We design a novel prototype-matching loss to learn "motif"-like features in deep learning, which is important for the TFBS prediction task.
• We extend matching models from the few-shot single-label task to a large-scale multi-label task for genomic sequence classification.
• We implement an attention LSTM module to model label interactions in a novel way.
• Our model favors design choices mimicking the underlying biological processes.
We think such modeling strategies are more fundamental especially on datasets from biology.
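A stripped-down sketch of the matching step described in the figure caption above is given below; the tensor shapes, the number of hops, and the way the read vector is fed back into the cell are simplifications of the actual combinationLSTM rather than a faithful reproduction.

```python
import torch
import torch.nn as nn

class PrototypeMatcher(nn.Module):
    """Attend over learned per-TF prototypes for K hops and emit multi-label binding logits."""
    def __init__(self, n_tfs, dim, hops=3):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(n_tfs, dim))   # one prototype per TF
        self.cell = nn.LSTMCell(dim, dim)
        self.hops = hops
        self.out = nn.Linear(2 * dim, n_tfs)

    def forward(self, x_emb):                 # x_emb: (batch, dim) from the CNN encoder f
        h, c = x_emb, torch.zeros_like(x_emb)
        read = torch.zeros_like(x_emb)
        for _ in range(self.hops):
            attn = torch.softmax(h @ self.prototypes.t(), dim=-1)  # relevance of each prototype
            read = attn @ self.prototypes                          # weighted prototype combination
            h, c = self.cell(x_emb + read, (h, c))
        return self.out(torch.cat([h, read], dim=-1))              # one binding logit per TF
```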
Sequence analysis plays an important role in the field of bioinformatics.
A prominent task is to understand how Transcription Factor proteins (TFs) bind to DNA.
Researchers in biology hypothesize that each TF searches for certain sequence patterns on genome to bind to, known as "motifs".
Accordingly we propose a novel prototype matching network (PMN) for learning motif-like prototype features.
On a support set of learned prototypes, we use a combinationLSTM for modeling label dependencies.
The combinationLSTM tries to learn and mimic the underlying biological effects among labels (e.g. co-binding).
Our results on a dataset of 2.1 million genomic strings show that the prototype matching model outperforms baseline variations not having prototype-matching or not using the combinationLSTM.
This empirically validates our design choices to favor those mimicking the underlying biological mechanisms.
Our PMN model is a general classification approach and not tied to the TFBS applications.
We show this generality by applying it on the MNIST dataset and obtain convincing results in Appendix Section 7.1.
MNIST differs from TFBS prediction in its smaller training size as well as in its multi-class properties.
We plan a few future directions to extend the PMN.
First, TFBSs vary across different cell types, cell stages and genomes.
Extending PMN for considering the knowledge transfer is especially important for unannotated cellular contexts (e.g., cell types of rare diseases or rare organisms).
Another direction is to add more domain-specific features.
While we show that using prototype matching and the combinationLSTM can help model TF combinations, there are additional raw feature extraction methods that we could add in order to obtain better representations of genomic sequences.
These include reverse complement sequence inputs or convolutional parameter sharing BID32, or an RNN to model lower-level spatial interactions BID28 among motifs.
| We combine the matching network framework for few shot learning into a large scale multi-label model for genomic sequence classification. | {"domains": ["artificial_intelligence"], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization"} | scitldr_aic:train:875 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Previous work shows that adversarially robust generalization requires larger sample complexity, and the same dataset, e.g., CIFAR-10, which enables good standard accuracy may not suffice to train robust models.
Since collecting new training data could be costly, we focus on better utilizing the given data by inducing the regions with high sample density in the feature space, which could lead to locally sufficient samples for robust learning.
We first formally show that the softmax cross-entropy (SCE) loss and its variants convey inappropriate supervisory signals, which encourage the learned feature points to spread over the space sparsely in training.
This inspires us to propose the Max-Mahalanobis center (MMC) loss to explicitly induce dense feature regions in order to benefit robustness.
Namely, the MMC loss encourages the model to concentrate on learning ordered and compact representations, which gather around the preset optimal centers for different classes.
We empirically demonstrate that applying the MMC loss can significantly improve robustness even under strong adaptive attacks, while keeping state-of-the-art accuracy on clean inputs with little extra computation compared to the SCE loss.
The deep neural networks (DNNs) trained by the softmax cross-entropy (SCE) loss have achieved state-of-the-art performance on various tasks (Goodfellow et al., 2016) .
However, in terms of robustness, the SCE loss is not sufficient to lead to satisfactory performance of the trained models.
It has been widely recognized that the DNNs trained by the SCE loss are vulnerable to adversarial attacks (Carlini & Wagner, 2017a; Goodfellow et al., 2015; Kurakin et al., 2017; Papernot et al., 2016) , where human imperceptible perturbations can be crafted to fool a high-performance network.
To improve the adversarial robustness of classifiers, various kinds of defenses have been proposed, but many of them are quickly shown to be ineffective against adaptive attacks, which are adapted to the specific details of the proposed defenses.
Besides, methods for verification and for training provably robust networks have been proposed (Dvijotham et al., 2018a;b; Hein & Andriushchenko, 2017).
While these methods are exciting, the verification process is often slow and not scalable.
Among the previously proposed defenses, the adversarial training (AT) methods can achieve state-of-the-art robustness under different adversarial settings (Zhang et al., 2019b).
These methods either directly impose the AT mechanism on the SCE loss or add additional regularizers.
Although the AT methods are relatively strong, they could sacrifice accuracy on clean inputs and are computationally expensive (Xie et al., 2019) .
Due to the computational obstruction, many recent efforts have been devoted to proposing faster verification methods Xiao et al., 2019) and accelerating AT procedures (Shafahi et al., 2019; Zhang et al., 2019a) .
However, the problem still remains.
show that the sample complexity of robust learning can be significantly larger than that of standard learning.
Given the difficulty of training robust classifiers in practice, they further postulate that the difficulty could stem from the insufficiency of training samples in the commonly used datasets, e.g., CIFAR-10 (Krizhevsky & Hinton, 2009) .
Recent work intends to solve this problem by utilizing extra unlabeled data (Carmon et al., 2019; Stanforth et al., 2019) , while we focus on the complementary strategy to exploit the labeled data in hand more efficiently.
Note that although the samples in the input space are unchangeable, we could instead manipulate the local sample distribution, i.e., sample density in the feature space via appropriate training objectives.
Intuitively, by inducing high-density feature regions, there would be locally sufficient samples to train robust classifiers and return reliable predictions .
!""# ∈ %&, %& + ∆% (low sample density) !""# ∈ %*, %* + ∆% (high sample density) + , * !
.#/ ∈ %&, %& + ∆% (medium sample density) !.#/ ∈ %*, %* + ∆% (medium sample density)
In this paper, we formally demonstrate that applying the softmax function in training could potentially lead to unexpected supervisory signals.
To solve this problem, we propose the MMC loss to learn more structured representations and induce high-density regions in the feature space.
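Illustrative sketch only (not taken from the paper): a generic center-based loss that pulls each feature vector toward a preset class center, which is the broad idea behind inducing dense per-class feature regions; the random-orthogonal center construction, radius, and tensor shapes below are arbitrary stand-ins rather than the paper's Max-Mahalanobis construction.

```python
import numpy as np

def preset_centers(num_classes, dim, radius=10.0, seed=0):
    # Stand-in for preset, well-separated class centers: random orthonormal
    # directions scaled to a fixed radius (a placeholder, not the specific
    # Max-Mahalanobis construction).
    rng = np.random.default_rng(seed)
    q, _ = np.linalg.qr(rng.normal(size=(dim, num_classes)))
    return radius * q.T                       # (num_classes, dim)

def center_loss(features, labels, centers):
    # Squared distance from each feature vector to its class center,
    # encouraging compact, high-density feature regions per class.
    diffs = features - centers[labels]        # (batch, dim)
    return 0.5 * np.mean(np.sum(diffs ** 2, axis=1))

centers = preset_centers(num_classes=10, dim=64)
feats = np.random.default_rng(1).normal(size=(32, 64))
labels = np.random.default_rng(2).integers(0, 10, size=32)
print(center_loss(feats, labels, centers))
```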
In our experiments, we empirically demonstrate several favorable merits of our method:
(i) Lead to reliable robustness even under strong adaptive attacks in different threat models;
(ii) Keep high performance on clean inputs, comparable to SCE;
(iii) Introduce little extra computation compared to the SCE loss;
(iv) Compatible with the existing defense mechanisms, e.g., the AT methods.
Our analyses in this paper also provide useful insights for future work on designing new objectives beyond the SCE framework. | Applying the softmax function in training leads to indirect and unexpected supervision on features. We propose a new training objective to explicitly induce dense feature regions for locally sufficient samples to benefit adversarial robustness. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:876 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We present an analytic framework to visualize and understand GANs at the unit-, object-, and scene-level.
We first identify a group of interpretable units that are closely related to object concepts with a segmentation-based network dissection method.
Then, we examine the causal effect of interpretable units by measuring the ability of interventions to control objects in the output.
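Illustrative sketch only (not the paper's code): a toy version of such an intervention test, ablating a set of candidate units and comparing how much of a concept a segmenter reports before and after; the generator and segmenter here are trivial stand-ins and the unit indices are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate(feature_map):
    # Trivial stand-in for the remaining generator layers.
    return feature_map.sum(axis=1)

def segment(image, threshold=0.0):
    # Trivial stand-in for a semantic segmenter: boolean mask of one concept.
    return image > threshold

def ablate(feature_map, unit_ids):
    edited = feature_map.copy()
    edited[:, unit_ids, :, :] = 0.0           # zero out the candidate units
    return edited

features = rng.normal(size=(4, 16, 8, 8))     # (batch, units, height, width)
units = [2, 5, 11]
before = segment(generate(features)).mean()
after = segment(generate(ablate(features, units))).mean()
print("average causal effect of ablating the units:", before - after)
```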
Finally, we examine the contextual relationship between these units and their surroundings by inserting the discovered object concepts into new images.
We show several practical applications enabled by our framework, from comparing internal representations across different layers and models, to improving GANs by locating and removing artifact-causing units, to interactively manipulating objects in the scene. | GAN representations are examined in detail, and sets of representation units are found that control the generation of semantic concepts in the output. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:877 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We present a simple nearest-neighbor (NN) approach that synthesizes high-frequency photorealistic images from an ``incomplete'' signal such as a low-resolution image, a surface normal map, or edges.
Current state-of-the-art deep generative models designed for such conditional image synthesis lack two important things: (1) they are unable to generate a large set of diverse outputs, due to the mode collapse problem.
(2) they are not interpretable, making it difficult to control the synthesized output.
We demonstrate that NN approaches potentially address such limitations, but suffer in accuracy on small datasets.
We design a simple pipeline that combines the best of both worlds: the first stage uses a convolutional neural network (CNN) to map the input to a (overly-smoothed) image, and the second stage uses a pixel-wise nearest neighbor method to map the smoothed output to multiple high-quality, high-frequency outputs in a controllable manner.
Importantly, pixel-wise matching allows our method to compose novel high-frequency content by cutting-and-pasting pixels from different training exemplars.
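Illustrative sketch only (not the paper's implementation): a brute-force pixel-wise nearest-neighbor composition in NumPy, matching each pixel of a smoothed prediction against exemplar pixels and copying the corresponding target pixels; matching on raw color vectors and the tiny image sizes are simplifications.

```python
import numpy as np

def pixelwise_nn_compose(smooth_pred, exemplar_smooth, exemplar_target):
    # For every pixel of the (overly smoothed) CNN prediction, find the most
    # similar pixel across all exemplar smooth images and copy the matching
    # high-frequency exemplar pixel: a "cut and paste" composition.
    h, w, c = smooth_pred.shape
    query = smooth_pred.reshape(-1, c)                          # (h*w, c)
    keys = exemplar_smooth.reshape(-1, c)                       # (n*h*w, c)
    values = exemplar_target.reshape(-1, exemplar_target.shape[-1])
    d2 = ((query[:, None, :] - keys[None, :, :]) ** 2).sum(-1)  # brute force
    nearest = d2.argmin(axis=1)
    return values[nearest].reshape(h, w, -1)

rng = np.random.default_rng(0)
pred = rng.random((16, 16, 3))
ex_smooth = rng.random((4, 16, 16, 3))    # 4 exemplar images
ex_target = rng.random((4, 16, 16, 3))
print(pixelwise_nn_compose(pred, ex_smooth, ex_target).shape)
```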
We demonstrate our approach for various input modalities, and for various domains ranging from human faces, pets, shoes, and handbags.
We consider the task of generating high-resolution photo-realistic images from incomplete input such as a low-resolution image, sketches, surface normal map, or label mask.
Such a task has a number of practical applications such as upsampling/colorizing legacy footage, texture synthesis for graphics applications, and semantic image understanding for vision through analysis-by-synthesis.
These problems share a common underlying structure: a human/machine is given a signal that is missing considerable details, and the task is to reconstruct plausible details. Consider the edge map of a cat in Figure 1-c.
When we humans look at this edge map, we can easily imagine multiple variations of whiskers, eyes, and stripes that could be viable and pleasing to the eye.
Indeed, the task of image synthesis has been well explored, not just for its practical applications but also for its aesthetic appeal. GANs: Current state-of-the-art approaches rely on generative adversarial networks (GANs) BID17 and, most relevant to us, conditional GANs that generate an image conditioned on an input signal BID9 BID36 BID23 .
We argue that there are two prominent limitations to such popular formalisms: (1) First and foremost, humans can imagine multiple plausible output images given a incomplete input.
We see this rich space of potential outputs as a vital part of the human capacity to imagine and generate.
Conditional GANs are in principle able to generate multiple outputs through the injection of noise, but in practice suffer from limited diversity (i.e., mode collapse) FIG1 .
Recent approaches even remove the noise altogether, treating conditional image synthesis as regression problem BID5 .
(2) Deep networks are still difficult to explain or interpret, making the synthesized output difficult to modify.
One implication is that users are not able to control the synthesized output.
Moreover, the right mechanism for even specifying user constraints (e.g., "generate a cat image that looks like my cat") is unclear.
This restricts applicability, particularly for graphics tasks.
We present a simple approach to image synthesis based on compositional nearest-neighbors.
Our approach somewhat suggests that GANs themselves may operate in a compositional "copy-andpaste" fashion.
Indeed, examining the impressive outputs of recent synthesis methods suggests that some amount of local memorization is happening.
However, by making this process explicit, our system is able to naturally generate multiple outputs, while being interpretable and amenable to user constraints.
An interesting byproduct of our approach is dense pixel-level correspondences.
If training images are augmented with semantic label masks, these labels can be transfered using our correspondences, implying that our approach may also be useful for image analysis through label transfer BID30 . | Pixel-wise nearest neighbors used for generating multiple images from incomplete priors such as a low-res images, surface normals, edges etc. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:878 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Neural networks are vulnerable to adversarial examples and researchers have proposed many heuristic attack and defense mechanisms.
We address this problem through the principled lens of distributionally robust optimization, which guarantees performance under adversarial input perturbations.
By considering a Lagrangian penalty formulation of perturbing the underlying data distribution in a Wasserstein ball, we provide a training procedure that augments model parameter updates with worst-case perturbations of training data.
For smooth losses, our procedure provably achieves moderate levels of robustness with little computational or statistical cost relative to empirical risk minimization.
Furthermore, our statistical guarantees allow us to efficiently certify robustness for the population loss.
For imperceptible perturbations, our method matches or outperforms heuristic approaches.
Consider the classical supervised learning problem, in which we minimize an expected loss E_{P_0}[ℓ(θ; Z)] over a parameter θ ∈ Θ, where Z ∼ P_0, P_0 is a distribution on a space Z, and ℓ is a loss function.
In many systems, robustness to changes in the data-generating distribution P 0 is desirable, whether they be from covariate shifts, changes in the underlying domain BID2 , or adversarial attacks BID22 BID29 .
As deep networks become prevalent in modern performance-critical systems (perception for self-driving cars, automated detection of tumors), model failure is increasingly costly; in these situations, it is irresponsible to deploy models whose robustness and failure modes we do not understand or cannot certify.Recent work shows that neural networks are vulnerable to adversarial examples; seemingly imperceptible perturbations to data can lead to misbehavior of the model, such as misclassification of the output BID22 BID40 BID29 BID36 .
Consequently, researchers have proposed adversarial attack and defense mechanisms BID41 BID53 BID47 BID12 BID23 BID33 BID51 .
These works provide an initial foundation for adversarial training, but it is challenging to rigorously identify the classes of attacks against which they can defend (or if they exist).
Alternative approaches that provide formal verification of deep networks BID24 BID26 are NP-hard in general; they require prohibitive computational expense even on small networks.
Recently, researchers have proposed convex relaxations of the NP-hard verification problem with some success BID28 BID45 , though they may be difficult to scale to large networks.
In this context, our work is situated between these agendas: we develop efficient procedures with rigorous guarantees for small to moderate amounts of robustness.We take the perspective of distributionally robust optimization and provide an adversarial training procedure with provable guarantees on its computational and statistical performance.
We postulate a class P of distributions around the data-generating distribution P_0 and consider the problem
minimize_{θ∈Θ} sup_{P∈P} E_P[ℓ(θ; Z)].   (1)
The choice of P influences robustness guarantees and computability; we develop robustness sets P with computationally efficient relaxations that apply even when the loss ℓ is non-convex.
We provide an adversarial training procedure that, for smooth ℓ, enjoys convergence guarantees similar to non-robust approaches while certifying performance even for the worst-case population loss sup_{P∈P} E_P[ℓ(θ; Z)].
On a simple implementation in Tensorflow, our method takes 5-10× as long as stochastic gradient methods for empirical risk minimization (ERM), matching runtimes for other adversarial training procedures BID22 BID29 BID33 .
We show that our procedure-which learns to protect against adversarial perturbations in the training dataset-generalizes, allowing us to train a model that prevents attacks to the test dataset.We briefly overview our approach.
Let c : Z × Z → R_+ ∪ {∞}, where c(z, z_0) is the "cost" for an adversary to perturb z_0 to z (we typically use c(z, z_0) = ‖z − z_0‖_p^2 with p ≥ 1).
We consider the robustness region P = {P : W c (P, P 0 ) ≤ ρ}, a ρ-neighborhood of the distribution P 0 under the Wasserstein metric W c (·, ·) (see Section 2 for a formal definition).
For deep networks and other complex models, this formulation of problem (1) is intractable with arbitrary ρ.
Instead, we consider its Lagrangian relaxation for a fixed penalty parameter γ ≥ 0, resulting in the reformulation
minimize_{θ∈Θ} F(θ) := sup_P { E_P[ℓ(θ; Z)] − γ W_c(P, P_0) } = E_{P_0}[φ_γ(θ; Z)],   (2a)
where φ_γ(θ; z_0) := sup_{z∈Z} { ℓ(θ; z) − γ c(z, z_0) }.   (2b)
(See Proposition 1 for a rigorous statement of these equalities.)
Here, we have replaced the usual loss ℓ(θ; Z) by the robust surrogate φ_γ(θ; Z); this surrogate (2b) allows adversarial perturbations of the data z, modulated by the penalty γ.
We typically solve the penalty problem (2) with P_0 replaced by the empirical distribution P_n, as P_0 is unknown (we refer to this as the penalty problem below).
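Illustrative sketch only (not the paper's code): the inner maximization suggested by the surrogate (2b), approximated by gradient ascent on z starting from the clean point under a squared-distance cost; the toy loss gradient, step size, and iteration count below are arbitrary choices.

```python
import numpy as np

def wrm_perturb(grad_loss_z, z0, gamma, steps=15, lr=0.1):
    # Approximately solve  max_z  loss(z) - gamma * c(z, z0)  with
    # c(z, z0) = 0.5 * ||z - z0||^2, by gradient ascent from the clean point.
    # For smooth losses and large enough gamma this inner problem is strongly
    # concave, so a few ascent steps suffice.
    z = z0.copy()
    for _ in range(steps):
        z += lr * (grad_loss_z(z) - gamma * (z - z0))
    return z

# Toy smooth loss  loss(z) = sum(sin(z)),  so its gradient is cos(z).
grad = np.cos
z0 = np.zeros(5)
z_adv = wrm_perturb(grad, z0, gamma=2.0)
print(z_adv)
```

A parameter update would then take a stochastic gradient step of the loss evaluated at the returned point, which is one way to read the abstract's "augments model parameter updates with worst-case perturbations of training data".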
The key feature of the penalty problem (2) is that moderate levels of robustness, in particular defense against imperceptible adversarial perturbations, are achievable at essentially no computational or statistical cost for smooth losses ℓ.
Specifically, for large enough penalty γ (by duality, small enough robustness ρ), the function z ↦ ℓ(θ; z) − γc(z, z_0) in the robust surrogate (2b) is strongly concave and hence easy to optimize if ℓ(θ, z) is smooth in z.
Consequently, stochastic gradient methods applied to problem (2) have similar convergence guarantees as for non-robust methods (ERM).
In Section 3, we provide a certificate of robustness for any ρ; we give an efficiently computable data-dependent upper bound on the worst-case loss sup_{P: W_c(P,P_0)≤ρ} E_P[ℓ(θ; Z)].
That is, the
worst-case performance of the output of our principled adversarial training procedure is guaranteed to be no worse than this certificate. Our bound is
tight when ρ = ρ n , the achieved robustness for the empirical objective. These results
suggest advantages of networks with smooth activations rather than ReLU's. We experimentally
verify our results in Section 4 and show that we match or achieve state-of-the-art performance on a variety of adversarial attacks.
Robust optimization and adversarial training: The standard robust-optimization approach minimizes losses of the form sup_{u∈U} ℓ(θ; z + u) for some uncertainty set U BID46 BID3 BID54 .
approach is intractable except for specially structured losses, such as the composition of a linear and simple convex function BID3 BID54 BID55 . Nevertheless, this
robust approach underlies recent advances in adversarial training BID49 BID22 BID42 BID12 BID33 , which heuristically perturb data during a stochastic optimization procedure.One such heuristic uses a locally linearized loss function (proposed with p = ∞ as the "fast gradient sign method" BID22 ): DISPLAYFORM2 One form of adversarial training trains on the losses (θ; (x i + ∆ xi (θ), y i )) BID22 BID29 , while others perform iterated variants BID42 BID12 BID33 BID51 . BID33 observe that
these procedures attempt to optimize the objective E P0 [sup u p ≤ (θ; Z + u)], a constrained version of the penalty problem (2). This notion of robustness
is typically intractable: the inner supremum is generally non-concave in u, so it is unclear whether model-fitting with these techniques converges, and there are possibly worst-case perturbations these techniques do not find. Indeed, it is NP-hard to
find worst-case perturbations when deep networks use ReLU activations, suggesting difficulties for fast and iterated heuristics (see Lemma 2 in Appendix B). Smoothness, which can be
obtained in standard deep architectures with exponential linear units (ELU's) BID15 , allows us to find Lagrangian worst-case perturbations with low computational cost.Distributionally robust optimization To situate the current work, we review some of the substantial body of work on robustness and learning. The choice of P in the robust
objective (1) affects both the richness of the uncertainty set we wish to consider as well as the tractability of the resulting optimization problem. Previous approaches to distributional
robustness have considered finitedimensional parametrizations for P, such as constraint sets for moments, support, or directional deviations BID13 BID16 BID21 , as well as non-parametric distances for probability measures such as f -divergences BID4 BID5 BID30 BID34 , and Wasserstein distances BID48 BID7 . In constrast to f -divergences (e.g.
χ 2 -or Kullback-Leibler divergences) which are effective when the support of the distribution P 0 is fixed, a Wasserstein ball around P 0 includes distributions Q with different support and allows (in a sense) robustness to unseen data.Many authors have studied tractable classes of uncertainty sets P and losses . For example, BID4 and BID38 use convex
optimization approaches for fdivergence balls. For worst-case regions P formed by Wasserstein
balls, , BID48 , and BID7 show how to convert the saddle-point problem (1) to a regularized ERM problem, but this is possible only for a limited class of convex losses and costs c. In this work, we treat a much larger class of
losses and costs and provide direct solution methods for a Lagrangian relaxation of the saddle-point problem (1). One natural application area is in domain adaptation
BID31 ; concurrently with this work, Lee & Raginsky provide guarantees similar to ours for the empirical minimizer of the robust saddle-point problem (1) and give specialized bounds for domain adaptation problems. In contrast, our approach is to use the distributionally
robust approach to both defend against imperceptible adversarial perturbations and develop efficient optimization procedures.
Explicit distributional robustness of the form (5) is intractable except in limited cases.
We provide a principled method for efficiently guaranteeing distributional robustness with a simple form of adversarial data perturbation.
Using only assumptions about the smoothness of the loss function , we prove that our method enjoys strong statistical guarantees and fast optimization rates for a large class of problems.
The NP-hardness of certifying robustness for ReLU networks, coupled with our empirical success and theoretical certificates for smooth networks in deep learning, suggest that using smooth networks may be preferable if we wish to guarantee robustness.
Empirical evaluations indicate that our methods are in fact robust to perturbations in the data, and they match or outperform less-principled adversarial training techniques.
The major benefit of our approach is its simplicity and wide applicability across many models and machine-learning scenarios.There remain many avenues for future investigation.
Our optimization result (Theorem 2) applies only for small values of robustness ρ and to a limited class of Wasserstein costs.
Furthermore, our statistical guarantees (Theorems 3 and 4) use ‖·‖_∞-covering numbers as a measure of model complexity, which can become prohibitively large for deep networks.
In a learning-theoretic context, where the goal is to provide insight into convergence behavior as well as comfort that a procedure will "work" given enough data, such guarantees are satisfactory, but this may not be enough in security-essential contexts.
This problem currently persists for most learning-theoretic guarantees in deep learning, and the recent works of BID1 , BID18 , and BID39 attempt to mitigate this shortcoming.
Replacing our current covering number arguments with more intricate notions such as margin-based bounds BID1 would extend the scope and usefulness of our theoretical guarantees.
Of course, the certificate (15) still holds regardless.More broadly, this work focuses on small-perturbation attacks, and our theoretical guarantees show that it is possible to efficiently build models that provably guard against such attacks.
Our method becomes another heuristic for protection against attacks with large adversarial budgets.
Indeed, in the large-perturbation regime, efficiently training certifiably secure systems remains an important open question.
We believe that conventional ‖·‖_∞-defense heuristics developed for image classification do not offer much comfort in the large-perturbation/perceptible-attack setting: ‖·‖_∞-attacks with a large budget can render images indiscernible to human eyes, while, for example, ‖·‖_1-attacks allow a concerted perturbation to critical regions of the image.
Certainly, ‖·‖_∞ attack and defense models have been fruitful in building a foundation for security research in deep learning, but moving beyond them may be necessary for more advances in the large-perturbation regime.
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:879 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Neural Style Transfer has become a popular technique for generating images of distinct artistic styles using convolutional neural networks.
This recent success in image style transfer has raised the question of whether similar methods can be leveraged to alter the “style” of musical audio.
In this work, we attempt long time-scale high-quality audio transfer and texture synthesis in the time-domain that captures harmonic, rhythmic, and timbral elements related to musical style, using examples that may have different lengths and musical keys.
We demonstrate the ability to use randomly initialized convolutional neural networks to transfer these aspects of musical style from one piece onto another using 3 different representations of audio: the log-magnitude of the Short Time Fourier Transform (STFT), the Mel spectrogram, and the Constant-Q Transform spectrogram.
We propose using these representations as a way of generating and modifying perceptually significant characteristics of musical audio content.
We demonstrate each representation's shortcomings and advantages over others by carefully designing neural network structures that complement the nature of musical audio.
Finally, we show that the most compelling “style” transfer examples make use of an ensemble of these representations to help capture the varying desired characteristics of audio signals.
The problem we seek to explore in this paper is the transfer of artistic "style" from one musical audio example onto another.
The definition and perception of an artistic style in visual art images (e.g., impressionist, pointilist, cubist) shown in Figure 1 is perhaps more straightforward than in the case musical audio.
For images, a successful style transfer algorithm is capable of generating a novel image whose content information, or what is in the image, is matched as well as its stylistic information, or the artistic approach.
In other words, it explores the question, "What would a rendering of scene A by artist B look like?"
Figure 1 : Demonstration of image style transfer courtesy of BID7 .For
our work, we similarly set out to develop an algorithm that explores the question, "What would it sound like if a musical piece by ensemble/artist A was performed by ensemble/artist B?" It
should be noted that we do not approach the problem according to strict musicological definitions (e.g., melodic, harmonic, rhythmic, and structural elements), as one might proceed if given the musical notation of a composition. We
do not presume access to the notation or any music theoretic analysis of a piece. We
are instead interested in transferring the acoustic features related to harmonic, rhythmic, and timbral aspects of one musical piece onto another. Therefore
, for the single instance "style" transfer algorithm we propose in this work, it is more accurate to pose the question as "What would a rendering of musical piece A (by artist A) using the harmonic and rhythmic patterns of piece B (by artist B) sound like?" In this
paper, we define musical "style" transfer according to this type of audio content transformation, and will henceforth drop the use of quotation marks around "style". In texture
generation, we instead ask "What would it sound like for a source musical piece to contain the same musical patterns and higher-order statistics without any of the same local, event-based information?" This can be
achieved in the image or audio domain by only optimizing those terms of the loss function of a transfer algorithm associated with style, and not using any loss term associated with content.Currently, there are two types of approaches to image style transfer. The first method
uses a learned generative model to manipulate the representation of the data such that it maintains its original content rendered into a new style. The second class
of methods, which we investigate and apply in this paper, are concerned with synthesizing new data that matches the representations of data in a learned model in some specific way. Measuring the accuracy
of such algorithms' abilities to transfer style is difficult, since most data is not able to be entirely disentangled into separate content and style components. This is especially true
for musical style.There have been attempts for learning representations of musical style include the use of generative models which use a MIDI representation of audio BID14 . The advantages of using
this representation are the ability to focus solely on a highly understandable representation of musical information in its harmonic and rhythmic components, but lacks the ability to capture other important sonic information like timbre.Our approach utilizes many interesting findings from recent research in image style transfer. We suggest that it is possible
to use the same style transfer algorithm used for images for musical audio, but best performance requires a careful selection of how content and style is represented, given the task. FIG0 shows a spectral visualization
of how a style transfer result contains both local, event based information from the content piece, while also having the characteristic nature of the style signal, as there is clearly more energy in the higher frequencies. However, it is important to note that
despite this visualization in the log-magnitude STFT representation, the audio is ultimately synthesized in the time-domain.
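Illustrative sketch only (not from the paper): one common way to express a spectral "style" statistic, Gram matrices of random-filter responses over a log-magnitude STFT; the window size, hop, and filter shapes are arbitrary assumptions here, and the Mel/CQT branches and the time-domain synthesis step are omitted.

```python
import numpy as np

def log_magnitude_stft(x, win=256, hop=64):
    # Simple log-magnitude STFT with a Hann window.
    w = np.hanning(win)
    frames = np.stack([x[i:i + win] * w for i in range(0, len(x) - win, hop)])
    return np.log1p(np.abs(np.fft.rfft(frames, axis=1)))   # (frames, bins)

def gram(features):
    # Channel-by-channel correlations, the usual "style" statistic.
    f = features.reshape(features.shape[0], -1)
    return f @ f.T / f.shape[1]

rng = np.random.default_rng(0)
content = rng.standard_normal(8000)
style = rng.standard_normal(8000)
filters = rng.standard_normal((16, 129))   # random (untrained) frequency filters
feat = lambda x: filters @ log_magnitude_stft(x).T
style_loss = np.mean((gram(feat(content)) - gram(feat(style))) ** 2)
print(style_loss)
```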
We introduce several improvements for performing musical style transfer on raw audio through the utilization of multiple audio representations.
Our contributions can be summarized as follows: First, we have demonstrated that using additional representations of Mel and CQT spectrograms with accompanying neural structure improve in many cases the capture of musically meaningful style information.
Secondly, we have proposed a novel, key-invariant content representation for musical audio.
Finally, we have shown that despite using log-magnitude spectrograms to capture the content and style information, we are still able to synthesize a target audio waveform in the time domain using backpropagation through the STFT.
While our proposed content representations work for audio in different keys, there is still no representation for tempo invariance.
Other future work may include using learned generative models to perform musical style transfer and trying to perform style transfer entirely in the time-domain.
This or the use of complex weights may be able to help improve representation of phase information in neural representations. | We present a long time-scale musical audio style transfer algorithm which synthesizes audio in the time-domain, but uses Time-Frequency representations of audio. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:88 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Amortized inference has led to efficient approximate inference for large datasets.
The quality of posterior inference is largely determined by two factors:
a) the ability of the variational distribution to model the true posterior and
b) the capacity of the recognition network to generalize inference over all datapoints.
We analyze approximate inference in variational autoencoders in terms of these factors.
We find that suboptimal inference is often due to amortizing inference rather than the limited complexity of the approximating distribution.
We show that this is due partly to the generator learning to accommodate the choice of approximation.
Furthermore, we show that the parameters used to increase the expressiveness of the approximation play a role in generalizing inference rather than simply improving the complexity of the approximation.
There has been significant work on improving inference in variational autoencoders (VAEs) BID13 BID22 through the development of expressive approximate posteriors BID21 BID14 BID20 BID27 .
These works have shown that with more expressive approximate posteriors, the model learns a better distribution over the data.In this paper, we analyze inference suboptimality in VAEs: the mismatch between the true and approximate posterior.
In other words, we are interested in understanding what factors cause the gap between the marginal log-likelihood and the evidence lower bound (ELBO).
We refer to this as the inference gap.
Moreover, we break down the inference gap into two components: the approximation gap and the amortization gap.
The approximation gap comes from the inability of the approximate distribution family to exactly match the true posterior.
The amortization gap refers to the difference caused by amortizing the variational parameters over the entire training set, instead of optimizing for each datapoint independently.
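Illustrative sketch only (not from the paper): the gap decomposition written out in a few lines of Python; the numbers below are made up and only show how the approximation, amortization, and inference gaps relate.

```python
def inference_gaps(log_px, elbo_amortized, elbo_optimal_q):
    # log_px          : marginal log-likelihood log p(x) (or a tight bound on it)
    # elbo_amortized  : ELBO using the encoder's amortized q(z|x)
    # elbo_optimal_q  : ELBO using the best q in the variational family for this x
    approximation_gap = log_px - elbo_optimal_q
    amortization_gap = elbo_optimal_q - elbo_amortized
    inference_gap = approximation_gap + amortization_gap
    return approximation_gap, amortization_gap, inference_gap

print(inference_gaps(log_px=-90.0, elbo_amortized=-95.5, elbo_optimal_q=-92.0))
```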
We refer the reader to Table 1 for detailed definitions and FIG0 for a simple illustration of the gaps.
In FIG0 , L[q] refers to the ELBO using an amortized distribution q, whereas q * is the optimal q within its variational family.
Our experiments investigate how the choice of encoder, posterior approximation, decoder, and model optimization affect the approximation and amortization gaps.We train VAE models in a number of settings on the MNIST, Fashion-MNIST BID30 , and CIFAR-10 datasets.Our contributions are:
a) we investigate inference suboptimality in terms of the approximation and amortization gaps, providing insight to guide future improvements in VAE inference,
b) we quantitatively demonstrate that the learned true posterior accommodates the choice of approximation, and
c) we demonstrate that using parameterized functions to improve the expressiveness of the approximation plays a large role in reducing error caused by amortization.
Table 1 : Summary of Gap Terms.
The middle column refers to the general case where our variational objective is a lower bound on the marginal log-likelihood (not necessarily the ELBO).
The right most column demonstrates the specific case in VAEs.
q * (z|x) refers to the optimal approximation within a family Q, i.e. q * (z|x) = arg min q∈Q KL (q(z|x)||p(z|x)).
In this paper, we investigated how encoder capacity, approximation choice, decoder capacity, and model optimization influence inference suboptimality in terms of the approximation and amortization gaps.
We found that the amortization gap is often the leading source of inference suboptimality and that the generator reduces the approximation gap by learning a true posterior that fits to the choice of approximate distribution.
We showed that the parameters used to increase the expressiveness of the approximation play a role in generalizing inference rather than simply improving the complexity of the approximation.
We confirmed that increasing the capacity of the encoder reduces the amortization error.
We also showed that optimization techniques, such as entropy annealing, help the generative model to better utilize the flexibility of the expressive variational distribution.
Computing these gaps can be useful for guiding improvements to inference in VAEs.
Future work includes evaluating other types of expressive approximations and more complex likelihood functions.
The VAE model of FIG1 uses a decoder p(x|z) with architecture: 2 − 100 − 784, and an encoder q(z|x) with architecture: 784 − 100 − 4.
We use tanh activations and a batch size of 50.
The model is trained for 3000 epochs with a learning rate of 10 −4 using the ADAM optimizer BID12 . | We decompose the gap between the marginal log-likelihood and the evidence lower bound and study the effect of the approximate posterior on the true posterior distribution in VAEs. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:880 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
In this paper, we propose a framework that leverages semi-supervised models to improve unsupervised clustering performance.
To leverage semi-supervised models, we first need to automatically generate labels, called pseudo-labels.
We find that prior approaches for generating pseudo-labels hurt clustering performance because of their low accuracy.
Instead, we use an ensemble of deep networks to construct a similarity graph, from which we extract high accuracy pseudo-labels.
The approach of finding high quality pseudo-labels using ensembles and training the semi-supervised model is iterated, yielding continued improvement.
We show that our approach outperforms state of the art clustering results for multiple image and text datasets.
For example, we achieve 54.6% accuracy for CIFAR-10 and 43.9% for 20news, outperforming state of the art by 8-12% in absolute terms.
Semi-supervised methods, which make use of large unlabelled data sets and a small labelled data set, have seen recent success, e.g., ladder networks Rasmus et al. (2015) achieves 99% accuracy in MNIST using only 100 labelled samples.
These approaches leverage the unlabelled data to help the network learn an underlying representation, while the labelled data guides the network towards separating the classes.
In this paper, we ask two questions: is it possible to create the small labelled data set required by semi-supervised methods purely using unsupervised techniques?
If so, can semi-supervised methods leverage this autonomously generated pseudo-labelled data set to deliver higher performance than state-of-the-art unsupervised approaches?
We answer both these questions in the affirmative.
We first find that prior approaches for identifying pseudo-labels Caron et al. (2018) ; Chen (2018); Lee (2013) perform poorly because of their low accuracy (Section 2).
To create a high accuracy pseudo-labelled data set autonomously, we use a combination of ensemble of deep networks with a custom graph clustering algorithm (Section 4).
We first train an ensemble of deep networks in an unsupervised manner.
Each network independently clusters the input.
We then compare two input data points.
If all of the networks agree that these two data points belong to the same cluster, we can be reasonably sure that these data points belong to the same class.
In this way, we identify all input data pairs belonging to the same class with high precision in a completely unsupervised manner.
In the next step, we use these high quality input pairs to generate a similarity graph, with the data points as nodes and edges between data points which are deemed to be similar by our ensemble.
From this graph, we extract tight clusters of data points, which serve as pseudo-labels.
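Illustrative sketch only (a simplification, not the paper's algorithm): link two points when every model in the ensemble co-clusters them, then keep sufficiently large connected components as pseudo-labelled sets; the thresholds and the diversity-aware cluster selection described in the paper are omitted.

```python
import numpy as np

def agreement_components(assignments, min_size=3):
    # assignments: (num_models, num_points) cluster ids from independent models.
    # Link two points only when every model puts them in the same cluster,
    # then keep connected components of that graph that are large enough.
    num_models, n = assignments.shape
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if np.all(assignments[:, i] == assignments[:, j]):
                parent[find(i)] = find(j)

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return [g for g in groups.values() if len(g) >= min_size]

rng = np.random.default_rng(0)
true_labels = rng.integers(0, 3, size=30)
ensemble = []
for _ in range(5):
    noisy = true_labels.copy()
    flip = rng.random(30) < 0.15                 # each model mislabels some points
    noisy[flip] = rng.integers(0, 3, size=int(flip.sum()))
    ensemble.append(noisy)
print(agreement_components(np.stack(ensemble), min_size=3))
```

A semi-supervised model would then be trained with these sets as pseudo-labels, and the whole procedure iterated.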
Note that, in this step, we do not cluster the entire dataset, but only a small subset on which we can get high precision.
Extracting high quality clusters from this graph while ensuring that the extracted clusters correspond to different classes is challenging.
We discuss our approach in Section 4.2.1 for solving this problem.
In this way, our method extracts unambiguous samples belonging to each class, which serves as pseudo-labels for semi-supervised learning.
For semi-supervised learning using the labels generated above, one could use ladder networks Rasmus et al. (2015) .
However, we found that ladder networks is unsuitable for the initial unsupervised clustering step as it can degenerate to outputting constant values for all inputs in the absence of unsupervised loss.
To enable unsupervised clustering, we augment ladder networks using information maximization Krause et al. (2010) to create the Ladder-IM, and with a dot product loss to create Ladder-Dot.
We show in Section 5 that Ladder-IM and Ladder-Dot, by themselves, also provide improvements over previous state of the art.
We use the same models for both the first unsupervised learning step as well as the subsequent pseudo-semi-supervised iterations.
Finally, the approach of finding high quality clusters using an ensemble, and using them as labels to train a new ensemble of semi-supervised models, is iterated, yielding continued improvements.
The large gains of our method mainly come from this iterative approach, which can, in some cases, yield up to 17% gains in accuracy over the base unsupervised models (see Section 5.5).
We name our pseudo-semi-supervised learning approach Kingdra 1 .
Kingdra is independent of the type of data set; we show examples of its use on both image and text data sets in Section 5.
This is in contrast to some previous approaches using CNNs, e.g. Chang et al. (2017) , Caron et al. (2018) , which are specialized for image data sets.
We perform unsupervised classification using Kingdra on several standard image (MNIST, CIFAR10, STL) and text (reuters, 20news) datasets.
On all these datasets, Kingdra is able to achieve higher clustering accuracy compared to current state-of-the-art deep unsupervised clustering techniques.
For example, on the CIFAR10 and 20news datasets, Kingdra is able to achieve classification accuracy of 54.6% and 43.9%, respectively, delivering 8-12% absolute gains over state of the art results Hu et al. (2017) ; Xie et al. (2016) .
Several techniques have been proposed in the literature for generating pseudo-labels (Caron et al. (2018) ; Chen (2018); Lee (2013) .
In Lee (2013) , the output class with the highest softmax value (Argmax) is taken to be the pseudo-label.
In Caron et al. (2018) , the authors perform K-means clustering on the feature vector and use the K-means clusters as pseudo-labels.
Finally, authors in Chen (2018) treat the softmax output as confidence and only label those items whose confidence value is above a high threshold.
Note that none of these techniques for identifying pseudo-labels have been applied in our context, i.e., for unsupervised clustering using semi-supervised models.
In this paper, we introduced Kingdra, a novel pseudo-semi-supervised learning approach for clustering.
Kingdra outperforms current state-of-the-art unsupervised deep learning based approaches, with 8-12% gains in absolute accuracy for CIFAR10 and 20news datasets.
As part of Kingdra, we proposed clustering ladder networks, Ladder-IM and Ladder-Dot, that works well in both unsupervised and semi-supervised settings.
While Kingdra performs well on the datasets we studied, the similarity-based graph clustering algorithm used has difficulty as the number of classes increases.
For example, for the datasets we evaluated, the thresholds t_pos and t_neg can simply be set to the number of models in the ensemble.
However, as the number of classes increases, these thresholds may need some tuning.
For CIFAR100, with 100 classes, our graph clustering algorithm is not able to identify 100 diverse classes effectively.
We are looking at improving the clustering algorithm as part of future work.
We are also evaluating adding diversity to the models in the ensemble, either via changing the model structure, size and/or through changing the standard deviation of random noise used in ladder networks. | Using ensembles and pseudo labels for unsupervised clustering | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:881 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
This paper concerns dictionary learning, i.e., sparse coding, a fundamental representation learning problem.
We show that a subgradient descent algorithm, with random initialization, can recover orthogonal dictionaries on a natural nonsmooth, nonconvex L1 minimization formulation of the problem, under mild statistical assumption on the data.
This is in contrast to previous provable methods that require either expensive computation or delicate initialization schemes.
Our analysis develops several tools for characterizing landscapes of nonsmooth functions, which might be of independent interest for provable training of deep networks with nonsmooth activations (e.g., ReLU), among other applications.
Preliminary synthetic and real experiments corroborate our analysis and show that our algorithm works well empirically in recovering orthogonal dictionaries.
Dictionary learning (DL), i.e. , sparse coding, concerns the problem of learning compact representations, i.e., given data Y , one tries to find a representation basis A and coefficients X, so that Y ≈ AX where X is most sparse.
DL has numerous applications especially in image processing and computer vision (Mairal et al., 2014) .
When posed in analytical form, DL seeks a transformation Q such that QY is sparse; in this sense DL can be considered as an (extremely!) primitive "deep" network (Ravishankar & Bresler, 2013) .Many
heuristic algorithms have been proposed to solve DL since the seminal work of Olshausen & Field (1996) , most of them surprisingly effective in practice (Mairal et al., 2014; Sun et al., 2015) . However
, understandings on when and how DL is solvable have only recently started to emerge. Under
appropriate generating models on A and X, Spielman et al. (2012) showed that complete (i.e., square, invertible) A can be recovered from Y , provided that X is ultra-sparse. Subsequent
works BID0 BID1 Chatterji & Bartlett, 2017; BID4 provided similar guarantees for overcomplete (i.e. fat) A, again in the ultra-sparse regime. The latter
methods are invariably based on nonconvex optimization with model-dependent initialization, rendering their practicality on real data questionable.The ensuing developments have focused on breaking the sparsity barrier and addressing the practicality issue. Convex relaxations
based on the sum-of-squares (SOS) SDP hierarchy can recover overcomplete A when X has linear sparsity BID6 Ma et al., 2016; Schramm & Steurer, 2017) , while incurring expensive computation (solving large-scale SDP's or large-scale tensor decomposition). By contrast, Sun et
al. (2015) showed that complete A can be recovered in the linear sparsity regime by solving a certain nonconvex problem with arbitrary initialization. However, the second-order
optimization method proposed there is still expensive. This problem is partially
addressed by Gilboa et al. (2018), which proved that first-order gradient descent with random initialization enjoys a similar performance guarantee.
A standing barrier toward practicality is dealing with nonsmooth functions.
To promote sparsity in the coefficients, the ℓ1 norm is the function of choice in practical DL, as is common in modern signal processing and machine learning BID10 : despite its nonsmoothness, this choice often admits highly scalable numerical methods, such as the proximal gradient method and the alternating direction method (Mairal et al., 2014) .
The analyses in Sun et al. (2015) ; Gilboa et al. (2018) , however, focused on characterizing the algorithm-independent function landscape of a certain nonconvex formulation of DL, which takes a smooth surrogate to the ℓ1 norm to get around the nonsmoothness.
This smoothing tactic introduced substantial analysis difficulty, and broke the practical advantage of computing with the simple ℓ1 function.
In this paper, we show that working directly with a natural ℓ1 norm formulation results in neat analysis and a practical algorithm.
We focus on the problem of
learning orthogonal dictionaries: given data {y_i}_{i∈[m]} generated as y_i = A x_i, where A ∈ R^{n×n} is a fixed unknown orthogonal matrix and each x_i ∈ R^n is an iid Bernoulli-Gaussian random vector with parameter θ ∈ (0, 1), recover A.
This statistical model is the same as in previous works (Spielman et al., 2012; Sun et al., 2015) .
Write Y := [y_1, . . . , y_m] and similarly X := [x_1, . . . , x_m].
We propose to recover A by solving the following nonconvex (due to the constraint), nonsmooth (due to the objective) optimization problem:
minimize_q (1/m) Σ_{i=1}^m |q^⊤ y_i| subject to ‖q‖_2 = 1.   (1.1)
Based on the statistical model, q^⊤ Y = q^⊤ A X has the highest sparsity when q is a column of A (up to sign) so that q^⊤ A is 1-sparse.
Spielman et al. (2012) formalized this intuition and optimized the same objective as Eq. (1.1) with a ‖q‖_∞ = 1 constraint, which only works when θ ∼ O(1/√n).
Sun et al. (2015) worked with the sphere constraint but replaced the ℓ1 objective with a smooth surrogate, introducing substantial analytical and computational deficiencies as alluded above.
In contrast, we show that with sufficiently many samples, the optimization landscape of formulation (1.1) is benign with high probability (over the randomness of X), and a simple Riemannian subgradient descent algorithm can provably recover A in polynomial time.
Theorem 1.1 (Main result, informal version of Theorem 3.1). Assume θ ∈ [1/n, 1/2]. For m ≥ Ω(θ^{-2} n^4 log^4 n), the following holds with high probability: there exists a poly(m, ε^{-1})-time algorithm, which runs Riemannian subgradient descent on formulation (1.1) from at most O(n log n) independent, uniformly random initial points, and outputs a set of vectors {â_1, . . . , â_n} such that, up to permutation and sign change, ‖â_i − a_i‖_2 ≤ ε for all i ∈ [n].
In words, our algorithm works also in the linear sparsity regime, the same as established in Sun et al. (2015) ; Gilboa et al. (2018) , at a lower sample complexity O(n^4) in contrast to the existing O(n^5.5) in Sun et al. (2015) .
As for the landscape, we show that (Theorems 3.4 and 3.6) each of the desired solutions {±a_i}_{i∈[n]} is a local minimizer of formulation (1.1) with a sufficiently large basin of attraction, so that a random initialization will land in one of the basins with at least constant probability.
To obtain the result, we integrate and develop elements from nonsmooth analysis (on Riemannian manifolds), set-valued analysis, and random set theory, which might be valuable for studying other nonconvex, nonsmooth optimization problems.
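Illustrative sketch only (not the paper's code): Riemannian subgradient descent on formulation (1.1) for a single direction, run with a decaying step size on a synthetic Bernoulli-Gaussian instance; the initialization, step sizes, iteration count, and the omission of restarts and rounding are all simplifications.

```python
import numpy as np

def riemannian_subgradient_dl(Y, steps=500, lr=0.1, seed=0):
    # Minimize (1/m) * sum_i |q^T y_i| over the unit sphere: project a
    # subgradient onto the tangent space at q, step, then renormalize.
    n, m = Y.shape
    rng = np.random.default_rng(seed)
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)
    for t in range(1, steps + 1):
        g = Y @ np.sign(Y.T @ q) / m          # subgradient of the l1 objective
        g_tan = g - (q @ g) * q               # tangent (Riemannian) component
        q = q - (lr / np.sqrt(t)) * g_tan     # decaying step size
        q /= np.linalg.norm(q)                # retract back to the sphere
    return q

# Toy instance: orthogonal A, Bernoulli-Gaussian X with sparsity theta.
rng = np.random.default_rng(1)
n, m, theta = 10, 5000, 0.3
A, _ = np.linalg.qr(rng.standard_normal((n, n)))
X = rng.standard_normal((n, m)) * (rng.random((n, m)) < theta)
q = riemannian_subgradient_dl(A @ X)
print(np.max(np.abs(A.T @ q)))   # approaches 1 when q aligns with a column of A
```

Repeating from multiple random initial points and collecting distinct accumulation points is what would be needed to recover the whole dictionary.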
This paper presents the first theoretical guarantee for orthogonal dictionary learning using subgradient descent on a natural ℓ1 minimization formulation.
Along the way, we develop tools for analyzing the optimization landscape of nonconvex nonsmooth functions, which could be of broader interest.For futute work, there is an O(n 2 ) sample complexity gap between what we established in Theorem 3.1, and what we observed in the simulations alongside previous results based on the SOS method BID6 Ma et al., 2016; Schramm & Steurer, 2017) .
As our main geometric result Theorem 3.6 already achieved tight bounds on the directional derivatives, further sample complexity improvement could potentially come out of utilizing second-order information such as the strong negative curvature (Lemma B.2), or careful algorithm-dependent analysis.While our result applies only to (complete) orthogonal dictionaries, a natural question is whether we can generalize to overcomplete dictionaries.
To date the only known provable algorithms for learning overcomplete dictionaries in the linear sparsity regime are based on the SOS method BID6 Ma et al., 2016; Schramm & Steurer, 2017) .
We believe that our nonsmooth analysis has the potential of handling overcomplete dictionaries: for reasonably well-conditioned overcomplete dictionaries A, each a_i (a column of A) makes a_i^⊤ A approximately 1-sparse, and so a_i^⊤ A X gives a noisy estimate of a certain row of X. So the same formulation as Eq. (1.1) intuitively still works.
We would like to leave that to future work.Nonsmooth phase retrieval and deep networks with ReLU mentioned in Section 1.1 are examples of many nonsmooth, nonconvex problems encountered in practice.
Most existing theoretical results on these problems tend to be technically vague about handling the nonsmooth points: they either prescribe a rule for choosing a subgradient element, which effectively disconnects theory and practice because numerical testing of nonsmooth points is often not reliable, or ignore the nonsmooth points altogether, assuming that practically numerical methods would never touch these points-this sounds intuitive but no formalism on this appears in the relevant literature yet.
Besides our work, (Laurent & von Brecht, 2017; Kakade & Lee, 2018 ) also warns about potential problems of ignoring nonsmooth points when studying optimization of nonsmooth functions in machine learning.
We need the Hausdorff metric to measure differences between nonempty sets.
For any set X and a point p in R n , the point-to-set distance is defined as DISPLAYFORM0 For any two sets X 1 , X 2 ∈ R n , the Hausdorff distance is defined as DISPLAYFORM1 Moreover, for any sets DISPLAYFORM2 (A.4) On the sets of nonempty, compact subsets of R n , the Hausdorff metric is a valid metric; particularly, it obeys the triangular inequality: for nonempty, compact subsets X, Y, Z ⊂ R n , DISPLAYFORM3 (A.5) See, e.g., Sec. 7.1 of Sternberg (2013) for a proof.
Lemma A.1 (Restatement of Lemma A.1).
For convex compact sets X, Y ⊂ R n , we have DISPLAYFORM4 where h S (u) .
= sup x∈S x, u is the support function associated with the set S. | Efficient dictionary learning by L1 minimization via a novel analysis of the non-convex non-smooth geometry. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:882 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We study model recovery for data classification, where the training labels are generated from a one-hidden-layer fully -connected neural network with sigmoid activations, and the goal is to recover the weight vectors of the neural network.
We prove that under Gaussian inputs, the empirical risk function using cross entropy exhibits strong convexity and smoothness uniformly in a local neighborhood of the ground truth, as soon as the sample complexity is sufficiently large.
This implies that if initialized in this neighborhood, which can be achieved via the tensor method, gradient descent converges linearly to a critical point that is provably close to the ground truth without requiring a fresh set of samples at each iteration.
To the best of our knowledge, this is the first global convergence guarantee established for the empirical risk minimization using cross entropy via gradient descent for learning one-hidden-layer neural networks, at the near-optimal sample and computational complexity with respect to the network input dimension.
Neural networks have attracted a significant amount of research interest in recent years due to the success of deep neural networks BID18 in practical domains such as computer vision and artificial intelligence BID24 BID15 BID27 .
However, the theoretical underpinnings behind such success remains mysterious to a large extent.
Efforts have been taken to understand which classes of functions can be represented by deep neural networks BID7 BID16 BID0 Telgarsky, 2016) , when (stochastic) gradient descent is effective for optimizing a non-convex loss function BID8 , and why these networks generalize well BID1 .One
important line of research that has attracted extensive attention is a model-recovery setup, i.e., given that the training samples (x i , y i ) ∼ (x, y)
are generated i.i.d. from a distribution D based on a neural network model with the ground truth parameter W , the goal is to recover the underlying model parameter W , which is important for the network to generalize well BID22 . Previous
studies along this topic can be mainly divided into two types of data generations. First, a
regression problem, for example, assumes that each sample y is generated as y = 1 K K k=1 φ(w k x), where
w k ∈ R d is the weight vector of the kth neuron, 1 ≤ k ≤ K, and the input x ∈ R d is Gaussian. This type
of regression problem has been studied in various settings. In particular
, BID28 studied the single-neuron model under ReLU activation, BID38 ) studied the onehidden-layer multi-neuron network model, and BID19 ) studied a two-layer feedforward networks with ReLU activations and identity mapping. Second, for
a classification problem, suppose each label y ∈ {0, 1} is drawn under the conditional distribution P(y = 1|x) = 1 K K k=1 φ(w k x), where w
k ∈ R d is the weight vector of the kth neuron, 1 ≤ k ≤ K, and the input x ∈ R d is Gaussian. Such a problem
has been studied in BID21 in the case with a single neuron.For both the regression and the classification settings, in order to recover the neural network parameters, all previous studies considered (stochastic) gradient descent over the squared loss, i.e., qu (W ; x, y) = DISPLAYFORM0
Furthermore, previous studies provided two types of statistical guarantees for such model recovery problems using the squared loss. More specifically
, BID38 showed that in the local neighborhood of the ground truth, the Hessian of the empirical loss function is positive definite for each given point under independent high probability event. Hence, their guarantee
for gradient descent to converge to the ground truth requires a fresh set of samples at every iteration, thus the total sample complexity will depend on the number of iterations. On the other hand, studies
such as BID21 BID28 establish certain types of uniform geometry such as strong convexity so that resampling per iteration is not needed for gradient descent to have guaranteed linear convergence as long as it enters such a local neighborhood. However, such a stronger statistical
guarantee without per-iteration resampling have only been shown for the squared loss function. In this paper, we aim at developing
such a strong statistical guarantee for the loss function in eq. (2), which is much more challenging but more practical than the squared loss for the classification problem.
In this paper, we have studied the model recovery of a one-hidden-layer neural network using the cross entropy loss in a multi-neuron classification problem.
In particular, we have characterized the sample complexity to guarantee local strong convexity in a neighborhood (whose size we have characterized as well) of the ground truth when the training data are generated from a classification model.
This guarantees that with high probability, gradient descent converges linearly to the ground truth if initialized properly.
In the future, it will be interesting to extend the analysis in this paper to a more general class of activation functions, particularly ReLU-like activations, and to more general network structures, such as convolutional neural networks BID10 BID37 . | We provide the first theoretical analysis of guaranteed recovery of one-hidden-layer neural networks under cross entropy loss for classification problems. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:883 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
With the deployment of neural networks on mobile devices and the necessity of transmitting neural networks over limited or expensive channels, the file size of trained models has been identified as a bottleneck.
We propose a codec for the compression
of neural networks which is based on transform coding for convolutional and dense layers and on clustering for biases and normalizations.
With this codec, we achieve average compression factors between 7.9 and 9.3 while the accuracy of the compressed networks for image classification decreases by only 1%–2%, respectively.
Deep neural networks spread to many scientific and industrial applications (1; 2; 3; 4).
Often, the necessity of large amounts of training data, long training duration and the computational complexity of the inference operation are noted as bottlenecks in deep learning pipelines.
More recently, the memory footprint of saved neural networks was recognized as a challenge for implementations in which neural networks are not executed on servers or in the cloud but on mobile devices or on embedded devices.
In these use cases, the storage capacities are limited and/or the neural networks need to be transmitted to the devices over limited transmission channels (e.g. app updates).
Therefore, an efficient compression of neural networks is desirable.
General-purpose compressors like Deflate (a combination of Lempel-Ziv-Storer-Szymanski and Huffman coding) perform poorly on neural networks because the networks consist of many slightly different floating-point weights. In this paper, we propose a complete codec pipeline for the compression of neural networks which relies on a transform coding method for the weights of convolutional and dense layers and a clustering-based compression method for biases and normalizations.
Our codec provides high coding efficiency, negligible impact on the desired output of the neural network (e.g. accuracy), reasonable complexity and is applicable to existing neural network models, i.e. no (iterative) retraining is required. Several related works have been proposed in the literature.
These works mainly rely on techniques like quantization and pruning.
The tensorflow framework provides a quantization method to convert the trained floating-point weights to 8 bit fixed-point weights.
We will demonstrate that considerable coding gains on top of those due to quantization can be achieved by our proposed methods.
Han et al. proposed the Deep Compression framework for the efficient compression of neural networks BID4 .
In addition to quantization, their method is based on an iterative pruning and retraining phase.
In contrast to Deep Compression, we aim at transparent compression of existing network models without the necessity of retraining and without modifying the network architecture.
It is known from other domains like video coding that transparent coding and coding modified content are different problems (6; 7).
Iandola et al. propose a novel network architecture called SqueezeNet which particularly aims at having as few weights in the network as possible BID7 .
We will demonstrate that our method can still reduce the size of this already optimized SqueezeNet network by a factor of up to 7.4.
It is observable that the filters in neural networks contain structural information not completely different from blocks in natural pictures.
Motivated by this observation, the encoder for convolutional filters is based on a two-dimensional discrete cosine transform (2D DCT) followed by a quantization step.
This combination is often referred to as transform coding. For the DCT, the transformation block size is set according to the size of the filter (e.g. a 7 × 7 DCT for a 7 × 7 filter).
Subsequent to the transformation, the coefficients are quantized.
The bit depth of the quantizer can be tuned according to the needs of the specific application.
Typical values are 5-6 bit/coefficient with only a small accuracy impact. The weights of dense layers (also referred to as fully-connected layers) and of 1 × 1 convolutions (no spatial filtering but filtering over the depth of the previous layer, typically used in networks for depth reduction) are arranged block-wise prior to transform coding. K-means clustering is used for the coding of the biases and normalizations.
The number of clusters is set analogously to the quantizer bit depth according to the quality settings.
Code books are generated for biases and normalizations.
Thereby, the usage of the clustering algorithm is beneficial if fewer bits are needed for coding the quantizer indices and the code book itself than for coding the values directly.
The clustering approach has the advantage that the distortion is smaller than for uniform quantization.
In consequence, the accuracy of the network is measured to be higher for a given number of quantizer steps.
However, the occurrence of code book indices is also more uniformly distributed.
Due to the higher entropy of this distribution, the compression factor is considerably smaller (see Sec. 3).
In particular, the Burrows-Wheeler transform and the move-to-front transform, which are both invoked for entropy coding, are put at a disadvantage by the uniform distribution.
We chose to use the same number of quantizer steps for all parameters.
For this reason, the clustering was chosen for those network parameters which are too sensitive to the higher distortion caused by uniform quantization. The processed data from the transform coding and from the clustering are entropy coded layer-wise using BZip2, serialized and written to the output file.
In addition, meta data is stored.
It includes the architecture of the layers in the network, shapes and dimensions of the filters, details on the block arrangements, scaling factors from the pre-scaling, scaling factors and offsets from the quantizer, and the code books for the clustering.
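As an illustration only (not code from the paper), the following minimal Python sketch shows the kind of transform-coding step described above: a 2D DCT over a single convolutional filter followed by uniform quantization of the coefficients. The use of SciPy, the 6-bit setting and the function names are assumptions made purely for this example.

```python
import numpy as np
from scipy.fft import dctn, idctn

def transform_code_filter(filt, bits=6):
    # Toy transform coding of one 2D convolutional filter:
    # 2D DCT with block size equal to the filter size, then uniform quantization.
    coeffs = dctn(filt, norm="ortho")
    scale = np.max(np.abs(coeffs)) / (2 ** (bits - 1) - 1)
    indices = np.round(coeffs / scale).astype(np.int8)  # quantizer indices (to be entropy coded)
    return indices, scale

def decode_filter(indices, scale):
    return idctn(indices.astype(np.float32) * scale, norm="ortho")

filt = np.random.randn(7, 7).astype(np.float32)   # a 7x7 filter, matching the example above
indices, scale = transform_code_filter(filt)
print(np.max(np.abs(filt - decode_filter(indices, scale))))  # small quantization error
```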
In this paper, we proposed a codec for the compression of neural networks which is based on transform coding and clustering.
The codec enables a low-complexity and highly efficient transparent compression of neural networks.
The impact on the neural network performance is negligible. | Our neural network codec (which is based on transform coding and clustering) enables a low-complexity and highly efficient transparent compression of neural networks. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:884 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
As Machine Learning (ML) gets applied to security-critical or sensitive domains, there is a growing need for integrity and privacy for outsourced ML computations.
A pragmatic solution comes from Trusted Execution Environments (TEEs), which use hardware and software protections to isolate sensitive computations from the untrusted software stack.
However, these isolation guarantees come at a price in performance, compared to untrusted alternatives.
This paper initiates the study of high performance execution of Deep Neural Networks (DNNs) in TEEs by efficiently partitioning DNN computations between trusted and untrusted devices.
Building upon an efficient outsourcing scheme for matrix multiplication, we propose Slalom, a framework that securely delegates execution of all linear layers in a DNN from a TEE (e.g., Intel SGX or Sanctum) to a faster, yet untrusted, co-located processor.
We evaluate Slalom by running DNNs in an Intel SGX enclave, which selectively delegates work to an untrusted GPU.
For canonical DNNs (VGG16, MobileNet and ResNet variants) we obtain 6x to 20x increases in throughput for verifiable inference, and 4x to 11x for verifiable and private inference.
Machine learning is increasingly used in sensitive decision making and security-critical settings.
At the same time, the growth in both cloud offerings and software stack complexity widens the attack surface for ML applications.
This raises the question of integrity and privacy guarantees for ML computations in untrusted environments, in particular for ML tasks outsourced by a client to a remote server.
Prominent examples include cloud-based ML APIs (e.g., a speech-to-text application that consumes user-provided data) or general ML-as-a-Service platforms. Trusted Execution Environments (TEEs), e.g., Intel SGX BID31 , ARM TrustZone BID0 or Sanctum , offer a pragmatic solution to this problem.
TEEs use hardware and software protections to isolate sensitive code from other applications, while attesting to its correct execution.
Running outsourced ML computations in TEEs provides remote clients with strong privacy and integrity guarantees. For outsourced ML computations, TEEs outperform pure cryptographic approaches (e.g., BID16 BID34 BID15 BID26 ) by multiple orders of magnitude.
At the same time, the isolation guarantees of TEEs still come at a steep price in performance, compared to untrusted alternatives (i.e., running ML models on contemporary hardware with no security guarantees).
For instance, Intel SGX BID24 incurs significant overhead for memory intensive tasks BID36 BID20 , has difficulties exploiting multi-threading, and is currently limited to desktop CPUs that are outmatched by untrusted alternatives (e.g., GPUs or server CPUs).
Thus, our thesis is that for modern ML workloads, TEEs will be at least an order of magnitude less efficient than the best available untrusted hardware. Contributions.
We propose Slalom, a framework for efficient DNN inference in any trusted execution environment (e.g., SGX or Sanctum).
To evaluate Slalom, we build a lightweight DNN library for Intel SGX, which may be of independent interest.
Our library allows for outsourcing all linear layers to an untrusted GPU without compromising integrity or privacy.
Our code is available at https://github.com/ftramer/slalom. We formally prove Slalom's security, and evaluate it on multiple canonical DNNs with a variety of computational costs: VGG16 BID41 , MobileNet (Howard et al., 2017) , and ResNets BID21 .
Compared to running all computations in SGX, outsourcing linear layers to an untrusted GPU increases throughput (as well as energy efficiency) by 6× to 20× for verifiable inference, and by 4× to 11× for verifiable and private inference.
Finally, we discuss open challenges towards efficient verifiable training of DNNs in TEEs.
This paper has studied the efficiency of evaluating a DNN in a Trusted Execution Environment (TEE) to provide strong integrity and privacy guarantees.
We explored new approaches for segmenting a DNN evaluation to securely outsource work from a trusted environment to a faster co-located but untrusted processor. We designed Slalom, a framework for efficient DNN evaluation that outsources all linear layers from a TEE to a GPU.
Slalom leverages Freivalds' algorithm for verifying the correctness of linear operators, and additionally encrypts inputs with precomputed blinding factors to preserve privacy.
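For readers unfamiliar with the verification primitive mentioned above, the sketch below shows the textbook form of Freivalds' check for an outsourced matrix product; it is not taken from the Slalom codebase, and the NumPy implementation, function name and trial count are assumptions made for illustration only.

```python
import numpy as np

def freivalds_check(A, B, C, trials=10):
    # Probabilistically verify that C == A @ B.
    # Each trial costs O(n^2) (three matrix-vector products) instead of the O(n^3)
    # needed to recompute A @ B; a wrong C is accepted with probability at most
    # 2**-trials when r is a random 0/1 vector.
    n = B.shape[1]
    for _ in range(trials):
        r = np.random.randint(0, 2, size=(n, 1))
        if not np.array_equal(A @ (B @ r), C @ r):
            return False  # mismatch found: C is certainly not A @ B
    return True  # accepted: C is correct with high probability

A = np.random.randint(0, 10, (64, 64))
B = np.random.randint(0, 10, (64, 64))
C = A @ B
print(freivalds_check(A, B, C))   # True
C[0, 0] += 1
print(freivalds_check(A, B, C))   # False (except with probability <= 2**-10)
```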
Slalom can work with any TEE and we evaluated its performance using Intel SGX on various workloads.
For canonical DNNs (VGG16, MobileNet and ResNet variants), we have shown that Slalom boosts inference throughput without compromising security. Securely outsourcing matrix products from a TEE has applications in ML beyond DNNs (e.g., non-negative matrix factorization, dimensionality reduction, etc.). We have also explored avenues and challenges towards applying similar techniques to DNN training, an interesting direction for future work.
Finally, our general approach of outsourcing work from a TEE to a faster co-processor could be applied to other problems which have fast verification algorithms, e.g., those considered in BID30 BID51 . | We accelerate secure DNN inference in trusted execution environments (by a factor 4x-20x) by selectively outsourcing the computation of linear layers to a faster yet untrusted co-processor. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:885 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
While deep neural networks are a highly successful model class, their large memory footprint puts considerable strain on energy consumption, communication bandwidth, and storage requirements.
Consequently, model size reduction has become an utmost goal in deep learning.
A typical approach is to train a set of deterministic weights, while applying certain techniques such as pruning and quantization, in order that the empirical weight distribution becomes amenable to Shannon-style coding schemes.
However, as shown in this paper, relaxing weight determinism and using a full variational distribution over weights allows for more efficient coding schemes and consequently higher compression rates.
In particular, following the classical bits-back argument, we encode the network weights using a random sample, requiring only a number of bits corresponding to the Kullback-Leibler divergence between the sampled variational distribution and the encoding distribution.
By imposing a constraint on the Kullback-Leibler divergence, we are able to explicitly control the compression rate, while optimizing the expected loss on the training set.
The employed encoding scheme can be shown to be close to the optimal information-theoretical lower bound, with respect to the employed variational family.
Our method sets new state-of-the-art in neural network compression, as it strictly dominates previous approaches in a Pareto sense: On the benchmarks LeNet-5/MNIST and VGG-16/CIFAR-10, our approach yields the best test performance for a fixed memory budget, and vice versa, it achieves the highest compression rates for a fixed test performance.
With the celebrated success of deep learning models and their ever increasing presence, it has become a key challenge to increase their efficiency.
In particular, the rather substantial memory requirements in neural networks can often conflict with storage and communication constraints, especially in mobile applications.
Moreover, as discussed in BID4 , memory accesses are up to three orders of magnitude more costly than arithmetic operations in terms of energy consumption.
Thus, compressing deep learning models has become a priority goal with a beneficial economic and ecological impact.
Traditional approaches to model compression usually rely on three main techniques: pruning, quantization and coding.
For example, Deep Compression BID5 proposes a pipeline employing all three of these techniques in a systematic manner.
From an information-theoretic perspective, the central routine is coding, while pruning and quantization can be seen as helper heuristics to reduce the entropy of the empirical weight-distribution, leading to shorter encoding lengths BID15 .
Also, the recently proposed Bayesian Compression BID13 falls into this scheme, despite being motivated by the so-called bits-back argument BID8 which theoretically allows for higher compression rates.
While the bits-back argument certainly motivated the use of variational inference in Bayesian Compression, the downstream encoding is still akin to Deep Compression (and other approaches).
In particular, the variational distribution is merely used to derive a deterministic set of weights, which is subsequently encoded with Shannonstyle coding.
This approach, however, does not fully exploit the coding efficiency postulated by the bits-back argument. In this paper, we step aside from the pruning-quantization pipeline and propose a novel coding method which approximately realizes bits-back efficiency.
In particular, we refrain from constructing a deterministic weight-set but rather encode a random weight-set from the full variational posterior.
This is fundamentally different from first drawing a weight-set and subsequently encoding it -this would be no more efficient than previous approaches.
Rather, the coding scheme developed here is allowed to pick a random weight-set which can be cheaply encoded.
By using results from BID6 , we show that such a coding scheme always exists and that the bits-back argument indeed represents a theoretical lower bound for its coding efficiency.
Moreover, we propose a practical scheme which produces an approximate sample from the variational distribution and which can indeed be encoded with this efficiency.
Since our algorithm learns a distribution over weight-sets and derives a random message from it, while minimizing the resulting code length, we dub it Minimal Random Code Learning (MIRACLE). From
a practical perspective, MIRACLE has the advantage that it offers explicit control over the expected loss and the compression size. This
is distinct from previous techniques, which require tedious tuning of various hyper-parameters and/or thresholds in order to achieve a certain coding goal. In our
method, we can simply control the KL-divergence using a penalty factor, which directly reflects the achieved code length (plus a small overhead), while simultaneously optimizing the expected training loss. As a result
, we were able to trace the trade-off curve for compression size versus classification performance ( FIG4 ). We clearly
outperform previous state-of-the-art in a Pareto sense: For any desired compression rate, our encoding achieves better performance on the test set; vice versa, for a certain performance on the test set, our method achieves the highest compression. To summarize
, our main contributions are:
• We introduce MIRACLE, an innovative compression algorithm that exploits the noise resistance of deep learning models by training a variational distribution and efficiently encoding a random set of weights.
• Our method is easy to implement and offers explicit control over the loss and the compression size.
• We provide theoretical justification that our algorithm gets close to the theoretical lower bound on the encoding length.
• The potency of MIRACLE is demonstrated on two common compression tasks, where it clearly outperforms previous state-of-the-art methods for compressing neural networks.
In the following section, we discuss related work and introduce required background. In Section 3
we introduce our method. Section 4 presents
our experimental results and Section 5 concludes the paper.
In this paper we followed through the philosophy of the bits-back argument for the goal of coding model parameters.
The basic insight here is that restricting to a single deterministic weight-set and aiming to code it in classic Shannon style is greedy and in fact sub-optimal.
Neural networks -and other deep learning models -are highly overparameterized, and consequently there are many "good" parameterizations.
Thus, rather than focusing on a single weight set, we showed that this fact can be exploited for coding, by selecting a "cheap" weight set out of the set of "good" ones.
Our algorithm is backed by solid recent information-theoretic insights, yet it is simple to implement.
We demonstrated that the presented coding algorithm clearly outperforms previous state-of-the-art.
An important question remaining for future work is how efficient MIRACLE can be made in terms of memory accesses and consequently for energy consumption and inference time.
There lies clear potential in this direction, as any single weight can be recovered by its block-index and relative index within each block.
By smartly keeping track of these addresses, and using pseudo-random generators as algorithmic lookup-tables, we could design an inference machine which is able to directly run our compressed models, which might lead to considerable savings in memory accesses.
This is shown by proving that q(w) − p_i(w) ≤ q(w)(1 − p(w))^i for i ∈ N. In order to bound the encoding length, one has to first show that if the accepted sample has index i*, then E[log i*] ≤ KL(q||p) + O(1). Following this, one can employ the prefix-free binary encoding of BID16 . Let l(n) be the length of the encoding for n ∈ N using the encoding scheme proposed by BID16 . Their method is proven to have |l(n)| = log n + 2 log log(n + 1) + O(1), from which the upper bound E[|l(i*)|] ≤ KL(q||p) + 2 log(KL(q||p) + 1) + O(1) follows. | This paper proposes an effective method to compress neural networks based on recent results in information theory. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:886 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Most existing neural networks for learning graphs deal with the issue of permutation invariance by conceiving of the network as a message passing scheme, where each node sums the feature vectors coming from its neighbors.
We argue that this imposes a limitation on their representation power, and instead propose a new general architecture for representing objects consisting of a hierarchy of parts, which we call Covariant Compositional Networks (CCNs).
Here covariance means that the activation of each neuron must transform in a specific way under permutations, similarly to steerability in CNNs.
We achieve covariance by making each activation transform according to a tensor representation of the permutation group, and derive the corresponding tensor aggregation rules that each neuron must implement.
Experiments show that CCNs can outperform competing methods on some standard graph learning benchmarks.
Learning on graphs has a long history in the kernels literature, including approaches based on random walks BID14 BID1 BID11 , counting subgraphs BID35 , spectral ideas BID41 , label propagation schemes with hashing BID36 (Neumann et al., 2016) , and even algebraic ideas BID21 .
Many of these papers address moderate-size problems in chemo- and bioinformatics, and the way they represent graphs is essentially fixed. Recently, with the advent of deep learning and much larger datasets, a sequence of neural-network-based approaches has appeared to address the same problem, starting with BID33 .
In contrast to the kernels framework, neural networks effectively integrate the classification or regression problem at hand with learning the graph representation itself, in a single, end-to-end system.
In the last few years, there has been a veritable explosion in research activity in this area.
Some of the proposed graph learning architectures BID8 BID18 BID29 directly seek inspiration from the type of classical CNNs that are used for image recognition BID25 (Krizhevsky et al., 2012) .
These methods involve first fixing a vertex ordering, then moving a filter across vertices while doing some computation as a function of the local neighborhood to generate a representation.
This process is then repeated multiple times like in classical CNNs to build a deep graph representation.
Other notable works on graph neural networks include BID26 BID34 BID0 BID20 .
Very recently, BID15 showed that many of these approaches can be seen to be specific instances of a general message passing formalism, and coined the term message passing neural networks (MPNNs) to refer to them collectively. While MPNNs have been very successful in applications and are an active field of research, they differ from classical CNNs in a fundamental way: the internal feature representations in CNNs are equivariant to transformations of the inputs such as translations and rotations BID4 , whereas the internal representations in MPNNs are fully invariant.
This is a direct result of the fact that MPNNs deal with the permutation invariance issue in graphs simply by summing the messages coming from each neighbor.
In this paper we argue that this is a serious limitation that restricts the representation power of MPNNs. MPNNs are ultimately compositional (part-based) models that build up the representation of the graph from the representations of a hierarchy of subgraphs.
To address the covariance issue, we study the covariance behavior of such networks in general, introducing a new general class of neural network architectures, which we call compositional networks (comp-nets).
One advantage of this generalization is that instead of focusing attention on the mechanics of how information propagates from node to node, it emphasizes the connection to convolutional networks; in particular, it shows that what is missing from MPNNs is essentially the analog of steerability. Steerability implies that the activations (feature vectors) at a given neuron must transform according to a specific representation (in the algebraic sense) of the symmetry group of its receptive field, in our case the group of permutations, S_m.
In this paper we only consider the defining representation and its tensor products, leading to first, second, third etc. order tensor activations.
We derive the general form of covariant tensor propagation in comp-nets, and find that each "channel" in the network corresponds to a specific way of contracting a higher order tensor to a lower order one.
Note that here by tensor activations we mean not just that each activation is expressed as a multidimensional array of numbers (as the word is usually used in the neural networks literature), but also that it transforms in a specific way under permutations, which is a more stringent criterion.
The parameters of our covariant comp-nets are the entries of the mixing matrix that prescribe how these channels communicate with each other at each node.
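As a generic illustration (not the paper's actual aggregation rules), the toy NumPy snippet below shows what it means for different "channels" to correspond to different ways of contracting a higher-order tensor, and checks the covariance property that permuting the receptive field permutes each contracted output consistently. All names and shapes are assumptions made for this example.

```python
import numpy as np

m = 5
T = np.random.randn(m, m, m)                 # a third-order activation tensor over m nodes

# Different index contractions give different lower-order "channels".
c1 = np.einsum("iij->j", T)                  # trace over the first two indices
c2 = np.einsum("iji->j", T)                  # trace over the first and third indices
c3 = np.einsum("ijk->i", T)                  # sum out the last two indices

# Covariance check: permuting the receptive field permutes each channel the same way.
perm = np.random.permutation(m)
Tp = T[perm][:, perm][:, :, perm]            # relabel all three indices by the permutation
print(np.allclose(np.einsum("iij->j", Tp), c1[perm]))  # True
print(np.allclose(np.einsum("ijk->i", Tp), c3[perm]))  # True
```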
Our experiments show that this new architecture can beat scalar message passing neural networks on several standard datasets.
On the subsampled HCEP dataset, CCN outperforms all other methods by a very large margin.
For the graph kernels datasets, an SVM with the Weisfeiler-Lehman kernels achieves the highest accuracy on NCI1 and NCI109, while CCN wins on MUTAG and PTC.
Perhaps this poor performance is to be expected, since the datasets are small and neural network approaches usually require tens of thousands of training examples at minimum to be effective.
Indeed, neural graph fingerprints and PSCN also perform poorly compared to the Weisfeiler-Lehman kernels. In the QM9 experiments, CCN beats the three other algorithms in both mean absolute error and root mean squared error.
It should be noted that BID15 obtained stronger results on QM9, but we cannot properly compare our results with theirs because our experiments only use the adjacency matrices and atom labels of each node, while theirs includes comprehensive chemical features that better inform the target quantum properties.
We have presented a general framework called covariant compositional networks (CCNs) for constructing covariant graph neural networks, which encompasses other message passing approaches as special cases, but takes a more general and principled approach to ensuring covariance with respect to permutations.
Experimental results on several benchmark datasets show that CCNs can outperform other state-of-the-art algorithms.
clearly true, since f_a = f'_a = ξ(a). Now assume that it is true for all nodes with height up to h*. For any node n_a with h(a) = h* + 1, f_a = Φ(f_{c_1}, f_{c_2}, ..., f_{c_k}), where each of the children c_1, ..., c_k is of height at most h*, therefore f_a = Φ(f_{c_1}, f_{c_2}, ..., f_{c_k}) = Φ(f'_{c_1}, f'_{c_2}, ..., f'_{c_k}) = f'_a. Thus, f_a = f'_a for every node in G. The proposition follows by φ(G) = f_r = f'_r = φ(G').
Proof of Proposition 3. Let G, G', N and N' be as in Definition 5. As in Definition 6, for each node (neuron) n_i in N there is a node n_j in N' such that their receptive fields are equivalent up to permutation. That is, if |P_i| = m, then |P_j| = m, and there is a permutation π ∈ S_m such that if P_i = (e_{p_1}, ..., e_{p_m}) and P_j = (e_{q_1}, ..., e_{q_m}), then e_{q_{π(a)}} = e_{p_a}. By covariance, then f_j = R_π(f_i). Now let G'' be a third equivalent object, and N'' the corresponding comp-net. N'' must also have a node, n_k, that corresponds to n_i and n_j. In particular, letting its receptive field be P_k = (e_{r_1}, ..., e_{r_m}), there is a permutation σ ∈ S_m for which e_{r_{σ(b)}} = e_{q_b}. Therefore, f_k = R_σ(f_j). At the same time, n_k is also in correspondence with n_i. In particular, letting τ = σπ (which corresponds to first applying the permutation π, then applying σ), e_{r_{τ(a)}} = e_{p_a}, and therefore f_k = R_τ(f_i). Hence, the {R_π} maps must satisfy R_{σπ} = R_σ ∘ R_π.
Case 4. Follows directly from 3. Case 5. Finally, if A_1, ..., A_u are k'th order P-tensors and C = ∑_j α_j A_j, then C'_{i_1,...,i_k} = ∑_j α_j [A_j]_{π^{-1}(i_1),...,π^{-1}(i_k)} = C_{π^{-1}(i_1),...,π^{-1}(i_k)}, so C is a k'th order P-tensor.
Proof of Proposition 5. Under the action of a permutation π ∈ S_m on P_b, χ (dropping the a→b superscript) transforms to χ', where χ'_{i,j} = χ_{π^{-1}(i),j}. However, this can also be written as [display equation omitted]. Therefore, F_{i_1,...,i_k} transforms to [display equation omitted], so F is a P-tensor.
Proof of Proposition 6. By Proposition 5, under the action of any permutation π, each of the F_{p_j} slices of F transforms as [display equation omitted]. At the same time, π also permutes the slices amongst each other according to [display equation omitted], so F is a (k+1)'th order P-tensor.
Proof of Proposition 7. Under any permutation π ∈ S_m of P_i, A↓_{P_i} transforms to A↓'_{P_i}, where [A↓'_{P_i}]_{π(a),π(b)} = [A↓_{P_i}]_{a,b}. Therefore, A↓_{P_i} is a second order P-tensor. By the first case of Proposition 4, F ⊗ A↓_{P_i} is then a (k+2)'th order P-tensor. | A general framework for creating covariant graph neural networks | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:887 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
In recent years, three-dimensional convolutional neural networks (3D CNNs) have been intensively applied in video analysis and action recognition and have achieved good performance.
However, 3D CNN leads to massive computation and storage consumption, which hinders its deployment on mobile and embedded devices.
In this paper, we propose a three-dimensional regularization-based pruning method to assign different regularization parameters to different weight groups based on their importance to the network.
Our experiments show that the proposed method outperforms other popular methods in this area.
In recent years, convolutional neural networks (CNNs) have developed rapidly and achieved remarkable success in computer vision tasks such as identification, classification and segmentation.
However, due to the lack of motion modeling, these image-based end-to-end features cannot be directly applied to videos.
In BID0 BID1 , the authors use three-dimensional convolutional networks (3D CNN) to identify human actions in videos.
Tran et al. proposed a 3D CNN for action recognition which contains 1.75 million parameters BID2 .
The development of 3D CNN also brings challenges because of its higher dimensions.
This leads to massive computing and storage consumption, which hinders its deployment on mobile and embedded devices. In order to reduce the computation cost, researchers have proposed methods to compress CNN models, including knowledge distillation BID3 , parameter quantization BID4 BID5 , matrix decomposition BID6 and parameter pruning BID7 .
However, all of the above methods are based on two-dimensional convolution.
In this paper, we expand the idea of BID8 to 3D CNN acceleration.
The main idea is to add group regularization terms to the objective function and prune weight groups gradually, where the regularization parameters for different weight groups are assigned differently according to some importance criteria.
In this paper, we implement the regularization-based method for 3D CNN acceleration.
By assigning different regularization parameters to different weight groups according to the importance estimation, we gradually prune weight groups in the network.
The proposed method achieves better performance than two other popular methods in this area. | In this paper, we propose a three-dimensional regularization-based pruning method to accelerate the 3D-CNN. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:888 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
In this paper, we propose data statements as a design solution and professional practice for natural language processing technologists, in both research and development — through the adoption and widespread use of data statements, the field can begin to address critical scientific and ethical issues that result from the use of data from certain populations in the development of technology for other populations.
We present a form that data statements can take and explore the implications of adopting them as part of regular practice.
We argue that data statements will help alleviate issues related to exclusion and bias in language technology; lead to better precision in claims about how NLP research can generalize and thus better engineering results; protect companies from public embarrassment; and ultimately lead to language technology that meets its users in their own preferred linguistic style and furthermore does not misrepresent them to others.
** To appear in TACL **
As technology enters widespread societal use it is important that we, as technologists, think critically about how the design decisions we make and systems we build impact people -including not only users of the systems but also other people who will be affected by the systems without directly interacting with them.
For this paper, we focus on natural language processing (NLP) technology.
Potential adverse impacts include NLP systems that fail to work for specific subpopulations (e.g. children or speakers of language varieties which are not supported by training or test data) or systems that reify and reinforce biases present in training data (e.g. a resume-review system that ranks female candidates as less qualified for computer programming jobs because of biases present in training text).
There are both scientific and ethical reasons to be concerned.
Scientifically, there is the issue of generalizability of results; ethically, the potential for significant real-world harms.
While there is increasing interest in ethics in NLP, there remains the open and urgent question of how we integrate ethical considerations into the everyday practice of our field.
This question has no simple answer, but rather will require a constellation of multi-faceted solutions. Toward that end, and drawing on value sensitive design BID22 , this paper contributes one new professional practice - called data statements - which we argue will bring about improvements in engineering and scientific outcomes while also enabling more ethically responsive NLP technology.
A data statement is a characterization of a dataset which provides context to allow developers and users to better understand how experimental results might generalize, how software might be appropriately deployed, and what biases might be reflected in systems built on the software.
In developing this practice, we draw on analogous practices from the fields of psychology and medicine that require some standardized information about the populations studied (e.g. APA 2009; BID41 BID26 BID40 ).
Though the construct of data statements applies more broadly, in this paper we focus specifically on data statements for NLP systems.
Data statements should be included in most writing on NLP including: papers presenting new datasets, papers reporting experimental work with datasets, and documentation for NLP systems.
Data statements should help us as a field engage with the ethical issues of exclusion, overgeneralization, and underexposure BID30 .
Furthermore, as data statements bring our datasets and their represented populations into better focus, they should also help us as a field deal with scientific issues of generalizability and reproducibility.
Adopting this practice will position us to better understand and describe our results and, ultimately, do better and more ethical science and engineering.
We begin by defining terms ( §2), discuss why NLP needs data statements ( §3) and relate our proposal to current practice ( §4).
Next is the substance of our contribution: a detailed proposal for data statements for NLP ( §5), illustrated with two case studies ( §6).
In §7 we discuss how data statements can mitigate bias and use the technique of 'value scenarios' to envision potential effects of their adoption.
Finally, we relate data statements to similar emerging proposals ( §8), make recommendations for how to implement and promote the uptake of data statements ( §9), and lay out considerations for tech policy ( §10).
As researchers and developers working on technology in widespread use, capable of impacting people beyond its direct users, we have an obligation to consider the ethical implications of our work.
This will only happen reliably if we find ways to integrate such thought into our regular practice.
In this paper, we have put forward one specific, concrete proposal which we believe will help with issues related to exclusion and bias in language technology: the practice of including 'data statements' in all publications and documentation for all NLP systems.We believe this practice will have beneficial effects immediately and into the future: In the short term, it will foreground how our data does and doesn't represent the world (and the people our systems will impact).
In the long term, it should enable research that specifically addresses issues of bias and exclusion, promote the development of more representative datasets, and make it easier and more normative for researchers to take stakeholder values into consideration as they work.
In foregrounding the information about the data we work with, we can work toward making sure that the systems we build work for diverse populations and also toward making sure we are not teaching computers about the world based on the world views of a limited subset of people. Granted, it will take time and experience to develop the skill of writing carefully crafted data statements.
However, we see great potential benefits: For the scientific community, researchers will be better able to make precise claims about how results should generalize and perform more targeted experiments around reproducing results for datasets that differ in specific characteristics.
For industry, we believe that incorporating data statements will encourage the kind of conscientious software development that protects companies' reputations (by avoiding public embarrassment) and makes them more competitive (by creating systems used more fluidly by more people).
For the public at large, data statements are one piece of a larger collection of practices that will enable the development of NLP systems that equitably serve the interests of users and indirect stakeholders. | A practical proposal for more ethical and responsive NLP technology, operationalizing transparency of test and training data | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:889 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
To communicate with new partners in new contexts, humans rapidly form new linguistic conventions.
Recent language models trained with deep neural networks are able to comprehend and produce the existing conventions present in their training data, but are not able to flexibly and interactively adapt those conventions on the fly as humans do.
We introduce a repeated reference task as a benchmark for models of adaptation in communication and propose a regularized continual learning framework that allows an artificial agent initialized with a generic language model to more accurately and efficiently understand their partner over time.
We evaluate this framework through simulations on COCO and in real-time reference game experiments with human partners.
Linguistic communication depends critically on shared knowledge about the meanings of words BID9 .
However, the real-world demands of communication often require speakers and listeners to go beyond dictionary meanings to understand one another BID0 BID15 .
The social world continually presents new communicative challenges, and agents must continually coordinate on new meanings to meet them. For example, consider a nurse visiting a bed-ridden patient in a cluttered home.
The first time they ask the nurse to retrieve a particular medication, the patient must painstakingly refer to unfamiliar pills, e.g. "the vasoprex-tecnoblek meds for my blood pressure, in a small bluish bottle, on the bookcase in my bathroom."
After a week of care, however, they may just ask for their "Vasotec." This type of flexible language use poses a challenge for models of language in machine learning.
Approaches based on deep neural networks typically learn a monolithic meaning function during training, with fixed weights during use.
For an in-home robot to communicate as flexibly and efficiently with patients as a human nurse, it must be equipped with a continual learning mechanism.
Such a mechanism would present two specific advantages for interaction and communication applications.
First, to the extent that current models have difficulty communicating in a new setting, an adaptive approach can quickly improve performance on the relevant subset of language.
Second, for human-robot contexts, an adaptive model enables speakers to communicate more efficiently as they build up common ground, remaining understandable while expending significantly fewer words as humans naturally do BID1 . In this paper, we introduce a benchmark communication task and general continual learning framework for transforming neural language models into adaptive models that can be deployed in real-time interactions with other agents. Our key insight is that through continual interactions with the same partner in a shared context, an adaptive listener can more effectively communicate with its partner (FIG0). We are motivated by hierarchical Bayesian approaches to task-specific adaptation. Our approach integrates two core components: (i) a loss function combining speaker and listener information, and (ii) a regularization scheme for fine-tuning model weights without overfitting.
Human language use is flexible, continuously adapting to the needs of the current situation.
In this paper, we introduced a challenging repeated reference game benchmark for artificial agents, which requires such adaptability to succeed.
We proposed a continual learning approach that forms context-specific conventions by adapting general-purpose semantic knowledge.
Even when models based on general-purpose knowledge perform poorly, our approach allows human speakers working with adapted variants of such models to become more accurate and more efficient over time. | We propose a repeated reference benchmark task and a regularized continual learning approach for adaptive communication with humans in unfamiliar domains | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:89 |