|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:34:29.386130Z" |
|
}, |
|
"title": "Mode recovery in neural autoregressive sequence modeling", |
|
"authors": [ |
|
{ |
|
"first": "Ilia", |
|
"middle": [], |
|
"last": "Kulikov", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "New York University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Sean", |
|
"middle": [], |
|
"last": "Welleck", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Washington", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "New York University", |
|
"location": {} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Despite its wide use, recent studies have revealed unexpected and undesirable properties of neural autoregressive sequence models trained with maximum likelihood, such as an unreasonably high affinity to short sequences after training and to infinitely long sequences at decoding time. We propose to study these phenomena by investigating how the modes, or local maxima, of a distribution are maintained throughout the full learning chain of the ground-truth, empirical, learned and decodinginduced distributions, via the newly proposed mode recovery cost. We design a tractable testbed where we build three types of groundtruth distributions: (1) an LSTM based structured distribution, (2) an unstructured distribution where probability of a sequence does not depend on its content, and (3) a product of these two which we call a semi-structured distribution. Our study reveals both expected and unexpected findings. First, starting with data collection, mode recovery cost strongly relies on the ground-truth distribution and is most costly with the semi-structured distribution. Second, after learning, mode recovery cost from the ground-truth distribution may increase or decrease compared to data collection, with the largest cost degradation occurring with the semi-structured ground-truth distribution. Finally, the ability of the decodinginduced distribution to recover modes from the learned distribution is highly impacted by the choices made earlier in the learning chain. We conclude that future research must consider the entire learning chain in order to fully understand the potentials and perils and to further improve neural autoregressive sequence models.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Despite its wide use, recent studies have revealed unexpected and undesirable properties of neural autoregressive sequence models trained with maximum likelihood, such as an unreasonably high affinity to short sequences after training and to infinitely long sequences at decoding time. We propose to study these phenomena by investigating how the modes, or local maxima, of a distribution are maintained throughout the full learning chain of the ground-truth, empirical, learned and decodinginduced distributions, via the newly proposed mode recovery cost. We design a tractable testbed where we build three types of groundtruth distributions: (1) an LSTM based structured distribution, (2) an unstructured distribution where probability of a sequence does not depend on its content, and (3) a product of these two which we call a semi-structured distribution. Our study reveals both expected and unexpected findings. First, starting with data collection, mode recovery cost strongly relies on the ground-truth distribution and is most costly with the semi-structured distribution. Second, after learning, mode recovery cost from the ground-truth distribution may increase or decrease compared to data collection, with the largest cost degradation occurring with the semi-structured ground-truth distribution. Finally, the ability of the decodinginduced distribution to recover modes from the learned distribution is highly impacted by the choices made earlier in the learning chain. We conclude that future research must consider the entire learning chain in order to fully understand the potentials and perils and to further improve neural autoregressive sequence models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Neural autoregressive sequence modeling has become the standard approach to modeling sequences in a variety of natural language processing applications (Aharoni et al., 2019; Brown et al., 2020; Roller et al., 2020) . In this modeling paradigm, the probability of a sequence is decomposed into the product of the conditional probability of each token given the previous tokens. Each conditional probability is modeled by a shared neural network, typically implemented as a recurrent neural network (Hochreiter and Schmidhuber, 1997) or a transformer (Vaswani et al., 2017) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 152, |
|
"end": 174, |
|
"text": "(Aharoni et al., 2019;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 175, |
|
"end": 194, |
|
"text": "Brown et al., 2020;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 195, |
|
"end": 215, |
|
"text": "Roller et al., 2020)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 498, |
|
"end": 532, |
|
"text": "(Hochreiter and Schmidhuber, 1997)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 550, |
|
"end": 572, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Despite its success, recent studies have identified peculiarities in neural autoregressive sequence models. Lee et al. (2018) identify hallucinations in neural machine translation, in which a well-trained model suddenly generates a nonsense translation when a rare token is artificially introduced to a source sentence. Stahlberg and Byrne (2019) observe that a vast portion of probability mass is concentrated on the empty sequence in neural machine translation, although the models they studied were never presented with empty sequences during training. Holtzman et al. (2019) report that large-scale language models often produce pathological sequences with many n-gram repetitions, at a rate which far exceeds that of the training data. Welleck et al. (2020a) show that neural language models can generate infinite-length sequences despite being trained on only finite sequences.", |
|
"cite_spans": [ |
|
{ |
|
"start": 108, |
|
"end": 125, |
|
"text": "Lee et al. (2018)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 320, |
|
"end": 346, |
|
"text": "Stahlberg and Byrne (2019)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 556, |
|
"end": 578, |
|
"text": "Holtzman et al. (2019)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 741, |
|
"end": 763, |
|
"text": "Welleck et al. (2020a)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "A common theme underlying these findings is that well-trained models can assign unreasonably high probabilities to sequences that are dissimilar to any sequence from the training set. In particular, the modes of the model's distribution appear to be undesired, implying that the model failed to recover the modes of the empirical distribution, which we term mode recovery degradation. The situation is further complicated by the fact that we only approximate the model's modes with a decoding algorithm, so it is unclear whether the decoding algorithm, the model, or even the data collection is at fault.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper, we isolate and study mode recovery degradation by characterizing each stage of neural sequence modeling as inducing a new sequence distribution, then directly analyzing each distribution's modes. With this approach, we diagnose at what stage, and to what extent, sequences receive unreasonably high probabilities. To do so, we first define a learning chain that consists of the ground-truth distribution, the empirical distribution induced by data collection, the learned distribution, and the decoding-induced distribution. We then quantify the extent to which the most probable sequences under each distribution match the most probable sequences under the ground-truth distribution by defining a mode recovery cost, which measures how expensive it is for a later distribution to recover the most probable sequences of an earlier distribution in the chain.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In summary, we find that mode recovery cost is non-trivial at each part of the neural autoregressive learning pipeline. The pattern of how mode recovery changes heavily depends on the properties of the ground-truth distribution. In particular, when the ground-truth distribution is parameterized as a product of highly structured distribution based on LSTM neural network and unstructured distribution where the probability of every sequence is sampled independently from all the others, its modes are more costly to recover. Furthermore, the ability of a decoding algorithm to recover modes is also dependent upon all choices made earlier in the chain including the underlying ground-truth distribution, even in the case of modes of the learned distribution. These observations make a meaningful step towards better understanding of mode degradation in neural autoregressive sequence modeling.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We consider the problem of modeling a distribution p * (s) over variable-length, discrete sequences s. Formally, s \u2208 \u03a3 l , where l \u2208 {1, 2, . . . , L}, \u03a3 is a finite set of tokens, and \u2126 \u2282 L l=1 \u03a3 l denotes the space of all possible sequences. Every sequence s \u2208 \u2126 ends with a special token eos \u2208 \u03a3 which only appears at the end of each sequence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Neural autoregressive sequence modeling", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In neural autoregressive sequence modeling, we model the distribution", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Neural autoregressive sequence modeling", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "p * (s) as p \u03b8 (s) = |s| t=1 p \u03b8 (s t |s <t )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Neural autoregressive sequence modeling", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": ", with each conditional distribution parameterized by a shared neural network.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Neural autoregressive sequence modeling", |
|
"sec_num": "2" |
|
}, |
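
{

"text": "To make the factorization concrete, the following is a minimal, hedged sketch (not the authors' code) of scoring a sequence under an autoregressive LSTM in PyTorch; the vocabulary size, hidden size, and the convention of reusing token id 0 as both bos and eos are illustrative assumptions.\n\nimport torch\nimport torch.nn as nn\n\n# Illustrative autoregressive model p_theta: token id 0 plays the role of eos.\nvocab_size, hidden = 7, 32\nembed = nn.Embedding(vocab_size, hidden)\nlstm = nn.LSTM(hidden, hidden, batch_first=True)\nout = nn.Linear(hidden, vocab_size)\n\ndef sequence_log_prob(s):\n    # s: 1-D LongTensor of token ids ending with eos.\n    # Returns log p_theta(s) = sum_t log p_theta(s_t | s_<t).\n    # Token id 0 is reused as a bos symbol to start the conditioning context.\n    inp = torch.cat([torch.zeros(1, dtype=torch.long), s[:-1]]).unsqueeze(0)\n    h, _ = lstm(embed(inp))\n    log_probs = out(h).log_softmax(dim=-1)            # (1, |s|, vocab)\n    return log_probs[0, torch.arange(len(s)), s].sum()\n\nprint(sequence_log_prob(torch.tensor([3, 5, 2, 0])))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Neural autoregressive sequence modeling",

"sec_num": "2"

},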
|
{ |
|
"text": "Maximum likelihood. To learn the model, we use maximum likelihood estimation (MLE) , which trains the model p \u03b8 to maximize the log-likelihood of a set of training sequences D = s 1 , . . . , s N :", |
|
"cite_spans": [ |
|
{ |
|
"start": 77, |
|
"end": 82, |
|
"text": "(MLE)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Neural autoregressive sequence modeling", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "arg max \u03b8 1 N N n=1 L n t=1 log p \u03b8 (s n t |s n <t ).", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Neural autoregressive sequence modeling", |
|
"sec_num": "2" |
|
}, |
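
{

"text": "A hedged sketch of one MLE update for Eq. (1) on a padded batch follows; the model, the pad id, and the per-token averaging (a standard surrogate for the per-sequence sum in Eq. (1)) are illustrative assumptions rather than the authors' exact setup.\n\nimport torch\nimport torch.nn as nn\n\n# Illustrative sizes; token id 7 is assumed to be a pad token and is masked out.\nvocab_size, hidden, pad_id = 8, 32, 7\nembed = nn.Embedding(vocab_size, hidden)\nlstm = nn.LSTM(hidden, hidden, batch_first=True)\nout = nn.Linear(hidden, vocab_size)\nparams = list(embed.parameters()) + list(lstm.parameters()) + list(out.parameters())\nopt = torch.optim.Adam(params, lr=1e-4)\n\ndef mle_step(batch):\n    # batch: (N, T) token ids; inputs are s_<t, targets are s_t as in Eq. (1).\n    inp, tgt = batch[:, :-1], batch[:, 1:]\n    h, _ = lstm(embed(inp))\n    logits = out(h)                                    # (N, T-1, vocab)\n    loss = nn.functional.cross_entropy(\n        logits.reshape(-1, vocab_size), tgt.reshape(-1), ignore_index=pad_id)\n    opt.zero_grad(); loss.backward(); opt.step()\n    return loss.item()\n\ntoy_batch = torch.tensor([[1, 3, 5, 2, 0, 7], [1, 4, 2, 0, 7, 7]])\nprint(mle_step(toy_batch))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Neural autoregressive sequence modeling",

"sec_num": "2"

},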
|
{ |
|
"text": "Approximate decoding. Given a trained model, we obtain a set of highly probable sequences. In practice, this problem is often intractable due to the size of \u2126, which grows exponentially in sequence length. As a result, we resort to approximating the optimization problem using a decoding algorithm that returns a set of k sequences F(p \u03b8 ; \u03b3), where F denotes the decoding algorithm, and \u03b3 denotes its hyper-parameters. Concretely, we consider two decoding approaches: a deterministic decoding algorithm that produces a set of sequences using beam search with beam-width k, and a stochastic decoding algorithm that forms a set of sequences using ancestral sampling until k unique sequences are obtained. 1 We refer readers to Welleck et al. (2020a) for detailed descriptions of those decoding algorithms.", |
|
"cite_spans": [ |
|
{ |
|
"start": 704, |
|
"end": 705, |
|
"text": "1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 726, |
|
"end": 748, |
|
"text": "Welleck et al. (2020a)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Neural autoregressive sequence modeling", |
|
"sec_num": "2" |
|
}, |
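
{

"text": "A hedged sketch of the stochastic decoding variant described above (ancestral sampling until k unique sequences are obtained) follows; the fixed toy conditional distribution stands in for a trained p_\u03b8 and is purely an illustrative assumption.\n\nimport torch\n\ndef next_token_probs(prefix):\n    # Toy stand-in for p_theta(s_t | s_<t): a fixed categorical over 7 token ids,\n    # with id 0 acting as eos. A trained model would be used here in practice.\n    return torch.tensor([0.25, 0.15, 0.15, 0.15, 0.10, 0.10, 0.10])\n\ndef ancestral_sample(max_len=10):\n    s = []\n    while len(s) < max_len:\n        t = torch.multinomial(next_token_probs(s), 1).item()\n        s.append(t)\n        if t == 0:                       # eos ends the sequence\n            break\n    return tuple(s)\n\ndef decode_stochastic(k=5):\n    # Sample until k unique sequences are collected: the candidate set F(p_theta; gamma).\n    unique = set()\n    while len(unique) < k:\n        unique.add(ancestral_sample())\n    return unique\n\nprint(decode_stochastic())",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Neural autoregressive sequence modeling",

"sec_num": "2"

},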
|
{ |
|
"text": "Learning chain. The neural autoregressive sequence modeling approach consists of four probability distributions, which together form a learning chain. The first distribution is the ground-truth distribution p * (s). This distribution is almost always unknown and is assumed to be highly complicated. Second, the dataset used in maximum likelihood (Eq. 1) determines an empirical distribution,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Neural autoregressive sequence modeling", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "p emp (s) = 1 |D| s \u2208D I(s = s ),", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Neural autoregressive sequence modeling", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "where D is a set of sequences drawn from the ground-truth distribution p * and I is the indicator function. The third distribution is the learned distribution p model captured by a neural autoregressive model trained on D.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Neural autoregressive sequence modeling", |
|
"sec_num": "2" |
|
}, |
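
{

"text": "A minimal sketch of the empirical distribution in Eq. (2): relative frequencies over a multiset of sampled sequences (the toy data below is an illustrative assumption).\n\nfrom collections import Counter\n\ndef empirical_distribution(dataset):\n    # p_emp(s) = (1/|D|) * sum_{s' in D} I(s = s'), i.e. relative frequencies.\n    counts = Counter(dataset)\n    n = len(dataset)\n    return {s: c / n for s, c in counts.items()}\n\n# Toy multiset of sequences (tuples of token ids ending with eos = 0).\nD = [(1, 2, 0), (1, 2, 0), (3, 0), (1, 2, 0), (3, 0), (2, 2, 1, 0)]\nprint(empirical_distribution(D))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Neural autoregressive sequence modeling",

"sec_num": "2"

},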
|
{ |
|
"text": "Finally, we introduce the decoding-induced distribution p F , which allows us to compare the set of probable sequences obtained with a decoding algorithm F against highly probable sequences in the ground-truth, empirical, and learned distributions. Specifically, we turn this set into the distribution", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Neural autoregressive sequence modeling", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "p F (s) = 1 Z p \u03b8 (s) s \u2208 F(p \u03b8 ; \u03b3), 0 s \u2208 F(p \u03b8 ; \u03b3),", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "Neural autoregressive sequence modeling", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "where Z = s \u2208F (p \u03b8 ;\u03b3) p \u03b8 (s ). Each sequence is weighted according to the model's probability, which reflects the practice of ordering and sampling beam search candidates by their probabilities.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Neural autoregressive sequence modeling", |
|
"sec_num": "2" |
|
}, |
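
{

"text": "A minimal sketch of Eq. (3): the decoding-induced distribution renormalizes the model probabilities over the decoded candidate set and assigns zero mass elsewhere (the toy probabilities and candidate set are illustrative assumptions).\n\ndef decoding_induced_distribution(model_prob, decoded_set):\n    # p_F(s) = p_theta(s) / Z on the candidate set, 0 elsewhere, with\n    # Z = sum of model probabilities over the candidate set (Eq. 3).\n    z = sum(model_prob[s] for s in decoded_set)\n    return {s: model_prob[s] / z for s in decoded_set}\n\n# Toy model probabilities and a decoded candidate set F(p_theta; gamma).\np_theta = {'aa': 0.4, 'ab': 0.3, 'b': 0.2, 'c': 0.1}\nF = {'aa', 'ab'}\nprint(decoding_induced_distribution(p_theta, F))   # {'aa': 0.571..., 'ab': 0.428...}",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Neural autoregressive sequence modeling",

"sec_num": "2"

},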
|
{ |
|
"text": "There is a natural order of dependencies among these four distributions in the learning chain, p * data collection p emp learning p model decoding p F . We are interested in how a distribution in the later part of the chain recovers the highly probable sequences of an earlier distribution. To study this, we next introduce the notion of mode recovery.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Neural autoregressive sequence modeling", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Mode sets We define a k-mode set as a set of top-k sequences under a given distribution:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mode recovery", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "S k (p) = argtop-k s\u2208\u2126 p(s).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mode recovery", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "argtop-k selects all the elements within \u2126 whose probabilities p(s) are greater than the probability assigned to the (k + 1)-st most likely sequence, which could result in fewer than k sequences. This is due to potentially having multiple sequences of the same probability.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mode recovery", |
|
"sec_num": "3" |
|
}, |
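
{

"text": "A hedged sketch of argtop-k with the tie-handling described above: only sequences strictly more probable than the (k + 1)-st most likely one are kept, so the set can contain fewer than k elements (the toy distribution is an illustrative assumption).\n\ndef k_mode_set(p, k):\n    # S_k(p): every element whose probability strictly exceeds that of the\n    # (k + 1)-st most likely one; ties can therefore shrink the set below k.\n    probs = sorted(p.values(), reverse=True)\n    if len(probs) <= k:\n        return set(p)\n    threshold = probs[k]               # probability of the (k + 1)-st element\n    return {s for s, q in p.items() if q > threshold}\n\np = {'a': 0.4, 'b': 0.2, 'c': 0.2, 'd': 0.2}\nprint(k_mode_set(p, 2))   # {'a'}: 'b', 'c', 'd' all tie with the 3rd most likely",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Mode recovery",

"sec_num": "3"

},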
|
{ |
|
"text": "Mode recovery cost. We characterize the recovery of the modes of the distribution p by the distribution q as the cost required to recover the k-mode set S k (p) using the distribution q. That is, how many likely sequences under q must be considered to recover all the sequences in the k-mode set of p.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mode recovery", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Formally, given a pair of distributions p and q, we define the k-mode recovery cost from p to q as", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mode recovery", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "O k (p q) = min k S k (p) \u2286 S k (q) . (4)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mode recovery", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The cost is minimized (= |S k (p)|) when the kmode set of q perfectly overlaps with that of p. The cost increases toward |\u2126| as the number of modes from q that must be considered to include the kmode set from p increases. The cost is maximized (=|\u2126|) when the top-k set S k (p) of p is not a subset of the support of the distribution q.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mode recovery", |
|
"sec_num": "3" |
|
}, |
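
{

"text": "A hedged sketch of the recovery cost in Eq. (4) on a small enumerable space follows, under the reading that O_k(p \u2225 q) is the size of the smallest mode set of q that contains S_k(p), and |\u2126| when the support of q does not cover S_k(p); all toy distributions are illustrative assumptions.\n\ndef top_set(p, k):\n    # Mode set with ties: everything strictly more probable than the (k + 1)-st item.\n    probs = sorted(p.values(), reverse=True)\n    if len(probs) <= k:\n        return set(p)\n    return {s for s, v in p.items() if v > probs[k]}\n\ndef mode_recovery_cost(p, q, k, omega_size):\n    # O_k(p || q): size of the smallest mode set of q containing S_k(p) (Eq. 4);\n    # |Omega| if S_k(p) is not a subset of the support of q.\n    target = top_set(p, k)\n    if not target <= set(q):\n        return omega_size\n    for kp in range(len(target), len(q) + 1):\n        cover = top_set(q, kp)\n        if target <= cover:\n            return len(cover)\n    return omega_size\n\n# Toy example over a 5-sequence space: the 2-mode set of p is {'a', 'b'}.\np = {'a': 0.5, 'b': 0.3, 'c': 0.1, 'd': 0.07, 'e': 0.03}\nq = {'a': 0.15, 'b': 0.2, 'c': 0.4, 'd': 0.15, 'e': 0.1}\nprint(mode_recovery_cost(p, q, 2, omega_size=5))   # 4: the top-4 set of q is needed",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Mode recovery",

"sec_num": "3"

},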
|
{ |
|
"text": "The limited support of q. As mentioned earlier, the mode recovery cost O k (p q) is ill-defined when the support of the distribution q, supp(q), is not a super-set of the k-mode set of the distribution p . In this situation, we say that the distribution q fails to recover modes from the k-mode set of the distribution p. In particular, this happens with decoding-induced distributions because of their limited support, which is equal to the size of the candidate set of sequences F(p \u03b8 , \u03b3).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mode recovery", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We introduce the k-mode set overlap I k (p q) = |S k (p) \u2229 supp(q)|, which equals the size of the intersection between the k-mode set of the distribution p and the support of the distribution q. The k-mode set overlap is maximized and equals |S k (p)| when the mode recovery is successful. We call it a recovery failure whenever the overlap is smaller than |S k (p)|. We use k-mode set overlap only when mode recovery fails, because it is not able to detect if the modes from the corresponding k-mode set have high probability under the induced distribution.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mode recovery", |
|
"sec_num": "3" |
|
}, |
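
{

"text": "A short companion sketch for the overlap I_k(p \u2225 q) = |S_k(p) \u2229 supp(q)| used when recovery fails (the toy values are illustrative assumptions).\n\ndef k_mode_overlap(p, q_support, k):\n    # I_k(p || q): size of the intersection of S_k(p) with the support of q;\n    # it equals |S_k(p)| exactly when recovery does not fail.\n    probs = sorted(p.values(), reverse=True)\n    top = set(p) if len(probs) <= k else {s for s, v in p.items() if v > probs[k]}\n    return len(top & set(q_support))\n\np = {'a': 0.5, 'b': 0.3, 'c': 0.2}\nprint(k_mode_overlap(p, {'a', 'c', 'd'}, 2))   # 1: mode 'b' lies outside supp(q)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Mode recovery",

"sec_num": "3"

},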
|
{ |
|
"text": "The recent success of neural sequence modeling has operated on the assumption that we can find sequences that are reasonably similar to training sequences by fitting a neural autoregressive model to maximize the log-probabilities of the training sequences (maximum-likelihood learning) and searching for the most likely sequences under the trained model (maximum a posteriori inference). However, recent studies suggest that the most likely sequences may not resemble training sequences at all. For instance, the learning stage can yield a distribution p model which places high probability on empty (Stahlberg and Byrne, 2019) or repetitive (Holtzman et al., 2019) sequences, while the decoding stage can yield a distribution p F which places non-zero mass on infinite-length sequences (Welleck et al., 2020a) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 600, |
|
"end": 627, |
|
"text": "(Stahlberg and Byrne, 2019)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 642, |
|
"end": 665, |
|
"text": "(Holtzman et al., 2019)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 787, |
|
"end": 810, |
|
"text": "(Welleck et al., 2020a)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Why do we study mode recovery?", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "As a result, various workarounds have been proposed in the form of alternative learning or decoding algorithms (Andor et al., 2016; Sountsov and Sarawagi, 2016; Murray and Chiang, 2018; Welleck et al., 2020b; Welleck and Cho, 2020; Martins et al., 2020; Deng et al., 2020; Basu et al., 2021; Shi et al., 2020) . A particularly relevant work by Eikema and Aziz (2020) argues that the modes of neural sequence models are inadequate and thus we must discard maximum-a-posteriori inference altogether. Rather than advocating for a particular solution, we instead seek an understanding of why the conventional approach displays these peculiar behaviors. While we do not claim to provide a full explanation, the first step is developing a way of quantifying the problem, then localizing it. To this end, we develop the mode recovery cost and measure it along the learning chain p * p emp p model p F . This focus on modes departs from the conventional focus on evaluating the full distribution with a probabilistic divergence.", |
|
"cite_spans": [ |
|
{ |
|
"start": 111, |
|
"end": 131, |
|
"text": "(Andor et al., 2016;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 132, |
|
"end": 160, |
|
"text": "Sountsov and Sarawagi, 2016;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 161, |
|
"end": 185, |
|
"text": "Murray and Chiang, 2018;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 186, |
|
"end": 208, |
|
"text": "Welleck et al., 2020b;", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 209, |
|
"end": 231, |
|
"text": "Welleck and Cho, 2020;", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 232, |
|
"end": 253, |
|
"text": "Martins et al., 2020;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 254, |
|
"end": 272, |
|
"text": "Deng et al., 2020;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 273, |
|
"end": 291, |
|
"text": "Basu et al., 2021;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 292, |
|
"end": 309, |
|
"text": "Shi et al., 2020)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 344, |
|
"end": 366, |
|
"text": "Eikema and Aziz (2020)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Why do we study mode recovery?", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Mode recovery vs. probabilistic divergence. Mode recovery is related to but distinct from a probabilistic divergence. Often a probabilistic divergence is designed to consider the full support of one of two distributions between which the divergence is computed. For each point within this support, a probabilistic divergence considers the ratio, or difference, between the actual probabilities/densities assigned by the two distributions. For instance, the KL divergence KL(p q) computes", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Why do we study mode recovery?", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "x\u223cp p(x) log p(x) q(x). Another example is the total variation (TV) distance, which is equivalent to \u03c9\u2208\u2126 |p(\u03c9) \u2212 q(\u03c9)|/2 when the sample set \u2126 is finite. The TV distance considers the entire sample set and computes the cumulative absolute difference between the probabilities assigned to each event by two distributions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Why do we study mode recovery?", |
|
"sec_num": "4" |
|
}, |
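
{

"text": "For concreteness, a minimal sketch of the two divergences mentioned above on a small finite space (the toy distributions are illustrative assumptions).\n\nimport math\n\ndef kl(p, q):\n    # KL(p || q) = sum_x p(x) * log(p(x) / q(x)) over the support of p.\n    return sum(px * math.log(px / q[x]) for x, px in p.items() if px > 0)\n\ndef tv(p, q):\n    # Total variation distance: 0.5 * sum_x |p(x) - q(x)| on a finite sample set.\n    keys = set(p) | set(q)\n    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in keys)\n\np = {'a': 0.7, 'b': 0.2, 'c': 0.1}\nq = {'a': 0.5, 'b': 0.3, 'c': 0.2}\nprint(kl(p, q), tv(p, q))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Why do we study mode recovery?",

"sec_num": "4"

},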
|
{ |
|
"text": "We find mode recovery more interesting than probabilistic divergence in this paper, because our goal is to check whether a decision rule, that is to (approximately) choose the most likely sequence based on an available distribution, changes as we follow the chain of induced distributions. Furthermore, we are not interested in how precisely unlikely sequences are modeled and what probabilities they are being assigned. We thus fully focus on mode recovery in this paper.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Why do we study mode recovery?", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "It is intractable to measure mode recovery cost (Eq. 4) on real-world datasets that are popular in neural sequence modeling, e.g. wikitext-103 (Merity et al., 2016) given the exponential growth of the sequence space with sequence length. For example, the training part of Wikitext-103 consists of 28k sequences with 3.5k tokens, each drawn from a vocabulary of 267k tokens. Furthermore, these datasets do not provide access to the ground-truth distribution, which prevents us from computing any recovery cost involving p * .", |
|
"cite_spans": [ |
|
{ |
|
"start": 143, |
|
"end": 164, |
|
"text": "(Merity et al., 2016)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A testbed for evaluating mode recovery", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In order to allow exact computations of mode recovery cost, we design a controllable testbed. This testbed consists of (1) the ground-truth distribution, which permits explicit control over the structuredness, (2) the data collection step, which controls the complexity of the empirical distribution, (3) the learning step, which allows us to induce the learned distribution with neural autoregressive models, and (4) the decoding step, where the decoding algorithm induces the approximation of the learned distribution. In the rest of this section we describe each distribution in detail.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A testbed for evaluating mode recovery", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We set the size of the sequence space of the testbed so that all computations are feasible. We limit the vocabulary size |\u03a3| to 7 tokens and use a maximum sequence length L of 10 tokens. This results in a sequence space size |\u2126| of around 12 million sequences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A testbed for evaluating mode recovery", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Ground-truth distribution. We define each ground-truth distribution as a product of two components:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A testbed for evaluating mode recovery", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "p * \u03b1 (s) \u221d p \u03b8 (s) \u03b1 p(s; \u00b5, \u03c3) (1\u2212\u03b1) ,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A testbed for evaluating mode recovery", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "where p \u03b8 (s) is an autoregressive distribution with parameters \u03b8. The probability p(s; \u00b5, \u03c3) is constructed by p(s; \u00b5, \u03c3) \u221d exp(x(s)), where x(s) \u223c Laplace(\u00b5, \u03c3) is a fixed random sample for each s, and \u03b1 \u2208 [0, 1].", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A testbed for evaluating mode recovery", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We implement p \u03b8 using a randomly initialized LSTM neural network, with two layers and 512 LSTM units in every layer. We build p(s; \u00b5, \u03c3) with \u00b5 = 0.0 and \u03c3 = 1.0.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A testbed for evaluating mode recovery", |
|
"sec_num": "5" |
|
}, |
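
{

"text": "A hedged reconstruction of this construction on a tiny toy sequence space follows; the fixed table standing in for the randomly initialized LSTM probabilities, the space itself, and the random seed are illustrative assumptions.\n\nimport numpy as np\n\nrng = np.random.default_rng(0)\nomega = ['aa', 'ab', 'ba', 'bb']                              # toy sequence space\np_struct = {'aa': 0.4, 'ab': 0.3, 'ba': 0.2, 'bb': 0.1}       # stand-in for the LSTM p_theta\nx = {s: rng.laplace(0.0, 1.0) for s in omega}                 # fixed Laplace(mu, sigma) samples\nw_unstr = {s: np.exp(x[s]) for s in omega}                    # p(s; mu, sigma) up to a constant\nz_u = sum(w_unstr.values())\np_unstr = {s: v / z_u for s, v in w_unstr.items()}            # unstructured component\n\ndef ground_truth(alpha):\n    # p*_alpha(s) proportional to p_struct(s)^alpha * p_unstr(s)^(1 - alpha).\n    w = {s: p_struct[s] ** alpha * p_unstr[s] ** (1.0 - alpha) for s in omega}\n    z = sum(w.values())\n    return {s: v / z for s, v in w.items()}\n\nprint(ground_truth(0.3))   # semi-structured; alpha = 1.0 and 0.0 give the two extremes",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "A testbed for evaluating mode recovery",

"sec_num": "5"

},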
|
{ |
|
"text": "We build the ground-truth distribution to reflect some properties of real data. First, real data has strong statistical dependencies among the tokens within each sequence. We induce these dependencies by assuming that each sequence is produced from left to right by generating each token conditioned on the previously generated sub-sequences of tokens. We implement this procedure using the LSTM neural network.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A testbed for evaluating mode recovery", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Second, there exist exceptional sequences in real data which receive high probability even though those sequences do not reflect statistical dependencies mentioned above. We build another distribution component in order to introduce exceptions in a way that there are no statistical dependencies in the given sequence. We use independent samples from a Laplace distribution as unnormalized probabilities of every sequence from the sequence space \u2126. We thus ensure that there are no statistical dependencies among the tokens under this unstructured distribution.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A testbed for evaluating mode recovery", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We thus construct the product of two distributions described above so that it exhibits structured and unstructured aspects of the generating process. The mixing coefficient \u03b1 allows us to interpolate between the heavily structured to heavily unstruc- tured ground-truth distributions. We call it semistructured when 0 < \u03b1 < 1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A testbed for evaluating mode recovery", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Empirical distribution. We create each empirical distribution p emp (Eq. 2) by drawing samples with replacement from the ground-truth distribution. We sample a training multi-set and a validation multi-set, then form the empirical distribution with their union . We denote the size of the training dataset as N train , and set the size of the validation set to .05 \u00d7 N train .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A testbed for evaluating mode recovery", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Learned distribution. We obtain each learned distribution p model by training an LSTM model on the training dataset D train using maximum likelihood (Eq. 1). We vary the complexity of the learned distribution using the number of LSTM units of every layer of the LSTM neural network from the set N model hs \u2208 {128, 512}. Variable-length sequences are padded with a pad token in order to form equal-length batches of 5120 sequences. We use the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 10 \u22124 . We compute validation loss every 5\u00d710 2 steps, and apply early stopping with a patience of 5 validation rounds based on increasing validation loss. We train the model for up to 2\u00d710 4 steps. After training, the checkpoint with the lowest validation loss is selected to parameterize the learned distribution p model .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A testbed for evaluating mode recovery", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Decoding-induced distribution. We form decoding-induced distributions (Eq. 3) using beam search and ancestral sampling. For beam search, we set N beam = 500. For ancestral sampling, we sample sequences and discard duplicates until a given number of unique sequences, N anc = 500, are obtained.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A testbed for evaluating mode recovery", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Randomness. To account for randomness that occurs when initializing the ground-truth distribution, sampling the empirical distribution, and using ancestral sampling during decoding, we run each configuration of the learning chain (i.e. ground- truth, empirical, learned, and decoding-induced distributions) with 10 different random seeds, and report the median and 25-th and 75-th quantiles, if available, of each evaluation metric.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A testbed for evaluating mode recovery", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We use our testbed to empirically study mode recovery degradation by measuring mode recovery cost in the data collection, learning, and decoding, stages of the learning chain. We use k \u2264 500.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mode recovery in the learning chain", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Data collection: recovering ground-truth modes with the empirical distribution. We start by asking: does mode degradation happen during data collection? We fix N train = 5 \u00d7 10 5 and compute mode recovery cost from the groundtruth distribution with the empirical distribution for the range of k \u2264 500 presented in Fig.1 using three configurations of ground-truth distributions. It shows that mode recovery cost grows as k increases. Furthermore, we observe different patterns of mode recovery cost given each choice of the ground-truth distribution.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 314, |
|
"end": 319, |
|
"text": "Fig.1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Mode recovery in the learning chain", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We observe distinct patterns of mode recovery with either structured (\u03b1 = 1.0) and unstructured (\u03b1 = 0.0) ground-truth distributions. We found that the structured ground-truth distribution assigns higher probabilities to shorter sequences because of LSTM neural network and autoregressive factorization. This implies that sequences which are sorted w.r.t. their probabilities are also sorted w.r.t. their lengths. Because of this property the empirical distribution can recover modes from the structured ground-truth distribution almost perfectly for particular k. In the case of the unstructured groundtruth distribution mode recovery cost is lower compared to other cases. This ground-truth distribution has no statistical dependencies within modes which makes it less interesting to us due to the lack of similarity with real data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mode recovery in the learning chain", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Finally, in the case of the semi-structured ground- truth distribution (\u03b1 = 0.3) the cost of recovering its modes grows increasingly as k increases. In other words, empirical distributions recover modes from ground-truth distributions less effectively when latter exhibit statistical dependencies as well as many exceptional sequence probabilities.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mode recovery in the learning chain", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Now we focus on the influence of the training set size N train on mode recovery during data collection. We fix k = 200 and compute mode recovery cost from the ground-truth distribution using the empirical distribution when N train \u2208 {10 5 , 5 \u00d7 10 5 , 10 6 , 5 \u00d7 10 6 , 10 7 }, shown in Fig.2 . Mode recovery cost naturally decreases as we increase the number of training instances as seen on the right-most side of Fig.2 . The left-most side is more interesting to us because it corresponds to values of N train that reflect real world problems. For instance, in the case of N train = 10 5 it is significantly more costly to recover modes from the semistructured ground-truth distribution compared to both structured and unstructured variants. We thus conclude that mode recovery degradation happens already during data collection, and that parameterization of ground-truth distributions impacts mode recovery cost.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 287, |
|
"end": 292, |
|
"text": "Fig.2", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 416, |
|
"end": 421, |
|
"text": "Fig.2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Mode recovery in the learning chain", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Learning: recovering modes with the learned distribution. The next stage in the chain is learning, p emp learning p model , in which we train a model using a training dataset with the expectation that the model will match the ground-truth distribution. Our experiments center on the question: how does mode recovery degradation in the learning stage compare to that of the data collection stage? For instance, we anticipate that the learned model will have a mode recovery cost that is at least as bad as that of the empirical distribution.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mode recovery in the learning chain", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We measure the mode recovery cost reduction log-rate from empirical to learned distributions, log Fig.3 shows the reduction log-rate as a function of k with fixed N train = 5 \u00d7 10 5 , for three different ground-truth distributions. We observe three different cases, with a clear dependency on what kind of data was used during learning.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 98, |
|
"end": 103, |
|
"text": "Fig.3", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Mode recovery in the learning chain", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "O k (p * \u03b1 pemp) O k (p * \u03b1 p model ) .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mode recovery in the learning chain", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Learning with data coming from the unstructured ground-truth distribution (\u03b1 = 0.0) results in mode recovery cost reduction log-rate being close to zero. This implies that the underlying LSTM model is able to memorize the unstructured data points coming from the empirical distribution, but it can not recover any other modes from the ground-truth distribution.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mode recovery in the learning chain", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "With the structured ground-truth distribution (\u03b1 = 1.0), we observe positive log-rate for some values of k. This means that the learned distribution is able to recover modes of the ground-truth distribution at a lower cost than the empirical distribution does. Similarly to data collection stage, this largely happens due to the property of LSTM to put high probabilities on short sequences. The learned distribution's ability in mode recovery goes above that of the empirical distribution when there is a match between the parameterization of models behind the ground-truth distribution and the learned distribution.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mode recovery in the learning chain", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "In the case of the semi-structured ground-truth distribution (\u03b1 = 0.3), the learned distribution has severe mode recovery degradation even with smaller values of k (left-most side of Fig.3) . The model is unable to perfectly learn an underlying dataset which has a few statistical exceptions within it.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 183, |
|
"end": 189, |
|
"text": "Fig.3)", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Mode recovery in the learning chain", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "In addition to our observations about recovering modes from ground-truth distributions, Fig.4 shows at what cost modes of each empirical distribution are recovered by the learned distribution as a function of N train . The learned distribution recovers modes of the empirical distribution with the highest cost when the latter was induced using the semi-structured ground-truth distribution. Mode recovery cost of all empirical distributions naturally decreases as number of training instances N train becomes unrealistically high. We conjecture that the combination of sequences with statistical dependencies and sequences which do not share any statistical dependencies in the dataset makes the learned distribution struggling at mode recovery from both ground-truth and empirical distributions.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 88, |
|
"end": 93, |
|
"text": "Fig.4", |
|
"ref_id": "FIGREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Mode recovery in the learning chain", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We conclude that properties of ground-truth distributions have direct impacts on the ability of the learned distributions to recover modes from groundtruth and empirical distributions. Learning struggles to capture all patterns from the underlying distributions when the latter exhibit exceptions in statistical dependencies within data points.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mode recovery in the learning chain", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Decoding: recovering modes with the decodinginduced distribution. The final stage in the learning chain is decoding, p model decoding p F , in which we use a decoding algorithm F to obtain highly-probable sequences. We study both a deterministic decoding algorithm, implemented using beam search, and a stochastic decoding algorithm, implemented using ancestral sampling. Our experiments are centered on two questions: (1) how do the choices made earlier in the learning chain affect the decoding behavior? and (2) how is this behavior affected by the choice of the decoding algorithm?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mode recovery in the learning chain", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We consider six different datasets that we train models on, each of which is a combination of the ground-truth distribution where \u03b1 \u2208 {0.0, 0.3, 1.0}, and the number of training points N train \u2208 {5 \u00d7 10 5 , 5 \u00d7 10 6 }. Our previous analysis revealed each of those datasets leads to a substantially different ability of the learned distributions to recover modes from earlier distributions along the learning chain. We set N model hs to be equal to 512. Our choice of decoding algorithms results in decoding-induced distributions with a limited support. Hence the induced distribution p F often fails to recover modes of distributions from the earlier stage of the chain especially as k increases. As we described in Sec. 3, we use the k-mode set overlap I k (\u2022 p F ) to examine the degree to which a given decoding algorithm F fails at mode recovery.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mode recovery in the learning chain", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "First, we study how well the decoding algorithm recovers modes from the learned distribution. Fig.5 shows k-mode set overlap between learned and decoding-induced distributions using both beam search (left) and ancestral sampling (right). Both algorithms fail increasingly more often as k increases.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 94, |
|
"end": 99, |
|
"text": "Fig.5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Mode recovery in the learning chain", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Ancestral sampling fails substantially more often than beam search. This is expected given that ancestral sampling was not designed to find highly probable sequences, unlike beam search. Both of these decoding algorithms fail to recover modes from the learned distribution most when the learned distribution was obtained using the semi-structured ground-truth distribution (\u03b1 = 0.3), regardless of the size of the dataset. In other words, the choices made earlier along the learning chain impact the decoding-induced distribution's ability to recover modes from the learned distribution, regardless of which decoding algorithm was used.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mode recovery in the learning chain", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Second, we investigate how the choice of the decoding algorithm influences the difference in how the decoding-induced distribution recovers modes of ground-truth and learned distributions. We thus look at the k-mode set overlap reduction from ground-truth to learned distributions (I k (p * \u03b1 p F )\u2212 I k (p model p F )) for both beam search and ancestral sampling. The positive overlap reduction in Fig.6 means that the decoding algorithm fails more to recover modes from the learned distribution than from the ground-truth distribution.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 399, |
|
"end": 404, |
|
"text": "Fig.6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Mode recovery in the learning chain", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Each decoding algorithm shows a different pattern of the overlap reduction. Reduction is more or less flat and is close to zero for ancestral sampling regardless of the choice of the dataset. It is, however, different with beam search where we have three observations. First, the reduction overlap deviates from zero as k increases. Second, with the semi-structured ground-truth distribution (\u03b1 = 0.3) the overlap deviates most, which is then followed by the unstructured variant (\u03b1 = 0.0). Third, the number of training points N train leads to significant difference in the case of the semi-structured distribution. Reduction overlap goes very negative with the smaller number of training instances, while the trend flips when we have ten times more data. We thereby conclude that the pattern of mode recovery degradation along the entire learning chain depends on the choice of the decoding algorithm.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mode recovery in the learning chain", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "In this paper, we studied the propensity of neural autoregressive sequence models to assign high probabilities to sequences that differ from those in the ground-truth distribution. To measure this phenomenon, we defined mode recovery cost, which measures a distribution's ability to recover the highly probable sequences of another distribution. Recovery succeeds Ntrain = 5 \u00d7 10 5 , = 0.0 Ntrain = 5 \u00d7 10 6 , = 0.0 Ntrain = 5 \u00d7 10 5 , = 0.3 Ntrain = 5 \u00d7 10 6 , = 0.3 Ntrain = 5 \u00d7 10 5 , = 1.0 Ntrain = 5 \u00d7 10 6 , = 1.0 Recovery succeeds Ntrain = 5 \u00d7 10 5 , = 0.0 Ntrain = 5 \u00d7 10 6 , = 0.0 Ntrain = 5 \u00d7 10 5 , = 0.3 Ntrain = 5 \u00d7 10 6 , = 0.3 Ntrain = 5 \u00d7 10 5 , = 1.0 Ntrain = 5 \u00d7 10 6 , = 1.0 Figure 5 : k-mode set overlap between the learned distribution and the decoding-induced distribution as a function of k. Choices made earlier in the learning chain (including ground-truth distribution, data collection and learning) affect the degree to which the decoding-induced distribution fails to recover modes from the learned distribution. Ntrain = 5 \u00d7 10 5 , = 0.0 Ntrain = 5 \u00d7 10 6 , = 0.0 Ntrain = 5 \u00d7 10 5 , = 0.3 Ntrain = 5 \u00d7 10 6 , = 0.3 Ntrain = 5 \u00d7 10 5 , = 1.0 Ntrain = 5 \u00d7 10 6 , = 1.0 Figure 6 : k-mode set overlap reduction from the ground-truth distribution to the learned distribution using the decoding-induced distribution as a function of k. The choice of the decoding algorithm affects the pattern of mode recovery degradation along the entire learning chain.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 694, |
|
"end": 702, |
|
"text": "Figure 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1197, |
|
"end": 1205, |
|
"text": "Figure 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "We developed a testbed for evaluating mode recovery cost throughout the entire learning chain.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "We provided evidence of non-trivial mode recovery cost within this testbed, and observed that the increase in the cost relies heavily on the structuredness of the ground-truth distribution. Mode recovery from earlier distributions was more costly along the learning chain when the ground-truth distribution was constructed as a product of fullystructured and fully-unstructured distributions such that it reflects patterns in real data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Mode recovery cost at each stage depended on all the choices made earlier at all the previous stages. The empirical distribution induced during data collection recovered modes from the ground-truth distribution imperfectly regardless of the dataset size. It was particularly high when we used the semi-structured ground-truth distribution. As expected, mode recovery cost was negatively correlated with a number of training instances.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Mode recovery after learning was directly affected by the choice of the ground-truth distribution as well. In general, the learned distribution failed to recover modes from the ground-truth distribution as well as the empirical distribution does. This trend flipped, however, when the learned distribution was parameterized identically to the ground-truth distribution. Distributions induced during decoding recovered modes of learned distributions with sig-nificantly different costs depending on all choices made at previous stages of the learning chain. The choice of decoding algorithm was also found to influence patterns of mode recovery cost. Based on these observations, we conclude that we have to use the entire learning chain to study mode recovery in neural autoregressive sequence modeling.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Future directions. We highlight three main directions of research based on our findings and conclusions. First, mode recovery along the learning chain must be studied in the context of real world problems. To do so, there is a need for future work on approximation schemes of mode recovery cost computable in real tasks. Second, the relationship between the ground-truth and learned distributions may be changed to better match real-world cases, for instance by considering structured ground-truth distributions that are less similar to the learned model family, or unstructured components that are informed by sequence content. Third, we have considered standard practices of neural autoregressive modeling while constructing the learning chain. Extending the learning chain to study the effects of new approaches such as knowledge distillation (Kim and Rush, 2016) or back translation (Sennrich et al., 2016) is another fruitful direction for future research.", |
|
"cite_spans": [ |
|
{ |
|
"start": 846, |
|
"end": 866, |
|
"text": "(Kim and Rush, 2016)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 887, |
|
"end": 910, |
|
"text": "(Sennrich et al., 2016)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Ancestral sampling recursively samples st \u223c p \u03b8 (st|s<t).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Massively multilingual neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Roee", |
|
"middle": [], |
|
"last": "Aharoni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Melvin", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Orhan", |
|
"middle": [], |
|
"last": "Firat", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1903.00089" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Roee Aharoni, Melvin Johnson, and Orhan Firat. 2019. Massively multilingual neural machine translation. arXiv preprint arXiv:1903.00089.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Globally normalized transition-based neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Andor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Alberti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Weiss", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aliaksei", |
|
"middle": [], |
|
"last": "Severyn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alessandro", |
|
"middle": [], |
|
"last": "Presta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kuzman", |
|
"middle": [], |
|
"last": "Ganchev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Slav", |
|
"middle": [], |
|
"last": "Petrov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "54th Annual Meeting of the Association for Computational Linguistics, ACL 2016 -Long Papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/p16-1231" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daniel Andor, Chris Alberti, David Weiss, Aliaksei Severyn, Alessandro Presta, Kuzman Ganchev, Slav Petrov, and Michael Collins. 2016. Globally normal- ized transition-based neural networks. In 54th An- nual Meeting of the Association for Computational Linguistics, ACL 2016 -Long Papers.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Mirostat: A neural text decoding algorithm that directly controls perplexity", |
|
"authors": [ |
|
{ |
|
"first": "Sourya", |
|
"middle": [], |
|
"last": "Basu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Govardana", |
|
"middle": [], |
|
"last": "Sachitanandam Ramachandran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nitish", |
|
"middle": [], |
|
"last": "Shirish Keskar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lav", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Varshney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sourya Basu, Govardana Sachitanandam Ramachan- dran, Nitish Shirish Keskar, and Lav R. Varshney. 2021. Mirostat: A neural text decoding algorithm that directly controls perplexity. In International Conference on Learning Representations.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Language models are few-shot learners", |
|
"authors": [ |
|
{ |
|
"first": "Tom", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "Brown", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Mann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nick", |
|
"middle": [], |
|
"last": "Ryder", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Melanie", |
|
"middle": [], |
|
"last": "Subbiah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jared", |
|
"middle": [], |
|
"last": "Kaplan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Prafulla", |
|
"middle": [], |
|
"last": "Dhariwal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arvind", |
|
"middle": [], |
|
"last": "Neelakantan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pranav", |
|
"middle": [], |
|
"last": "Shyam", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Girish", |
|
"middle": [], |
|
"last": "Sastry", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amanda", |
|
"middle": [], |
|
"last": "Askell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2005.14165" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. arXiv preprint arXiv:2005.14165.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Residual energybased models for text generation", |
|
"authors": [ |
|
{ |
|
"first": "Yuntian", |
|
"middle": [], |
|
"last": "Deng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anton", |
|
"middle": [], |
|
"last": "Bakhtin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Myle", |
|
"middle": [], |
|
"last": "Ott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arthur", |
|
"middle": [], |
|
"last": "Szlam", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marc'aurelio", |
|
"middle": [], |
|
"last": "Ranzato", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yuntian Deng, Anton Bakhtin, Myle Ott, Arthur Szlam, and Marc'Aurelio Ranzato. 2020. Residual energy- based models for text generation. In International Conference on Learning Representations.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Is MAP decoding all you need? the inadequacy of the mode in neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Bryan", |
|
"middle": [], |
|
"last": "Eikema", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wilker", |
|
"middle": [], |
|
"last": "Aziz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 28th International Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4506--4520", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.coling-main.398" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bryan Eikema and Wilker Aziz. 2020. Is MAP decod- ing all you need? the inadequacy of the mode in neu- ral machine translation. In Proceedings of the 28th International Conference on Computational Linguis- tics, pages 4506-4520, Barcelona, Spain (Online). International Committee on Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Long shortterm memory", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Hochreiter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Schmidhuber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Neural Computation", |
|
"volume": "9", |
|
"issue": "", |
|
"pages": "1735--1780", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S. Hochreiter and J. Schmidhuber. 1997. Long short- term memory. Neural Computation, 9:1735-1780.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "The curious case of neural text degeneration", |
|
"authors": [ |
|
{ |
|
"first": "Ari", |
|
"middle": [], |
|
"last": "Holtzman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jan", |
|
"middle": [], |
|
"last": "Buys", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Li", |
|
"middle": [], |
|
"last": "Du", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maxwell", |
|
"middle": [], |
|
"last": "Forbes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yejin", |
|
"middle": [], |
|
"last": "Choi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1904.09751" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2019. The curious case of neural text degeneration. arXiv preprint arXiv:1904.09751.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Sequencelevel knowledge distillation", |
|
"authors": [ |
|
{ |
|
"first": "Yoon", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Rush", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/d16-1139" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yoon Kim and Alexander M. Rush. 2016. Sequence- level knowledge distillation. Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Adam: A method for stochastic optimization", |
|
"authors": [ |
|
{ |
|
"first": "Diederik", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Kingma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jimmy", |
|
"middle": [], |
|
"last": "Ba", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1412.6980" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Hallucinations in neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Katherine", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Orhan", |
|
"middle": [], |
|
"last": "Firat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Agarwal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Clara", |
|
"middle": [], |
|
"last": "Fannjiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Sussillo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Katherine Lee, Orhan Firat, Ashish Agarwal, Clara Fannjiang, and David Sussillo. 2018. Hallucinations in neural machine translation.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Sparse text generation", |
|
"authors": [ |
|
{ |
|
"first": "Pedro", |
|
"middle": [ |
|
"Henrique" |
|
], |
|
"last": "Martins", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zita", |
|
"middle": [], |
|
"last": "Marinho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andr\u00e9", |
|
"middle": [ |
|
"F T" |
|
], |
|
"last": "Martins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4252--4273", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.emnlp-main.348" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pedro Henrique Martins, Zita Marinho, and Andr\u00e9 F. T. Martins. 2020. Sparse text generation. In Proceed- ings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4252-4273, Online. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Pointer sentinel mixture models", |
|
"authors": [ |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Merity", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Caiming", |
|
"middle": [], |
|
"last": "Xiong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Bradbury", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2016. Pointer sentinel mixture mod- els.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Correcting length bias in neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Murray", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Chiang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Third Conference on Machine Translation: Research Papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "212--223", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W18-6322" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kenton Murray and David Chiang. 2018. Correct- ing length bias in neural machine translation. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 212-223, Brus- sels, Belgium. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Recipes for building an open-domain chatbot", |
|
"authors": [ |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Roller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Emily", |
|
"middle": [], |
|
"last": "Dinan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Naman", |
|
"middle": [], |
|
"last": "Goyal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Da", |
|
"middle": [], |
|
"last": "Ju", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mary", |
|
"middle": [], |
|
"last": "Williamson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yinhan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jing", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Myle", |
|
"middle": [], |
|
"last": "Ott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kurt", |
|
"middle": [], |
|
"last": "Shuster", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eric", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2004.13637" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M Smith, et al. 2020. Recipes for building an open-domain chatbot. arXiv preprint arXiv:2004.13637.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Improving neural machine translation models with monolingual data", |
|
"authors": [ |
|
{ |
|
"first": "Rico", |
|
"middle": [], |
|
"last": "Sennrich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barry", |
|
"middle": [], |
|
"last": "Haddow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandra", |
|
"middle": [], |
|
"last": "Birch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/p16-1009" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation models with monolingual data. Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Why neural machine translation prefers empty outputs", |
|
"authors": [ |
|
{ |
|
"first": "Xing", |
|
"middle": [], |
|
"last": "Shi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yijun", |
|
"middle": [], |
|
"last": "Xiao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xing Shi, Yijun Xiao, and Kevin Knight. 2020. Why neural machine translation prefers empty outputs.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Length bias in encoder decoder models and a case for global conditioning", |
|
"authors": [ |
|
{ |
|
"first": "Pavel", |
|
"middle": [], |
|
"last": "Sountsov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sunita", |
|
"middle": [], |
|
"last": "Sarawagi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "EMNLP 2016 -Conference on Empirical Methods in Natural Language Processing, Proceedings", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/d16-1158" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pavel Sountsov and Sunita Sarawagi. 2016. Length bias in encoder decoder models and a case for global conditioning. In EMNLP 2016 -Conference on Em- pirical Methods in Natural Language Processing, Proceedings.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "On NMT search errors and model errors: Cat got your tongue?", |
|
"authors": [ |
|
{ |
|
"first": "Felix", |
|
"middle": [], |
|
"last": "Stahlberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bill", |
|
"middle": [], |
|
"last": "Byrne", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3354--3360", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-1331" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Felix Stahlberg and Bill Byrne. 2019. On NMT search errors and model errors: Cat got your tongue? In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 3354- 3360, Hong Kong, China. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Attention is all you need", |
|
"authors": [ |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Vaswani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noam", |
|
"middle": [], |
|
"last": "Shazeer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Niki", |
|
"middle": [], |
|
"last": "Parmar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jakob", |
|
"middle": [], |
|
"last": "Uszkoreit", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Llion", |
|
"middle": [], |
|
"last": "Jones", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aidan", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Gomez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lukasz", |
|
"middle": [], |
|
"last": "Kaiser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Illia", |
|
"middle": [], |
|
"last": "Polosukhin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1706.03762" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. arXiv preprint arXiv:1706.03762.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Mle-guided parameter search for task loss minimization in neural sequence modeling", |
|
"authors": [ |
|
{ |
|
"first": "Sean", |
|
"middle": [], |
|
"last": "Welleck", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sean Welleck and Kyunghyun Cho. 2020. Mle-guided parameter search for task loss minimization in neu- ral sequence modeling.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Consistency of a recurrent language model with respect to incomplete decoding", |
|
"authors": [ |
|
{ |
|
"first": "Sean", |
|
"middle": [], |
|
"last": "Welleck", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilia", |
|
"middle": [], |
|
"last": "Kulikov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jaedeok", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [ |
|
"Yuanzhe" |
|
], |
|
"last": "Pang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2002.02492" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sean Welleck, Ilia Kulikov, Jaedeok Kim, Richard Yuanzhe Pang, and Kyunghyun Cho. 2020a. Consistency of a recurrent language model with respect to incomplete decoding. arXiv preprint arXiv:2002.02492.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Neural text generation with unlikelihood training", |
|
"authors": [ |
|
{ |
|
"first": "Sean", |
|
"middle": [], |
|
"last": "Welleck", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilia", |
|
"middle": [], |
|
"last": "Kulikov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Roller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Emily", |
|
"middle": [], |
|
"last": "Dinan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Weston", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sean Welleck, Ilia Kulikov, Stephen Roller, Emily Di- nan, Kyunghyun Cho, and Jason Weston. 2020b. Neural text generation with unlikelihood training. In International Conference on Learning Representa- tions.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "Mode recovery cost of the empirical distribution from the ground-truth distribution as a function of k while N train = 5 \u00d7 10 5 .", |
|
"num": null |
|
}, |
|
"FIGREF1": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "Mode recovery cost of the empirical distribution from the ground-truth distribution as a function of N train while k = 200.", |
|
"num": null |
|
}, |
|
"FIGREF2": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "Mode recovery cost reduction log-rate between empirical and learned distributions from the ground-truth distribution as a function of k while N train = 5 \u00d7 10 5 .", |
|
"num": null |
|
}, |
|
"FIGREF3": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "Mode recovery cost of the learned distribution from the empirical distribution as a function of N train while k = 200.", |
|
"num": null |
|
} |
|
} |
|
} |
|
} |