|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T13:29:47.151374Z" |
|
}, |
|
"title": "On the Computational Power of Transformers and its Implications in Sequence Modeling", |
|
"authors": [ |
|
{ |
|
"first": "Satwik", |
|
"middle": [], |
|
"last": "Bhattamishra", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Microsoft Research", |
|
"location": { |
|
"country": "India" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Arkil", |
|
"middle": [], |
|
"last": "Patel", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Microsoft Research", |
|
"location": { |
|
"country": "India" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Navin", |
|
"middle": [], |
|
"last": "Goyal", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Microsoft Research", |
|
"location": { |
|
"country": "India" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Transformers are being used extensively across several sequence modeling tasks. Significant research effort has been devoted to experimentally probe the inner workings of Transformers. However, our conceptual and theoretical understanding of their power and inherent limitations is still nascent. In particular, the roles of various components in Transformers such as positional encodings, attention heads, residual connections, and feedforward networks, are not clear. In this paper, we take a step towards answering these questions. We analyze the computational power as captured by Turing-completeness. We first provide an alternate and simpler proof to show that vanilla Transformers are Turing-complete and then we prove that Transformers with only positional masking and without any positional encoding are also Turing-complete. We further analyze the necessity of each component for the Turing-completeness of the network; interestingly, we find that a particular type of residual connection is necessary. We demonstrate the practical implications of our results via experiments on machine translation and synthetic tasks.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Transformers are being used extensively across several sequence modeling tasks. Significant research effort has been devoted to experimentally probe the inner workings of Transformers. However, our conceptual and theoretical understanding of their power and inherent limitations is still nascent. In particular, the roles of various components in Transformers such as positional encodings, attention heads, residual connections, and feedforward networks, are not clear. In this paper, we take a step towards answering these questions. We analyze the computational power as captured by Turing-completeness. We first provide an alternate and simpler proof to show that vanilla Transformers are Turing-complete and then we prove that Transformers with only positional masking and without any positional encoding are also Turing-complete. We further analyze the necessity of each component for the Turing-completeness of the network; interestingly, we find that a particular type of residual connection is necessary. We demonstrate the practical implications of our results via experiments on machine translation and synthetic tasks.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Transformer (Vaswani et al., 2017 ) is a recent selfattention based sequence-to-sequence architecture which has led to state of the art results across various NLP tasks including machine translation (Ott et al., 2018) , language modeling (Radford et al., 2018) and question answering (Devlin et al., 2019) . Although a number of variants of Transformers have been proposed, the original architecture still underlies these variants.", |
|
"cite_spans": [ |
|
{ |
|
"start": 12, |
|
"end": 33, |
|
"text": "(Vaswani et al., 2017", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 199, |
|
"end": 217, |
|
"text": "(Ott et al., 2018)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 238, |
|
"end": 260, |
|
"text": "(Radford et al., 2018)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 284, |
|
"end": 305, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "While the training and generalization of machine learning models such as Transformers are the central goals in their analysis, an essential prerequisite to this end is characterization of the computational", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
|
{ |
|
"text": "Figure 1: (a) Self-Attention Network with positional encoding, (b) Self-Attention Network with positional masking without any positional encoding power of the model: training a model for a certain task cannot succeed if the model is computationally incapable of carrying out the task. While the computational capabilities of recurrent networks (RNNs) have been studied for decades (Kolen and Kremer, 2001; Siegelmann, 2012) , for Transformers we are still in the early stages. The celebrated work of Siegelmann and Sontag (1992) showed, assuming arbitrary precision, that RNNs are Turing-complete, meaning that they are capable of carrying out any algorithmic task formalized by Turing machines. Recently, P\u00e9rez et al. (2019) have shown that vanilla Transformers with hard-attention can also simulate Turing machines given arbitrary precision. However, in contrast to RNNs, Transformers consist of several components and it is unclear which components are necessary for its Turing-completeness and thereby crucial to its computational expressiveness.", |
|
"cite_spans": [ |
|
{ |
|
"start": 381, |
|
"end": 405, |
|
"text": "(Kolen and Kremer, 2001;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 406, |
|
"end": 423, |
|
"text": "Siegelmann, 2012)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 706, |
|
"end": 725, |
|
"text": "P\u00e9rez et al. (2019)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The role of various components of the Transformer in its efficacy is an important question for further improvements. Since the Transformer does not process the input sequentially, it requires some form of positional information. Various positional encoding schemes have been proposed to capture order information (Shaw et al., 2018; Dai et al., 2019; Huang et al., 2018) . At the same time, on machine translation, showed that the performance of Transformers with only positional masking (Shen et al., 2018) is comparable to that with positional encodings. In case of positional masking ( Fig. 1) , as opposed to explicit encodings, the model is only allowed to attend over preceding inputs and no additional positional encoding vector is combined with the input vector. Tsai et al. (2019) raised the question of whether explicit encoding is necessary if positional masking is used. Additionally, since P\u00e9rez et al. (2019) 's Turingcompleteness proof relied heavily on residual connections, they asked whether these connections are essential for Turing-completeness. In this paper, we take a step towards answering such questions. Below, we list the main contributions of the paper,", |
|
"cite_spans": [ |
|
{ |
|
"start": 313, |
|
"end": 332, |
|
"text": "(Shaw et al., 2018;", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 333, |
|
"end": 350, |
|
"text": "Dai et al., 2019;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 351, |
|
"end": 370, |
|
"text": "Huang et al., 2018)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 488, |
|
"end": 507, |
|
"text": "(Shen et al., 2018)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 771, |
|
"end": 789, |
|
"text": "Tsai et al. (2019)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 903, |
|
"end": 922, |
|
"text": "P\u00e9rez et al. (2019)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 589, |
|
"end": 596, |
|
"text": "Fig. 1)", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 We provide an alternate and arguably simpler proof to show that Transformers are Turingcomplete by directly relating them to RNNs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 More importantly, we prove that Transformers with positional masking and without positional encoding are also Turing-complete.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 We analyze the necessity of various components such as self-attention blocks, residual connections and feedforward networks for Turing-completeness. Figure 2 provides an overview.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 151, |
|
"end": 159, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 We explore implications of our results on machine translation and synthetic tasks. 1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Computational Power of neural networks has been studied since the foundational paper Mc-Culloch and Pitts (1943) ; in particular, among sequence-to-sequence models, this aspect of RNNs has long been studied (Kolen and Kremer, 2001 ). The seminal work by Siegelmann and Sontag (1992) showed that RNNs can simulate a Turing machine by using unbounded precision. Chen et al. (2018) showed that RNNs with ReLU activations are also Turing-complete. Many recent works have explored the computational power of RNNs in practical settings. Several works (Merrill et al., 2020) , (Weiss et al., 2018) recently studied the ability of RNNs to recognize counter-like languages. The capability of RNNs to recognize strings of balanced parantheses has also been studied (Sennhauser and Berwick, 2018; Skachkova et al., 2018) . However, such analysis on Transformers has been scarce. Theoretical work on Transformers was initiated by P\u00e9rez et al. (2019) who formalized the notion of Transformers and showed that it can simulate a Turing machine given arbitrary precision. Concurrent to our work, there have been several efforts to understand self-attention based models (Levine et al., 2020; Kim et al., 2020) . Hron et al. (2020) show that Transformers behave as Gaussian processes when the number of heads tend to infinity. Hahn (2020) showed some limitations of Transformer encoders in modeling regular and context-free languages. It has been recently shown that Transformers are universal approximators of sequence-tosequence functions given arbitrary precision (Yun et al., 2020) . However, these are not applicable 2 to the complete Transformer architecture. With a goal similar to ours, Tsai et al. (2019) attempted to study the attention mechanism via a kernel formulation. However, a systematic study of various components of Transformers has not been done.", |
|
"cite_spans": [ |
|
{ |
|
"start": 85, |
|
"end": 112, |
|
"text": "Mc-Culloch and Pitts (1943)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 207, |
|
"end": 230, |
|
"text": "(Kolen and Kremer, 2001", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 254, |
|
"end": 282, |
|
"text": "Siegelmann and Sontag (1992)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 360, |
|
"end": 378, |
|
"text": "Chen et al. (2018)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 545, |
|
"end": 567, |
|
"text": "(Merrill et al., 2020)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 570, |
|
"end": 590, |
|
"text": "(Weiss et al., 2018)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 755, |
|
"end": 785, |
|
"text": "(Sennhauser and Berwick, 2018;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 786, |
|
"end": 809, |
|
"text": "Skachkova et al., 2018)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 918, |
|
"end": 937, |
|
"text": "P\u00e9rez et al. (2019)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 1154, |
|
"end": 1175, |
|
"text": "(Levine et al., 2020;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1176, |
|
"end": 1193, |
|
"text": "Kim et al., 2020)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 1196, |
|
"end": 1214, |
|
"text": "Hron et al. (2020)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 1310, |
|
"end": 1321, |
|
"text": "Hahn (2020)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 1550, |
|
"end": 1568, |
|
"text": "(Yun et al., 2020)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 1678, |
|
"end": 1696, |
|
"text": "Tsai et al. (2019)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "All the numbers used in our computations will be from the set of rational numbers denoted Q. For a sequence X = (x 1 , . . . , x n ), we set X j := (x 1 , . . . , x j ) for 1 \u2264 j \u2264 n. We will work with an alphabet \u03a3 of size m, with special symbols # and $ signifying the beginning and end of the input sequence, respectively. The symbols are mapped to vectors via a given 'base' embedding f b : \u03a3 \u2192 Q d b , where d b is the dimension of the embedding. E.g., this embedding could be the one used for processing the symbols by the RNN. We set f b (#) = 0", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definitions and Preliminaries", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "d b and f b ($) = 0 d b . Posi- tional encoding is a function pos : N \u2192 Q d b .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definitions and Preliminaries", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Together, these provide embedding for a symbol s at position i given by f (f b (s), pos(i)), often taken to be simply f b (s) + pos(i). Vector s \u2208 Q m denotes one-hot encoding of a symbol s \u2208 \u03a3.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definitions and Preliminaries", |
|
"sec_num": "3" |
|
}, |
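To make the embedding setup concrete, here is a minimal sketch in Python with NumPy (both choices are ours; the paper fixes no implementation). The toy alphabet, the random base embedding and the particular pos function are illustrative assumptions; only the combination f(f_b(s), pos(i)) = f_b(s) + pos(i) and the zero embedding for # and $ follow the text.

```python
import numpy as np

d_b = 4                                  # base embedding dimension (illustrative)
alphabet = ["#", "a", "b", "$"]          # toy alphabet with begin/end markers

rng = np.random.default_rng(0)
f_b = {s: rng.standard_normal(d_b) for s in alphabet}
f_b["#"] = np.zeros(d_b)                 # the text sets f_b(#) and f_b($) to 0
f_b["$"] = np.zeros(d_b)

def pos(i: int) -> np.ndarray:
    # One possible positional encoding pos: N -> Q^{d_b}; the paper leaves pos abstract.
    return np.array([float(i), 1.0, 0.0, 0.0])

def embed(s: str, i: int) -> np.ndarray:
    # f(f_b(s), pos(i)), taken to be the sum, as in the text.
    return f_b[s] + pos(i)

X = [embed(s, i + 1) for i, s in enumerate(["#", "a", "b", "$"])]
```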
|
{ |
|
"text": "We follow Siegelmann and Sontag (1992) in our definition of RNNs. To feed the sequences s 1 s 2 . . . s n \u2208 \u03a3 * to the RNN, these are converted to the vectors", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "RNNs", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "x 1 , x 2 , . . . , x n where x i = f b (s i ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "RNNs", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The RNN is given by the recurrence", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "RNNs", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "h t = g(W h h t\u22121 + W x x t + b), where t \u2265 1, function g(\u2022) is a multilayer feedforward network (FFN) with activation \u03c3, bias vector b \u2208 Q d h , matrices W h \u2208 Q d h \u00d7d h and W x \u2208 Q d h \u00d7d b , and h t \u2208 Q d h", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "RNNs", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "is the hidden state with given initial hidden state h 0 ; d h is the hidden state dimension.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "RNNs", |
|
"sec_num": "3.1" |
|
}, |
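A minimal sketch of this recurrence, assuming the saturated linear activation of Sec. 3.2 and a single-layer g (the paper allows a multilayer FFN); NumPy, the random weights and the dimensions are our own illustrative choices.

```python
import numpy as np

def sat_linear(x):
    # Saturated linear activation: 0 for x < 0, x for 0 <= x <= 1, 1 for x > 1
    # (equivalently ReLU(x) - ReLU(x - 1)).
    return np.clip(x, 0.0, 1.0)

def rnn_step(h_prev, x_t, W_h, W_x, b):
    # One step of h_t = g(W_h h_{t-1} + W_x x_t + b); g is a single
    # saturated-linear layer here, though the paper allows a multilayer FFN.
    return sat_linear(W_h @ h_prev + W_x @ x_t + b)

d_h, d_b = 3, 2
rng = np.random.default_rng(0)
W_h = rng.standard_normal((d_h, d_h))
W_x = rng.standard_normal((d_h, d_b))
b = np.zeros(d_h)

h = np.zeros(d_h)                                    # h_0
for x_t in (rng.standard_normal(d_b) for _ in range(4)):
    h = rnn_step(h, x_t, W_h, W_x, b)                # run the recurrence
```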
|
{ |
|
"text": "After the last symbol s n has been fed, we continue to feed the RNN with the terminal symbol f b ($) until it halts. This allows the RNN to carry out computation after having read the input.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "RNNs", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "A class of seq-to-seq neural networks is Turingcomplete if the class of languages recognized by the networks is exactly the class of languages recognized by Turing machines.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "RNNs", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Theorem 3.1. (Siegelmann and Sontag, 1992) Any seq-to-seq function \u03a3 * \u2192 \u03a3 * computable by a Turing machine can also be computed by an RNN.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "RNNs", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "For details please see section B.1 in appendix.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "RNNs", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Vanilla Transformer. We describe the original Transformer architecture with positional encoding (Vaswani et al., 2017) as formalized by P\u00e9rez et al. (2019) , with some modifications. All vectors in this subsection are from Q d .", |
|
"cite_spans": [ |
|
{ |
|
"start": 96, |
|
"end": 118, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 136, |
|
"end": 155, |
|
"text": "P\u00e9rez et al. (2019)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Transformer Architecture", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The transformer, denoted Trans, is a seq-to-seq architecture. Its input consists of (i) a sequence X = (x 1 , . . . , x n ) of vectors, (ii) a seed vector y 0 . The output is a sequence Y = (y 1 , . . . , y r ) of vectors. The sequence X is obtained from the sequence (s 1 , . . . , s n ) \u2208 \u03a3 n of symbols by using the embedding mentioned earlier:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Transformer Architecture", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "x i = f (f b (s i ), pos(i)).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Transformer Architecture", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The transformer consists of composition of transformer encoder and transformer decoder. For the feedforward networks in the transformer layers we use the activation as in Siegelmann and Sontag (1992) , namely the saturated linear activation function \u03c3(x) which takes value 0 for x < 0, value x for 0 < x < 1 and value 1 for x > 1. This activation can be easily replaced by the standard ReLU activation via \u03c3(x) = ReLU(x) \u2212 ReLU(x \u2212 1). Self-attention. The self-attention mechanism takes as input (i) a query vector q, (ii) a sequence of key vectors K = (k 1 , . . . , k n ), and (iii) a sequence of value vectors", |
|
"cite_spans": [ |
|
{ |
|
"start": 171, |
|
"end": 199, |
|
"text": "Siegelmann and Sontag (1992)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Transformer Architecture", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "V = (v 1 , . . . , v n ). The q-attention over K and V , denoted Att(q, K, V ), is a vector a = \u03b1 1 v 1 +\u03b1 2 v 2 +\u2022 \u2022 \u2022+\u03b1 n v n , where (i) (\u03b1 1 , . . . , \u03b1 n ) = \u03c1(f att (q, k 1 ), . . . , f att (q, k n )). (ii) The normalization function \u03c1 : Q n \u2192 Q n \u22650", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Transformer Architecture", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "is hardmax: for x = (x 1 , . . . , x n ) \u2208 Q n , if the maximum value occurs r times among x 1 , . . . , x n , then hardmax(x) i := 1/r if x i is a maximum value and hardmax(x) i := 0 otherwise. In practice, the softmax is often used but its output values are in general not rational. (iii) For vanilla transformers, the scoring function f att used is a combination of multiplicative attention (Vaswani et al., 2017 ) and a non-linear function: f att (q, k i ) = \u2212 q, k i . This was also used by P\u00e9rez et al. (2019) . Transformer encoder. A single-layer encoder is a function Enc(X; \u03b8), with input X = (x 1 , . . . , x n ) a sequence of vectors in Q d , and parameters \u03b8. The output is another sequence", |
|
"cite_spans": [ |
|
{ |
|
"start": 394, |
|
"end": 415, |
|
"text": "(Vaswani et al., 2017", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 496, |
|
"end": 515, |
|
"text": "P\u00e9rez et al. (2019)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Transformer Architecture", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Z = (z 1 , . . . , z n ) of vectors in Q d . The parame- ters \u03b8 specify functions Q(\u2022), K(\u2022), V (\u2022), and O(\u2022), all of type Q d \u2192 Q d . The functions Q(\u2022), K(\u2022)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Transformer Architecture", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": ", and V (\u2022) are linear transformations and O(\u2022) an FFN. For 1 \u2264 i \u2264 n, the output of the self-attention block is produced by", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Transformer Architecture", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "a i = Att(Q(x i ), K(X), V (X)) + x i (1)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Transformer Architecture", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "This operation is also referred to as the encoderencoder attention block. The output Z is computed by z i = O(a i ) + a i for 1 \u2264 i \u2264 n. The addition operations +x i and +a i are the residual connections. The complete L-layer transformer encoder TEnc (L) (X; \u03b8) = (K e , V e ) has the same input X = (x 1 , . . . , x n ) as the single-layer encoder. In contrast, its output K e = (k e 1 , . . . , k e n ) and V e = (v e 1 , . . . v e n ) contains two sequences. TEnc (L) is obtained by composition of L singlelayer encoders: let X (0) := X, and for 0 \u2264 \u2264 L \u2212 1, let X ( +1) = Enc(X ( ) ; \u03b8 ) and finally,", |
|
"cite_spans": [ |
|
{ |
|
"start": 467, |
|
"end": 470, |
|
"text": "(L)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Transformer Architecture", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "K e = K (L) (X (L) ), V e = V (L) (X (L) ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Transformer Architecture", |
|
"sec_num": "3.2" |
|
}, |
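The following sketch puts the pieces above together: hardmax normalization, the scoring function f_att(q, k) = \u2212|\u27e8q, k\u27e9|, Att(q, K, V), and a single encoder layer with its two residual connections. It is an illustration under our own simplifications (Python/NumPy, random parameters, a one-layer O(\u00b7)), not the paper's construction.

```python
import numpy as np

def hardmax(scores):
    # hardmax: weight 1/r on each of the r maximal scores, 0 elsewhere.
    w = (scores == scores.max()).astype(float)
    return w / w.sum()

def f_att(q, k):
    # Vanilla-Transformer scoring function: -|<q, k>|.
    return -abs(float(q @ k))

def att(q, K, V):
    # Att(q, K, V) = sum_i alpha_i v_i, with alpha = hardmax of the scores.
    alpha = hardmax(np.array([f_att(q, k) for k in K]))
    return sum(a * v for a, v in zip(alpha, V))

def enc_layer(X, Wq, Wk, Wv, O):
    # Single-layer encoder: self-attention + residual, then FFN O(.) + residual.
    A = [att(Wq @ x, [Wk @ xj for xj in X], [Wv @ xj for xj in X]) + x for x in X]
    return [O(a) + a for a in A]

d = 4
rng = np.random.default_rng(0)
Wq, Wk, Wv, Wo = (rng.standard_normal((d, d)) for _ in range(4))
O = lambda a: np.clip(Wo @ a, 0.0, 1.0)      # O(.): one saturated-linear layer here
X = [rng.standard_normal(d) for _ in range(5)]
Z = enc_layer(X, Wq, Wk, Wv, O)
```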
|
{ |
|
"text": "Transformer decoder. The input to a singlelayer decoder is (i) (K e , V e ) output by the encoder, and (ii) sequence Y = (y 1 , . . . , y k ) of vectors for k \u2265 1. The output is another sequence Z = (z 1 , . . . , z k ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Transformer Architecture", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Similar to the single-layer encoder, a singlelayer decoder is parameterized by functions", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Transformer Architecture", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "Q(\u2022), K(\u2022), V (\u2022) and O(\u2022) and is defined by p t = Att(Q(y t ), K(Y t ), V (Y t )) + y t , (2) a t = Att(p t , K e , V e ) + p t ,", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "Transformer Architecture", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "z t = O(a t ) + a t ,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Transformer Architecture", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "where 1 \u2264 t \u2264 k. The operation in (2) will be referred to as the decoder-decoder attention block and the operation in (3) as the decoder-encoder attention block. In (2), positional masking is applied to prevent the network from attending over symbols which are ahead of them.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Transformer Architecture", |
|
"sec_num": "3.2" |
|
}, |
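A sketch of one decoder position implementing (2), (3) and z_t = O(a_t) + a_t with hard attention; positional masking in (2) is realized simply by attending only over the prefix Y_t. Function and parameter names are ours, and the attention helper repeats the hard-attention sketch above.

```python
import numpy as np

def hardmax_att(q, K, V, score=lambda q, k: -abs(float(q @ k))):
    # Hard attention as above: average the value vectors whose keys attain
    # the maximal score.
    s = np.array([score(q, k) for k in K])
    w = (s == s.max()).astype(float)
    w /= w.sum()
    return sum(wi * vi for wi, vi in zip(w, V))

def dec_layer_step(Y_t, K_e, V_e, Wq, Wk, Wv, O):
    # One decoder position t, with Y_t = (y_1, ..., y_t); positional masking in (2)
    # amounts to attending only over this prefix.
    y_t = Y_t[-1]
    p_t = hardmax_att(Wq @ y_t, [Wk @ y for y in Y_t], [Wv @ y for y in Y_t]) + y_t  # (2)
    a_t = hardmax_att(p_t, K_e, V_e) + p_t                                           # (3)
    return O(a_t) + a_t                                                              # z_t
```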
|
{ |
|
"text": "An L-layer Transformer decoder TDec L ((K e , V e ), Y ; \u03b8) = z is obtained by repeated application of L single-layer decoders each with its own parameters, and a transformation function F : Q d \u2192 Q d applied to the last vector in the sequence of vectors output by the final decoder. Formally, for 0 \u2264 \u2264 L\u22121 and Y 0 := Y we have", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Transformer Architecture", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Y +1 = Dec((K e , V e ), Y ; \u03b8 ), z = F (y L k )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Transformer Architecture", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": ". Note that while the output of a single-layer decoder is a sequence of vectors, the output of an L-layer Transformer decoder is a single vector. The complete Transformer.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Transformer Architecture", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The output Trans(X, y 0 ) = Y is computed by the recurrence\u1ef9 t+1 = TDec(TEnc(X), (y 0 , y 1 , . . . , y t )), for 0 \u2264 t \u2264 r \u2212 1. We get y t+1 by adding positional encoding: y t+1 =\u1ef9 t+1 + pos(t + 1). Directional Transformer. We denote the Transformer with only positional masking and no positional encodings as Directional Transformer and use them interchangeably. In this case, we use standard multiplicative attention as the scoring function in our construction, i.e, f att (q, k i ) = q, k i . The general architecture is the same as for the vanilla case; the differences due to positional masking are the following.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Transformer Architecture", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "There are no positional encodings. So the input vectors", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Transformer Architecture", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "x i only involve f b (s i ). Simi- larly, y t =\u1ef9 t . In (1), Att(\u2022) is replaced by Att(Q(x i ), K(X i ), V (X i )) where X i := (x 1 , . . . , x i ) for 1 \u2264 i \u2264 n. Similarly, in (3), Att(\u2022) is replaced by Att(p t , K e t , V", |
|
"eq_num": "e" |
|
} |
|
], |
|
"section": "Transformer Architecture", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "t ). Remark 1. Our definitions deviate slightly from practice, hard-attention being the main one since hardmax keeps the values rational whereas softmax takes the values to irrational space. Previous studies have shown that soft-attention behaves like hard-attention in practice and Hahn (2020) discusses its practical relevance. Remark 2. Transformer Networks with positional encodings are not necessarily equivalent in terms of their computational expressiveness (Yun et al., 2020) to those with only positional masking when considering the encoder only model (as used in BERT and GPT-2). Our results in Section 4.1 show their equivalence in terms of expressiveness for the complete seq-to-seq architecture.", |
|
"cite_spans": [ |
|
{ |
|
"start": 465, |
|
"end": 483, |
|
"text": "(Yun et al., 2020)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Transformer Architecture", |
|
"sec_num": "3.2" |
|
}, |
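Before moving to the results, here is a sketch of the outer seq-to-seq recurrence \u1ef9_{t+1} = TDec(TEnc(X), (y_0, . . . , y_t)) described above, with a flag for the directional variant (which adds no positional encoding to the decoder outputs). Here t_enc, t_dec, pos and r are stand-ins for components sketched earlier; this is an assumption-level illustration, not the exact parameterization.

```python
def transformer_generate(X, y0, t_enc, t_dec, pos, r, directional=False):
    # Unrolls y~_{t+1} = TDec(TEnc(X), (y_0, ..., y_t)) for r steps; t_enc, t_dec
    # and pos are stand-ins for the encoder, decoder and positional encoding.
    K_e, V_e = t_enc(X)
    Y = [y0]
    for t in range(r):
        y_next = t_dec((K_e, V_e), Y)
        if not directional:
            y_next = y_next + pos(t + 1)   # vanilla case: add positional encoding
        Y.append(y_next)
    return Y[1:]
```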
|
{ |
|
"text": "In light of Theorem 3.1, to prove that Transformers are Turing-complete, it suffices to show that they can simulate RNNs. We say that a Transformer simulates an RNN (as defined in Sec. 3.1) if on every input s \u2208 \u03a3 * , at each step t, the vector y t contains the hidden state h t as a subvector, i.e. y t = [h t , \u2022], and halts at the same step as the RNN. Proof Sketch. The input s 0 , . . . , s n \u2208 \u03a3 * is provided to the transformer as the sequence of vectors x 0 , . . . , x n , where", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Turing-Completeness Results", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "x i = [0 d h , f b (s i ), 0 d h , i, 1],", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Turing-Completeness Results", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "which has as sub-vector the given base embedding f b (s i ) and the positional encoding i, along with extra coordinates set to constant values and will be used later.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Turing-Completeness Results", |
|
"sec_num": "4.1" |
|
}, |
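A small helper, under our own naming, for assembling the input vectors x_i = [0^{d_h}, f_b(s_i), 0^{d_h}, i, 1] used in this construction; fb is assumed to be a dictionary of base embeddings.

```python
import numpy as np

def build_input(fb, seq, d_h):
    # x_i = [0^{d_h}, f_b(s_i), 0^{d_h}, i, 1]; fb maps symbols to base embeddings.
    return [np.concatenate([np.zeros(d_h), fb[s], np.zeros(d_h), [float(i), 1.0]])
            for i, s in enumerate(seq)]
```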
|
{ |
|
"text": "The basic observation behind our construction of the simulating Transformer is that the transformer decoder can naturally implement the recurrence operations of the type used by RNNs. To this end, the FFN O dec (\u2022) of the decoder, which plays the same role as the FFN component of the RNN, needs sequential access to the input in the same way as RNN. But the Transformer receives the whole input at the same time. We utilize positional encoding along with the attention mechanism to isolate x t at time t and feed it to O dec (\u2022), thereby simulating the RNN.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Turing-Completeness Results", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "As stated earlier, we append the input s 1 , . . . , s n of the RNN with $'s until it halts. Since the Transformer takes its input all at once, appending by $'s is not possible (in particular, we do not know how long the computation would take). Instead, we append the input with a single $. After encountering a $ once, the Transformer will feed (encoding of) $ to O dec (\u2022) in subsequent steps until termination. Here we confine our discussion to the case t \u2264 n; the t > n case is slightly different but simpler.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Turing-Completeness Results", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The construction is straightforward: it has only one head, one encoder layer and one decoder layer; moreover, the attention mechanisms in the encoder and the decoder-decoder attention block of the decoder are trivial as described below.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Turing-Completeness Results", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The encoder attention layer does trivial computation in that it merely computes the identity function: z i = x i , which can be easily achieved, e.g. by using the residual connection and setting the value vectors to 0. The fi- nal K (1) (\u2022) and V (1) (\u2022) functions bring (K e , V e ) into useful forms by appropriate linear transformations:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Turing-Completeness Results", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "k i = [0 d b , 0 d b , 0 d b , \u22121, i] and v i = [0 d b , f b (s i ), 0 d b , 0, 0].", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Turing-Completeness Results", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Thus, the key vectors only encode the positional information and the value vectors only encode the input symbols.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Turing-Completeness Results", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The output sequence of the decoder is y 1 , y 2 , . . .. Our construction will ensure, by induction on t, that y t contains the hidden states h t of the RNN as a sub-vector along with positional information:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Turing-Completeness Results", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "y t = [h t , 0 d b , 0 d b , t + 1, 1]", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Turing-Completeness Results", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": ". This is easy to arrange for t = 0, and assuming it for t we prove it for t+1. As for the encoder, the decoder-decoder attention block acts as the identity: p t = y t . Now, using the last but one coordinate in y t representing the time t + 1, the attention mechanism Att(p t , K e , V e ) can retrieve the embedding of the t-th input symbol x t . This is possible because in the key vector k i mentioned above, almost all coordinates other than the one representing the position i are set to 0, allowing the mechanism to only focus on the positional information and not be distracted by the other contents of p t = y t : the scoring function has value", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Turing-Completeness Results", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "f att (p t , k i ) = \u2212| p t , k i | = \u2212|i \u2212 (t + 1)|.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Turing-Completeness Results", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "For a given t, it is maximized at i = t + 1 for t < n and at i = n for t \u2265 n. This use of scoring function is similar to P\u00e9rez et al. (2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 121, |
|
"end": 140, |
|
"text": "P\u00e9rez et al. (2019)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Turing-Completeness Results", |
|
"sec_num": "4.1" |
|
}, |
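A toy numerical check (ours) that hard attention with the score \u2212|i \u2212 (t + 1)| retrieves x_{t+1} while t < n and x_n afterwards, as claimed.

```python
import numpy as np

def hardmax(scores):
    w = (scores == scores.max()).astype(float)
    return w / w.sum()

n = 5
values = [np.array([float(i)]) for i in range(1, n + 1)]   # stand-ins for x_1, ..., x_n
for t in range(8):
    scores = np.array([-abs(i - (t + 1)) for i in range(1, n + 1)])  # -|i - (t+1)|
    retrieved = sum(a * v for a, v in zip(hardmax(scores), values))
    # retrieved equals x_{t+1} while t < n, and x_n once t >= n
    assert retrieved[0] == float(min(t + 1, n))
```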
|
{ |
|
"text": "At this point, O dec (\u2022) has at its disposal the hidden state h t (coming from y t via p t and the residual connection) and the input symbol x t (coming via the attention mechanism and the residual connection). Hence O(\u2022) can act just like the FFN (Lemma C.4) underlying the RNN to compute h t+1 and thus y t+1 , proving the induction hypothesis. The complete construction can be found in Sec. C.2 in the appendix. Proof Sketch. As before, by Theorem 3.1 it suffices to show that Transformers can simulate RNNs. The input s 0 , . . . , s n is provided to the transformer as the sequence of vectors x 0 , . . . , x n , where", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Turing-Completeness Results", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "x i = [0 d h , 0 d h , f b (s i ), s i , 0, 0 m , 0 m , 0 m ].", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Turing-Completeness Results", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The general goal for the directional case is similar to the vanilla case, namely we would like the FFN O dec (\u2022) of the decoder to directly simulate the computation in the underlying RNN. In the vanilla case, positional encoding and the attention mechanism helped us feed input x t at the t-th iteration of the decoder to O dec (\u2022). However, we no longer have explicit positional information in the input x t such as a coordinate with value t. The key insight is that we do not need the positional information explicitly to recover x t at step t: in our construction, the attention mechanism with masking will recover x t in an indirect manner even though it's not able to \"zero in\" on the t-th position.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Turing-Completeness Results", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Let us first explain this without details of the construction. We maintain in vector \u03c9 t \u2208 Q m , with a coordinate each for symbols in \u03a3, the fraction of times the symbol has occurred up to step t. Now, at a step t \u2264 n, for the difference \u03c9 t \u2212 \u03c9 t\u22121 (which is part of the query vector), it can be shown easily that only the coordinate corresponding to s t is positive. Thus after applying the linearized sigmoid \u03c3(\u03c9 t \u2212 \u03c9 t\u22121 ), we can isolate the coordinate corresponding to s t . Now using this query vector, the (hard) attention mechanism will be able to retrieve the value vectors for all indices j such that s j = s t and output their average. Crucially, the value vector for an index j is essentially x j which depends only on s j . Thus, all these vectors are equal to x t , and so is their average. This recovers x t , which can now be fed to O dec (\u2022), simulating the RNN.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Turing-Completeness Results", |
|
"sec_num": "4.1" |
|
}, |
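A toy check (ours) of the key step: for this example sequence, \u03c3(\u03c9_t \u2212 \u03c9_{t\u22121}) is positive only at the coordinate of s_t, so a hard-attention query built from it can single out exactly the positions j with s_j = s_t, whose identical value vectors average back to x_t.

```python
import numpy as np

alphabet = ["a", "b", "c"]
seq = ["a", "b", "a", "c", "b"]                     # s_1, ..., s_n (toy example)
one_hot = {s: np.eye(len(alphabet))[i] for i, s in enumerate(alphabet)}

def omega(t):
    # omega_t: fraction of occurrences of each symbol among s_1, ..., s_t.
    return sum(one_hot[s] for s in seq[:t]) / t if t > 0 else np.zeros(len(alphabet))

sat = lambda x: np.clip(x, 0.0, 1.0)                # saturated linear activation

for t in range(1, len(seq) + 1):
    delta = sat(omega(t) - omega(t - 1))            # positive only at the s_t coordinate here
    assert int(np.argmax(delta)) == alphabet.index(seq[t - 1])
    # A hard-attention query built from delta gives equal weight to exactly the
    # positions j with s_j = s_t; their identical value vectors average to x_t.
```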
|
{ |
|
"text": "We now outline the construction and relate it to the above discussion. As before, for simplicity we restrict to the case t \u2264 n. We use only one head, one layer encoder and two layer decoder. The encoder, as in the vanilla case, does very little other than pass information along. The vectors in (K e , V e ) are obtained by the trivial attention mechanism followed by simple linear transformations:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Turing-Completeness Results", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "k e i = [0 d h , 0 d h , 0 d b , s i , 0, 0 m , 0 m , 0 m ] and v e i = [0 d h , 0 d h , f b (s i ), 0 m , 0, 0 m , s i , 0 m ].", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Turing-Completeness Results", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Our construction ensures that at step t we have", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Turing-Completeness Results", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "y t = [h t\u22121 , 0 d h , 0 d b , 0 m , 1 2 t , 0 m , 0 m , \u03c9 t\u22121 ].", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Turing-Completeness Results", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "As before, the proof is by induction on t.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Turing-Completeness Results", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "In the first layer of decoder, the decoderdecoder attention block is trivial: p", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Turing-Completeness Results", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "(1) t = y t . In the decoder-encoder attention block, we give equal attention to all the t + 1 values, which along with", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Turing-Completeness Results", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "O enc (\u2022), leads to z (1) t = [h t\u22121 , 0 d h , 0 d b , \u03b4 t , 1 2 t+1 , 0 m , 0 m , \u03c9 t ], where essentially \u03b4 t = \u03c3(\u03c9 t \u2212 \u03c9 t\u22121 )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Turing-Completeness Results", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": ", except with a change for the last coordinate due to special status of the last symbol $ in the processing of RNN.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Turing-Completeness Results", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "In the second layer, the decoder-decoder attention block is again trivial with p", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Turing-Completeness Results", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "(2) t = z", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Turing-Completeness Results", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "(1) t . We remark that in this construction, the scoring function is the standard multiplicative attention 3 . Now p", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Turing-Completeness Results", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "(2) t , k e j = \u03b4 t , s j = \u03b4 t,j , which is positive if and only if s j = s t , as mentioned earlier. Thus attention weights in Att(p (2) t and the residual connection) and the input symbol x t (coming via the attention mechanism and the residual connection). Hence O dec (\u2022) can act just like the FFN underlying the RNN to compute h t+1 and thus y t+1 , proving the induction hypothesis.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Turing-Completeness Results", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "(2) t , K e t , V e t ) satisfy hardmax( p (2) t , k e 1 , . . . , p (2) t , k e t ) = 1 \u03bbt (I(s 0 = s t ), I(s 1 = s t ), . . . , I(s t = s t )),", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Turing-Completeness Results", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The complete construction can be found in Sec. D in the Appendix.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Turing-Completeness Results", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "In practice, found that for NMT, Transformers with only positional masking achieve comparable performance compared to the ones with positional encodings. Similar evidence was found by Tsai et al. (2019) . Our proof for directional transformers entails that there is no loss of order information if positional information is only provided in the form of masking. However, we do not recommend using masking as a replacement for explicit encodings. The computational equivalence of encoding and masking given by our results implies that any differences in their performance must come from differences in learning dynamics.", |
|
"cite_spans": [ |
|
{ |
|
"start": 184, |
|
"end": 202, |
|
"text": "Tsai et al. (2019)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Turing-Completeness Results", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The results for various components follow from our construction in Theorem 4.1. Note that in both the encoder and decoder attention blocks, we need to compute the identity function. We can nullify the role of the attention heads by setting the value vectors to zero and making use of only the residual connections to implement the identity function. Thus, even if we remove those attention heads, the model is still Turing-complete. On the other hand, we can remove the residual connections around the attention blocks and make use of the attention heads to implement the identity function by using positional encodings. Hence, either the attention head or the residual connection is sufficient to achieve Turing-completeness. A similar argument can be made for the FFN in the encoder layer: either the residual connection or the FFN is sufficient for Turing-completeness. For the decoder-encoder attention head, since it is the only way for the decoder to obtain information about the input, it is necessary for the completeness. The FFN is the only component that can perform computations based on the input and the computations performed earlier via recurrence and hence, the model is not Turing-complete without it. Figure 2 summarizes the role of different components with respect to the computational expressiveness of the network. Proof Sketch. We confine our discussion to singlelayer decoder; the case of multilayer decoder is similar. Without the residual connection, the decoder-encoder attention block produces a t = Att(p t , K e , V e ) = n i=1 \u03b1 i v e i for some \u03b1 i 's such that n i \u03b1 i = 1. Note that, without residual connection a t can take on at most 2 n \u2212 1 values. This is because by the definition of hard attention the vector (\u03b1 1 , . . . , \u03b1 n ) is characterized by the set of zero coordinates and there are at most 2 n \u2212 1 such sets (all coordinates cannot be zero). This restriction on the number of values on a t holds regardless of the value of p t . If the task requires the network to produce values of a t that come from a set with size at least 2 n , then the network will not be able to perform the task. Here's an example task: given a number \u2206 \u2208 (0, 1), the network must produce numbers 0, \u2206, 2\u2206, . . . , k\u2206, where k is the maximum integer such that k\u2206 \u2264 1. If the network receives a single input \u2206, then it is easy to see that the vector a t will be a constant (v e 1 ) at any step and hence the output of the network will also be constant at all steps. Thus, the model cannot perform such a task. If the input is combined with n \u2212 1 auxiliary symbols (such as # and $), then in the network, each a t takes on at most 2 n \u2212 1 values. Hence, the model will be incapable of performing the task if \u2206 < 1/2 n . Such a limitation does not exist with a residual connection since the vector a t = n i=1 \u03b1 i v e i + p t can take arbitrary number of values depending on its prior computations in p t . For further details, see Sec. C.1 in the Appendix.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 1220, |
|
"end": 1228, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Analysis of Components", |
|
"sec_num": "4.2" |
|
}, |
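A small enumeration (ours) of the counting argument: with hard attention and no residual connection, a_t is the average of the value vectors over some nonempty subset of positions, so it takes at most 2^n \u2212 1 distinct values.

```python
from itertools import chain, combinations

import numpy as np

n = 3
V = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]  # toy v^e_i

# With hard attention and no residual connection, a_t is the average of the value
# vectors indexed by some nonempty subset of positions (the argmax set of the scores).
subsets = chain.from_iterable(combinations(range(n), r) for r in range(1, n + 1))
outputs = {tuple(np.mean([V[i] for i in S], axis=0).round(6)) for S in subsets}
assert len(outputs) <= 2 ** n - 1     # at most 2^n - 1 distinct values of a_t
```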
|
{ |
|
"text": "Discussion. It is perhaps surprising that residual connection, originally proposed to assist in the learning ability of very deep networks, plays a vital role in the computational expressiveness of the network. Without it, the model is limited in its capability to make decisions based on predictions in the previous steps. We explore practical implications of this result in section 5.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis of Components", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "In this section, we explore the practical implications of our results. Our experiments are geared towards answering the following questions: Q1. Are there any practical implications of the limitation of Transformers without decoder-encoder residual connections? What tasks can they do or not do compared to vanilla Transformers? Q2. Is there any additional benefit of using positional masking as opposed to absolute positional encoding (Vaswani et al., 2017) ?", |
|
"cite_spans": [ |
|
{ |
|
"start": 436, |
|
"end": 458, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Although we showed that Transformers without decoder-encoder residual connection are not Turing complete, it does not imply that they are incapable of performing all the tasks. Our results suggest that they are limited in their capability to make inferences based on their previous computations, which is required for tasks such as counting and language modeling. However, it can be shown that the model is capable of performing tasks which rely only on information provided at a given step such as copying and mapping. For such tasks, given positional information at a particular step, the model can look up the corresponding input and map it via the FFN. We evaluate these hypotheses via our experiments. For our experiments on synthetic data, we consider two tasks, namely the copy task and the counting task. For the copy task, the goal of a model is to reproduce the input sequence. We sample sentences of lengths between 5-12 words from Penn Treebank and create a train-test split of 40k-1k with all sentences belonging to the same range of length. In the counting task, we create a very simple dataset where the model is given one number between 0 and 100 as input and its goal is to predict the next five numbers. Since only a single input is provided to the encoder, it is necessary for the decoder to be able to make inferences based on its previous predictions to perform this task. The benefit of conducting these experiments on synthetic data is that they isolate the phenomena we wish to evaluate. For both these tasks, we compare vanilla Transformer with the one without decoder-encoder residual connection. As a baseline we also consider the model without decoder-decoder residual connection, since according to our results, that connection does not influence the computational power of the model. We implement a single layer encoderdecoder network with only a single attention head in each block.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5" |
|
}, |
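For concreteness, a sketch (ours; the exact data format is described in Sec. E of the appendix) of how the counting-task data described above can be generated.

```python
import random

def counting_example(rng=random):
    # Input: a single number in [0, 100]; target: the next five numbers.
    x = rng.randint(0, 100)
    return [x], [x + i for i in range(1, 6)]

train = [counting_example() for _ in range(1000)]   # illustrative dataset size
```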
|
{ |
|
"text": "We then assess the influence of the limitation on Machine Translation which requires a model to do a combination of both mapping and inferring from computations in previous timesteps. We evaluate the models on IWSLT'14 German-English dataset and IWSLT'15 English-Vietnamese dataset. We again compare vanilla Transformer with the ones without decoder-encoder and decoder-decoder residual connection. While tuning the models, we vary the number of layers from 1 to 4, the learning rate, warmup steps and the number of heads. Specifications of the models, experimental setup, datasets and sample outputs can be found in Sec. E in the Appendix.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Results on the effect of residual connections on synthetic tasks can be found in Table 1 . As per our hypothesis, all the variants are able to perfectly perform the copy task. For the counting task, the one without decoder-encoder residual connection is incapable of performing it. However, the other two including the one without decoder-decoder residual connection are able to accomplish the task by learning to make decisions based on their prior predictions. Table 3 provides some illustrative sample outputs of the models. For the MT task, results can be found in Table 2 . While the drop from removing decoder-encoder residual connection is significant, it is still able to perform reasonably well since the task can be largely fulfilled by mapping different words from one sentence to another.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 81, |
|
"end": 88, |
|
"text": "Table 1", |
|
"ref_id": "TABREF2" |
|
}, |
|
{ |
|
"start": 463, |
|
"end": 470, |
|
"text": "Table 3", |
|
"ref_id": "TABREF6" |
|
}, |
|
{ |
|
"start": 569, |
|
"end": 576, |
|
"text": "Table 2", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "For positional masking, our proof technique suggests that due to lack of positional encodings, the model must come up with its own mechanism to make order related decisions. Our hypothesis is that, if it is able to develop such a mechanism, it should be able to generalize to higher lengths and not overfit on the data it is provided. To evaluate this claim, we simply extend the copy task upto higher lengths. The training set remains the same as before, containing sentences of length 5-12 words. We create 5 different validation sets each containing 1k sentences each. The first set contains sentences within the same length as seen in training (5-12 words), the second set contains sentences of length 13-15 words while the third, fourth and fifth sets contain sentences of lengths 15-20, 21-25 and 26-30 words respectively. We consider two models, one which is provided absolute positional encodings and one where only positional masking is applied. Figure 3 shows the performance of these models across various lengths. The model with positional masking clearly generalizes up to higher lengths although its performance too degrades at extreme lengths. We found that the model with absolute positional encodings during training overfits on the fact that the 13th token is always the terminal symbol. Hence, when evalu- ated on higher lengths it never produces a sentence of length greater than 12. Other encoding schemes such as relative positional encodings (Shaw et al., 2018; Dai et al., 2019) can generalize better, since they are inherently designed to address this particular issue. However, our goal is not to propose masking as a replacement of positional encodings, rather it is to determine whether the mechanism that the model develops during training is helpful in generalizing to higher lengths. Note that, positional masking was not devised by keeping generalization or any other benefit in mind. Our claim is only that, the use of masking does not limit the model's expressiveness and it may benefit in other ways, but during practice one should explore each of the mechanisms and even a combination of both. showed that a combination of both masking and encodings is better able to learn order information as compared to explicit encodings. ", |
|
"cite_spans": [ |
|
{ |
|
"start": 1465, |
|
"end": 1484, |
|
"text": "(Shaw et al., 2018;", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 1485, |
|
"end": 1502, |
|
"text": "Dai et al., 2019)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 955, |
|
"end": 963, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We showed that the class of languages recognized by Transformers and RNNs are exactly the same. This implies that the difference in performance of both the networks across different tasks can be attributed only to their learning abilities. In contrast to RNNs, Transformers are composed of multiple components which are not essential for their com-putational expressiveness. However, in practice they may play a crucial role. Recently, Voita et al. (2019) showed that the decoder-decoder attention heads in the lower layers of the decoder do play a significant role in the NMT task and suggest that they may be helping in language modeling. This indicates that components which are not essential for the computational power may play a vital role in improving the learning and generalization ability. Take-Home Messages. We showed that the order information can be provided either in the form of explicit encodings or masking without affecting computational power of Transformers. The decoder-encoder attention block plays a necessary role in conditioning the computation on the input sequence while the residual connection around it is necessary to keep track of previous computations. The feedforward network in the decoder is the only component capable of performing computations based on the input and prior computations. Our experimental results show that removing components essential for computational power inhibit the model's ability to perform certain tasks. At the same time, the components which do not play a role in the computational power may be vital to the learning ability of the network.", |
|
"cite_spans": [ |
|
{ |
|
"start": 436, |
|
"end": 455, |
|
"text": "Voita et al. (2019)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion and Final Remarks", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Although our proofs rely on arbitrary precision, which is common practice while studying the computational power of neural networks in theory (Siegelmann and Sontag, 1992; P\u00e9rez et al., 2019; Hahn, 2020; Yun et al., 2020) , implementations in practice work over fixed precision settings. However, our construction provides a starting point to analyze Transformers under finite precision. Since RNNs can recognize all regular languages in finite precision (Korsky and Berwick, 2019), it follows from our construction that Transformer can also recognize a large class of regular languages in finite precision. At the same time, it does not imply that it can recognize all regular languages given the limitation due to the precision required to encode positional information. We leave the study of Transformers in finite precision for future work.", |
|
"cite_spans": [ |
|
{ |
|
"start": 142, |
|
"end": 171, |
|
"text": "(Siegelmann and Sontag, 1992;", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 172, |
|
"end": 191, |
|
"text": "P\u00e9rez et al., 2019;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 192, |
|
"end": 203, |
|
"text": "Hahn, 2020;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 204, |
|
"end": 221, |
|
"text": "Yun et al., 2020)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion and Final Remarks", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We begin with various definitions and results. We define simulation of Turing machines by RNNs and state the Turing-completeness result for RNNs. We define vanilla and directional Transformers and what it means for Transformers to simulate RNNs. Many of the definitions from the main paper are reproduced here, but in more detail. In Sec. C.1 we discuss the effect of removing a residual connection on computational power of Transformers. Sec. C.2 contains the proof of Turing completeness of vanilla Transformers and Sec. D the corresponding proof for directional Transformers. Finally, Sec. 5 has further details of experiments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Roadmap", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Denote the set {1, 2, . . . , n} by [n]. Functions defined for scalars are extended to vectors in the natural way: for a function F defined on a set A, for a sequence (a 1 , . . . , a n ) of elements in A, we set F (a 1 , . . . , a n ) := (F (a 1 ), . . . , F (a n )). Indicator I(P ) is 1, if predicate P is true and is 0 otherwise. For a sequence X = (x n , . . . , x n ) for some n \u2265 0, we set X j := (x n , . . . , x j ) for j \u2208 {n , i+1, . . . , n}. We will work with an alphabet \u03a3 = {\u03b2 1 , . . . , \u03b2 m }, with \u03b2 1 = # and \u03b2 m = $. The special symbols # and $ correspond to the beginning and end of the input sequence, resp. For a vector v, by 0 v we mean the all-0 vector of the same dimension as v. Lett := min{t, n}", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B Definitions", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Here we summarize, somewhat informally, the Turing-completeness result for RNNs due to (Siegelmann and Sontag, 1992) . We recall basic notions from computability theory. In the main paper, for simplicity we stated the results for total recursive functions \u03c6 : {0, 1} * \u2192 {0, 1} * , i.e. a function that is defined on every s \u2208 {0, 1} * and whose values can be computed by a Turing machine. While total recursive functions form a satisfactory formalization of seq-to-seq tasks, here we state the more general result for partial recursive functions. Let \u03c6 : {0, 1} * \u2192 {0, 1} * be partial recursive. A partial recursive function is one that need not be defined for every s \u2208 {0, 1} * , and there exists a Turing Machine M with the following property. The input s is initially written on the tape of the Turing Machine M and the output \u03c6(s) is the content of the tape upon acceptance which is indicated by halting in a designated accept state. On s for which \u03c6 is undefined, M does not halt.", |
|
"cite_spans": [ |
|
{ |
|
"start": 87, |
|
"end": 116, |
|
"text": "(Siegelmann and Sontag, 1992)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B.1 RNNs and Turing-completeness", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We now specify how Turing machine M is simulated by RNN R(M). In the RNNs in (Siegelmann and Sontag, 1992) the hidden state h t has the form", |
|
"cite_spans": [ |
|
{ |
|
"start": 77, |
|
"end": 106, |
|
"text": "(Siegelmann and Sontag, 1992)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B.1 RNNs and Turing-completeness", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "h t = [q t , \u03a8 1 , \u03a8 2 ],", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B.1 RNNs and Turing-completeness", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "where q t = [q 1 , . . . , q s ] denotes the state of M one-hot form. Numbers \u03a8 1 , \u03a8 2 \u2208 Q, called stacks, store the contents of the tape in a certain Cantor set like encoding (which is similar to, but slightly more involved, than binary representation) at each step. The simulating RNN R(M), gets as input encodings of s 1 s 2 ...s n in the first n steps, and from then on receives the vector 0 as input in each step. If \u03c6 is defined on s, then M halts and accepts with the output \u03c6(s) the content of the tape. In this case, R(M) enters a special accept state, and \u03a8 1 encodes \u03c6(s) and \u03a8 2 = 0. If M does not halt then R(M) also does not enter the accept state. Siegelmann and Sontag (1992) further show that from R(M) one can further explicitly produce the \u03c6(s) as its output. In the present paper, we will not deal with explicit production of the output but rather work with the definition of simulation in the previous paragraph. This is for simplicity of exposition, and the main ideas are already contained in our results. If the Turing machine computes \u03c6(s) in time T (s), the simulation takes O(|s|) time to encode the input sequence s and 4T (s) to compute \u03c6(s).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B.1 RNNs and Turing-completeness", |
|
"sec_num": null |
|
}, |
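To make the role of the stacks concrete, below is a minimal Python sketch of one common variant of such a Cantor-set-like encoding, in which a binary stack b_1 b_2 ... b_k (top first) is stored as the rational sum_i (2 b_i + 1)/4^i. The helper names (encode, push, top, pop) are illustrative only, and the details differ slightly from the exact construction of Siegelmann and Sontag (1992); the point is just that push, pop and top-of-stack are affine maps composed with the saturated linear activation.

```python
import numpy as np

def sigma(x):
    """Saturated linear activation used throughout the paper."""
    return np.clip(x, 0.0, 1.0)

# A binary stack b_1 b_2 ... b_k (top first) is stored as
#   psi = sum_i (2*b_i + 1) / 4**i,
# so the empty stack is 0 and psi always lies in [0, 1).

def encode(bits):
    return sum((2 * b + 1) / 4 ** (i + 1) for i, b in enumerate(bits))

def push(psi, b):
    # pushing a bit is an affine map of the current encoding
    return psi / 4 + (2 * b + 1) / 4

def top(psi):
    # the top bit can be read off with a single saturated-linear "neuron"
    return sigma(4 * psi - 2)

def pop(psi):
    return 4 * psi - (2 * top(psi) + 1)

stack = encode([1, 0, 1])               # top bit is 1
assert np.isclose(push(pop(stack), 1), stack)
print(top(stack), pop(stack))           # 1.0 and the encoding of [0, 1]
```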
|
{ |
|
"text": "Theorem B.1 ((Siegelmann and Sontag, 1992) ). Given any partial recursive function \u03c6 : {0, 1} * \u2192 {0, 1} * computed by Turing machine M \u03c6 , there exists a simulating RNN R(M \u03c6 ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 12, |
|
"end": 42, |
|
"text": "((Siegelmann and Sontag, 1992)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B.1 RNNs and Turing-completeness", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In view of the above theorem, for establishing Turing-completeness of Transformers, it suffices to show that RNNs can be simulated by Transformers. Thus, in the sequel we will only talk about simulating RNNs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B.1 RNNs and Turing-completeness", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Here we describe the original transformer architecture due to (Vaswani et al., 2017) as formalized by (P\u00e9rez et al., 2019) . While our notation and definitions largely follow (P\u00e9rez et al., 2019) , they are not identical. The transformer here makes use of positional encoding; later we will discuss the transformer variant using directional attention but without using positional encoding.", |
|
"cite_spans": [ |
|
{ |
|
"start": 62, |
|
"end": 84, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 102, |
|
"end": 122, |
|
"text": "(P\u00e9rez et al., 2019)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 175, |
|
"end": 195, |
|
"text": "(P\u00e9rez et al., 2019)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B.2 Vanilla Transformer Architecture", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The transformer, denoted Trans, is a sequenceto-sequence architecture. Its input consists of (i) a sequence X = (x 1 , . . . , x n ) of vectors in Q d , (ii) a seed vector y 0 \u2208 Q d . The output is a sequence Y = (y 1 , . . . , y r ) of vectors in Q d . The sequence X is obtained from the sequence (s 0 , . . . , s n ) \u2208 \u03a3 n+1 of symbols by using the embedding mentioned earlier:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B.2 Vanilla Transformer Architecture", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "x i = f (f b (s i ), pos(i)) for 0 \u2264 i \u2264 n.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B.2 Vanilla Transformer Architecture", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The transformer consists of composition of transformer encoder and a transformer decoder. The transformer encoder is obtained by composing one or more single-layer encoders and similarly the transformer decoder is obtained by composing one or more single-layer decoders. For the feed-forward networks in the transformer layers we use the activation as in (Siegelmann and Sontag, 1992) , namely the saturated linear activation function:", |
|
"cite_spans": [ |
|
{ |
|
"start": 355, |
|
"end": 384, |
|
"text": "(Siegelmann and Sontag, 1992)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B.2 Vanilla Transformer Architecture", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u03c3(x) = \uf8f1 \uf8f4 \uf8f2 \uf8f4 \uf8f3 0 if x < 0, x if 0 \u2264 x \u2264 1, 1 if x > 1.", |
|
"eq_num": "(4)" |
|
} |
|
], |
|
"section": "B.2 Vanilla Transformer Architecture", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "As mentioned in the main paper, we can easily work with the standard ReLU activation via", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B.2 Vanilla Transformer Architecture", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u03c3(x) = ReLU(x) \u2212 ReLU(x \u2212 1).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B.2 Vanilla Transformer Architecture", |
|
"sec_num": null |
|
}, |
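The following short sketch checks numerically that the saturated linear activation of Eq. (4) coincides with ReLU(x) − ReLU(x − 1), as stated above; the function names are illustrative.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def saturated_linear(x):
    # piecewise definition from Eq. (4)
    return np.clip(x, 0.0, 1.0)

x = np.linspace(-2.0, 3.0, 101)
# sigma(x) = ReLU(x) - ReLU(x - 1)
assert np.allclose(saturated_linear(x), relu(x) - relu(x - 1.0))
```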
|
{ |
|
"text": "In the following, after defining these components, we will put them together to specify the full transformer architecture. But we begin with self-attention mechanism which is the central feature of the transformer.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B.2 Vanilla Transformer Architecture", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Self-attention. The self-attention mechanism takes as input (i) a query vector q, (ii) a sequence of key vectors K = (k 1 , . . . , k n ), and (iii) a sequence of value", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B.2 Vanilla Transformer Architecture", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "vectors V = (v 1 , . . . , v n ). All vectors are in Q d .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B.2 Vanilla Transformer Architecture", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The q-attention over keys K and values V , denoted by Att(q, K, V ), is a vector a given by", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B.2 Vanilla Transformer Architecture", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(\u03b1 1 , . . . , \u03b1 n ) = \u03c1(f att (q, k 1 ), . . . , f att (q, k n )), a = \u03b1 1 v 1 + \u03b1 2 v 2 + \u2022 \u2022 \u2022 + \u03b1 n v n .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B.2 Vanilla Transformer Architecture", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The above definition uses two functions \u03c1 and f att which we now describe. For the normalization function \u03c1 : Q n \u2192 Q n \u22650 we will use hardmax: for x = (x 1 , . . . , x n ) \u2208 Q n , if the maximum value occurs r times among x 1 , . . . , x n , then hardmax(x) i := 1/r if x i is a maximum value and hardmax(x) i := 0 otherwise. In practice, the softmax is often used but its output values are in general not rational. The names soft-attention and hard-attention are used for the attention mechanism depending on which normalization function is used.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B.2 Vanilla Transformer Architecture", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For the Turing-completeness proof of vanilla transformers, the scoring function f att used is a combination of multiplicative attention (Vaswani et al., 2017) and a non-linear function: f att (q, k i ) = \u2212 q, k i . For directional transformers, the standard multiplicative attention is used, that is, f att (q, k i ) = q, k i .", |
|
"cite_spans": [ |
|
{ |
|
"start": 136, |
|
"end": 158, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B.2 Vanilla Transformer Architecture", |
|
"sec_num": null |
|
}, |
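The attention mechanism described above is simple to state in code. The sketch below implements Att(q, K, V) with the hardmax normalization (ties are averaged) and the −|⟨q, k⟩| scoring function, falling back to plain multiplicative scoring otherwise. It is only an illustration with made-up names and data, not the parameterization used in the proofs; the toy query is chosen so that the −|⟨q, k⟩| scoring selects exactly one position.

```python
import numpy as np

def hardmax(scores):
    # averages over all positions attaining the maximum score
    m = np.max(scores)
    weights = (scores == m).astype(float)
    return weights / weights.sum()

def att(q, K, V, scoring="neg_abs"):
    # K, V: (n, d) arrays of key / value vectors, q: (d,) query
    logits = K @ q                    # multiplicative scoring <q, k_i>
    if scoring == "neg_abs":          # f_att(q, k) = -|<q, k>|
        logits = -np.abs(logits)
    alphas = hardmax(logits)          # rho = hardmax
    return alphas @ V                 # a = sum_i alpha_i v_i

# toy usage: the query "points at" position 2 under the -|<q, k>| scoring
K = np.array([[-1., 0.], [-1., 1.], [-1., 2.], [-1., 3.]])
V = np.eye(4)
q = np.array([2., 1.])                # <q, k_i> = i - 2, so -|.| peaks at i = 2
print(att(q, K, V))                   # -> one-hot vector selecting v_2
```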
|
{ |
|
"text": "Transformer encoder. A single-layer encoder is a function Enc(X; \u03b8), where \u03b8 is the parameter vector and the input X = (x 1 , . . . , x n ) is a sequence of vector in Q d . The output is another sequence Z = (z 1 , . . . , z n ) of vectors in ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B.2 Vanilla Transformer Architecture", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Q(x i ) = x T i W Q , K(x i ) = x T i W K , V (x i ) = x T i W V ,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B.2 Vanilla Transformer Architecture", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "where", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B.2 Vanilla Transformer Architecture", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "W Q , W K , W V \u2208 Q d\u00d7d . The function O(\u2022)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B.2 Vanilla Transformer Architecture", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "is a feed-forward network. The single-layer encoder is then defined by", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B.2 Vanilla Transformer Architecture", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "a i = Att(Q(x i ), K(X), V (X)) + x i ,", |
|
"eq_num": "(5)" |
|
} |
|
], |
|
"section": "B.2 Vanilla Transformer Architecture", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "z i = O(a i ) + a i .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B.2 Vanilla Transformer Architecture", |
|
"sec_num": null |
|
}, |
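A minimal sketch of the single-layer encoder defined by Eq. (5) and z_i = O(a_i) + a_i, using hardmax attention with the −|⟨q, k⟩| scoring. The toy usage also shows the degenerate parameterization (zero value matrix and zero feed-forward network) under which the layer reduces to the identity, which is how the encoder is used later in the Turing-completeness construction. All names and shapes here are illustrative.

```python
import numpy as np

def hardmax_att(q, K, V):
    scores = -np.abs(K @ q)                           # f_att(q, k) = -|<q, k>|
    w = (scores == scores.max()).astype(float)
    return (w / w.sum()) @ V

def encoder_layer(X, W_Q, W_K, W_V, ffn):
    """Single-layer encoder: Eq. (5) plus z_i = O(a_i) + a_i."""
    Q, K, V = X @ W_Q, X @ W_K, X @ W_V
    A = np.stack([hardmax_att(Q[i], K, V) + X[i]      # residual around attention
                  for i in range(len(X))])
    return np.stack([ffn(a) + a for a in A])          # residual around O(.)

# toy usage with d = 3; zero W_V and a zero FFN make the layer the identity
d, n = 3, 4
X = np.random.rand(n, d)
zero = np.zeros((d, d))
Z = encoder_layer(X, np.eye(d), np.eye(d), zero, lambda a: np.zeros(d))
assert np.allclose(Z, X)
```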
|
{ |
|
"text": "The addition operations +x i and +a i are the residual connections. The operation in (5) is called the encoder-encoder attention block. The complete L-layer transformer encoder TEnc (L) (X; \u03b8) has the same input X = (x 1 , . . . , x n ) as the single-layer encoder. By contrast, its output consists of two sequences (K e , V e ), each a sequence of n vectors in Q d . The encoder TEnc (L) (\u2022) is obtained by repeated application of single-layer encoders, each with its own parameters; and at the end, two trasformation functions K L (\u2022) and V L (\u2022) are applied to the sequence of output vectors at the last layer. Functions K (L) (\u2022) and V (L) (\u2022) are linear transformations in our constructions. Formally, for 1 \u2264 \u2264 L \u2212 1 and X 1 := X, we have", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B.2 Vanilla Transformer Architecture", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "X +1 = Enc(X ; \u03b8 ), K e = K (L) (X L ), V e = V (L) (X L ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B.2 Vanilla Transformer Architecture", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The output of the L-layer Transformer encoder (K e , V e ) = TEnc (L) (X) is fed to the Transformer decoder which we describe next.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B.2 Vanilla Transformer Architecture", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Transformer decoder. The input to a singlelayer decoder is (i) (K e , V e ), the sequences of key and value vectors output by the encoder, and (ii) a sequence Y = (y 1 , . . . , y k ) of vectors in Q d . The output is another sequence Z = (z 1 , . . . , z k ) of vectors in Q d .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B.2 Vanilla Transformer Architecture", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Similar to the single-layer encoder, a singlelayer decoder is parameterized by functions ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B.2 Vanilla Transformer Architecture", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Q(\u2022), K(\u2022), V", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B.2 Vanilla Transformer Architecture", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "p t = Att(Q(y t ), K(Y t ), V (Y t )) + y t , (6) a t = Att(p t , K e , V e ) + p t ,", |
|
"eq_num": "(7)" |
|
} |
|
], |
|
"section": "B.2 Vanilla Transformer Architecture", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "z t = O(a t ) + a t .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B.2 Vanilla Transformer Architecture", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The operation in (6) will be referred to as the decoder-decoder attention block and the operation in (7) as the decoder-encoder attention block. In the decoder-decoder attention block, positional masking is applied to prevent the network from attending over symbols which are ahead of them.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B.2 Vanilla Transformer Architecture", |
|
"sec_num": null |
|
}, |
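A sketch of one single-layer decoder step, i.e. Eqs. (6)-(7) followed by z_t = O(a_t) + a_t, computed for the last position of the output prefix; the positional mask is realized simply by restricting the self-attention to the prefix Y_t. This is an illustration with hypothetical helper names and an arbitrary toy parameterization, not the construction used in the proofs.

```python
import numpy as np

def hard_att(q, K, V):
    s = -np.abs(K @ q)
    w = (s == s.max()).astype(float)
    return (w / w.sum()) @ V

def decoder_step(Y, Ke, Ve, params):
    """One single-layer decoder step for the last position t = len(Y) - 1.

    Implements p_t (Eq. 6), a_t (Eq. 7) and z_t = O(a_t) + a_t. Positional
    masking is implicit: the self-attention only sees the prefix Y itself.
    """
    W_Q, W_K, W_V, ffn = params
    y_t = Y[-1]
    # decoder-decoder attention over the prefix Y_t (masked by construction)
    p_t = hard_att(y_t @ W_Q, Y @ W_K, Y @ W_V) + y_t
    # decoder-encoder attention over the encoder outputs (K^e, V^e)
    a_t = hard_att(p_t, Ke, Ve) + p_t
    return ffn(a_t) + a_t

# toy usage with d = 3 and an identity-like parameterization
d = 3
params = (np.eye(d), np.eye(d), np.zeros((d, d)), lambda a: np.zeros(d))
Ke, Ve = np.random.rand(5, d), np.random.rand(5, d)
Y = np.random.rand(2, d)
print(decoder_step(Y, Ke, Ve, params).shape)   # (3,)
```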
|
{ |
|
"text": "An L-layer Transformer decoder is obtained by repeated application of L single-layer decoders each with its own parameters and a transformation function F : Q d \u2192 Q d applied to the last vector in the sequence of vectors output by the final decoder. Formally, for 1 \u2264 \u2264 L \u2212 1 and Y 1 = Y we have", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B.2 Vanilla Transformer Architecture", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Y +1 = Dec((K e , V e ), Y ; \u03b8 ), z = F (y L t ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B.2 Vanilla Transformer Architecture", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We use z = TDec L ((K e , V e ), Y ; \u03b8) to denote an L-layer Transformer decoder. Note that while the output of a single-layer decoder is a sequence of vectors, the output of an L-layer Transformer decoder is a single vector.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B.2 Vanilla Transformer Architecture", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The complete Transformer. A Transformer network receives an input sequence X, a seed vector y 0 , and r \u2208 N. For t \u2265 0 its output is a sequence Y = (y 1 , . . . , y r ) defined b\u1ef9", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B.2 Vanilla Transformer Architecture", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "y t+1 = TDec TEnc(X), (y 0 , y 1 , . . . , y t ) .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B.2 Vanilla Transformer Architecture", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We get y t+1 by adding positional encoding: y t+1 =\u1ef9 t+1 + pos(t + 1). We denote the complete Transformer by Trans(X, y 0 ) = Y . The Transformer \"halts\" when y T \u2208 H, where H is a prespecified halting set.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B.2 Vanilla Transformer Architecture", |
|
"sec_num": null |
|
}, |
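The top-level recurrence and the halting condition can be summarized by the following loop. Here tenc, tdec, pos and halting are placeholder stand-ins chosen only to make the sketch runnable; they are not the functions constructed in the paper.

```python
import numpy as np

def run_transformer(X, y0, tenc, tdec, pos, halting, max_steps=100):
    """Top-level loop: y_{t+1} = TDec(TEnc(X), (y_0, ..., y_t)) + pos(t + 1)."""
    Ke_Ve = tenc(X)                       # the encoder is run once
    Y = [y0]
    for t in range(max_steps):
        y_tilde = tdec(Ke_Ve, np.stack(Y))
        y_next = y_tilde + pos(t + 1)
        Y.append(y_next)
        if halting(y_next):               # "halts" when y_{t+1} lands in H
            break
    return np.stack(Y[1:])                # (y_1, ..., y_r)

# placeholder stand-ins, just to make the loop runnable
d = 4
X = np.random.rand(6, d)
out = run_transformer(
    X, np.zeros(d),
    tenc=lambda X: (X, X),                           # (K^e, V^e)
    tdec=lambda kv, Y: Y[-1] * 0.5,                  # dummy decoder
    pos=lambda i: np.full(d, 1.0 / i),
    halting=lambda y: y[0] < 0.2,
)
print(out.shape)
```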
|
{ |
|
"text": "Simulation of RNNs by Transformers. We say that a Transformer simulates an RNN (as defined in Sec. B.1) if on input s \u2208 \u03a3 * , at each step t, the vector y t contains the hidden state h t as a subvector: \u2022] , and halts at the same step as RNN. ", |
|
"cite_spans": [ |
|
{ |
|
"start": 203, |
|
"end": 205, |
|
"text": "\u2022]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B.2 Vanilla Transformer Architecture", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "y t = [h t ,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B.2 Vanilla Transformer Architecture", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "a t = Att(p t , K e , V e ) + p t", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B.2 Vanilla Transformer Architecture", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The result follows from the observation that without the residual connections, a t = Att(p t , K e , V e ), which leads to a t = n i=1 \u03b1 i v e i for some \u03b1 i s such that n i \u03b1 i = 1. Since v e i is produced from the encoder, the vector a t will have no information about its previous hidden state values. Since the previous hidden state information was computed and stored in p t , without the residual connection, the information in a t depends solely on the output of the encoder.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B.2 Vanilla Transformer Architecture", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "One could argue that since the attention weights \u03b1 i s depend on the query vector p t , it could still use it gain the necessary information from the vectors v e i s. However, note that by definition of hard attention, the attention weights \u03b1 i in a t = n i=1 \u03b1 i v e i can either be zero or some nonzero value depending on the attention logits. Since the attention weights \u03b1 i are such that n i \u03b1 i = 1 and all the nonzero weights are equal to each other. Thus given the constraints there are 2 n \u22121 ways to attend over n inputs excluding the case where no input is attended over. Hence, the network without decoder-encoder residual connection with n inputs can have at most 2 n \u22121 distinct a t values. This implies that the model will be unable to perform a task that takes n inputs and has to produce more than 2 n \u2212 1 outputs. Note that, such a limitation will not exist with a residual connection since the vector a t = \u03a3 n i=1 \u03b1 i v e i + p t can take arbitrary number of values depending on its prior computations in p t .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B.2 Vanilla Transformer Architecture", |
|
"sec_num": null |
|
}, |
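The counting argument can be checked directly: without the residual connection, a_t is an unweighted average over the attended subset of encoder values, so enumerating the nonempty subsets enumerates every value a_t can take. A small illustrative sketch (names and data are arbitrary):

```python
import itertools
import numpy as np

# Without the decoder-encoder residual, a_t is an unweighted average of the
# attended encoder values, so it is determined by the *set* of attended
# positions: at most 2^n - 1 possibilities for n encoder outputs.
n, d = 3, 2
Ve = np.random.rand(n, d)                     # encoder value vectors v^e_1 ... v^e_n

possible_a_t = set()
for r in range(1, n + 1):
    for subset in itertools.combinations(range(n), r):
        a_t = Ve[list(subset)].mean(axis=0)   # equal nonzero weights summing to 1
        possible_a_t.add(tuple(np.round(a_t, 12)))

assert len(possible_a_t) <= 2 ** n - 1
print(len(possible_a_t), "<=", 2 ** n - 1)    # 7 <= 7 here
```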
|
{ |
|
"text": "As an example to illustrate the limitation, consider the following simple problem, given a value \u2206, where 0 \u2264 \u2206 \u2264 1, the network must produce the values 0, \u2206, 2\u2206, . . . , k\u2206, where k is the maximum integer such that k\u2206 \u2264 1. If the network receives a single input \u2206, the encoder will produce only one particular output vector and regardless of what the value of the query vector p t is, the vector a t will be constant at every timestep. Since a t is fed to feedforward network which maps it to z t , the output of the decoder will remain the same at every timestep and it cannot produce distinct values. If the input is combined with n \u2212 1 auxiliary symbols (such as # and $), then the network can only produce 2 n \u22121 outputs. Hence, the model will be incapable of performing the task if \u2206 < 1/2 n .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B.2 Vanilla Transformer Architecture", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Thus the model cannot perform the task defined above which RNNs and Vanilla Transformers can easily do with a simple counting mechanism via their recurrent connection.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B.2 Vanilla Transformer Architecture", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For the case of multilayer decoder, consider any L layer decoder model. If the residual connection is removed, the output of decoder-encoder attention block at each layer is a", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B.2 Vanilla Transformer Architecture", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "( ) t = n i=1 \u03b1 ( ) i v e i for 1 \u2264 \u2264 L.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B.2 Vanilla Transformer Architecture", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Observe, that since output of the decoder-encoder attention block in the last (L-th) layer of the decoder is a", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B.2 Vanilla Transformer Architecture", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(L) t = n i=1 \u03b1 (L) i v e i .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B.2 Vanilla Transformer Architecture", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Since the output of the L layer decoder will be a feedforward network over a (L) t , the computation reduces to the single layer decoder case. Hence, similar to the single layer case, if the task requires the network to produce values of a t that come from a set with size at least 2 n , then the network will not be able to perform the task.", |
|
"cite_spans": [ |
|
{ |
|
"start": 77, |
|
"end": 80, |
|
"text": "(L)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B.2 Vanilla Transformer Architecture", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This implies that the model without decoderencoder residual connection is limited in its capability to perform tasks which requires it to make inferences based on previously generated outputs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B.2 Vanilla Transformer Architecture", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "with positional encoding Theorem C.2. RNNs can be simulated by vanilla Transformers and hence the class of vanilla Transformers is Turing-complete.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C.2 Simulation of RNNs by Transformers", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Proof. The construction of the simulating transformer is simple: it uses a single head and both the encoder and decoder have one layer. Moreover, the encoder does very little and most of the action happens in the decoder. The main task for the simulation is to design the input embedding (building on the given base embedding f b ), the feedforward network O(\u2022) and the matrices corresponding to functions", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C.2 Simulation of RNNs by Transformers", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Q(\u2022), K(\u2022), V (\u2022).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C.2 Simulation of RNNs by Transformers", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Input embedding. The input embedding is obtained by summing the symbol and positional encodings which we next describe. These encodings have dimension d = 2d h + d b + 2, where d h is the dimension of the hidden state of the RNN and d b is the dimension of the given encoding f b of the input symbols. We will use the symbol encoding f symb : \u03a3 \u2192 Q d which is essentially the same as f b except that the dimension is now larger:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C.2 Simulation of RNNs by Transformers", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "f symb (s) = [0 d h , f e (s); 0 d h , 0, 0].", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C.2 Simulation of RNNs by Transformers", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The positional encoding pos : N \u2192 Q d is simply", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C.2 Simulation of RNNs by Transformers", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "pos(i) = [0 d h , 0 d b , 0 d h , i, 1].", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C.2 Simulation of RNNs by Transformers", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Together, these define the combined embedding f for a given input sequence", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C.2 Simulation of RNNs by Transformers", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "s 0 s 1 \u2022 \u2022 \u2022 s n \u2208 \u03a3 * by f (s i ) = f symb (s i )+pos(i) = [0 d h , f b (s i ), 0 d h , i, 1].", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C.2 Simulation of RNNs by Transformers", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The vectors v \u2208 Q d used in the computation of our transformer are of the form", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C.2 Simulation of RNNs by Transformers", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "v = [h 1 , s; h 2 , x 1 , x 2 ],", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C.2 Simulation of RNNs by Transformers", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "where", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C.2 Simulation of RNNs by Transformers", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "h 1 , h 2 \u2208 Q d h , s \u2208 Q de , and x 1 , x 2 \u2208 Q.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C.2 Simulation of RNNs by Transformers", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The coordinates corresponding to the h i 's are reserved for computation related to hidden states of the RNN, the coordinates corresponding to s are reserved for base embeddings, and those for x 1 and x 2 are reserved for scalar values related to positional operations. The first two blocks, corresponding to h 1 and s are reserved for computation of the RNN. During the computation of the Transformer, the underlying RNN will get the input st at step t for t = 0, 1, . . ., where recall thatt = min{t, n}. This sequence leads to the RNN getting the embedding of the input sequence s 0 , . . . , s n in the first n + 1 steps followed by the embedding of the symbol $ for the subsequent steps, which is in accordance with the requirements of (Siegelmann and Sontag, 1992). Similar to (P\u00e9rez et al., 2019) we use the following scoring function in the attention mechanism in our construction,", |
|
"cite_spans": [ |
|
{ |
|
"start": 783, |
|
"end": 803, |
|
"text": "(P\u00e9rez et al., 2019)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C.2 Simulation of RNNs by Transformers", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "f att (q i , k j ) = \u2212| q i , k j |", |
|
"eq_num": "(8)" |
|
} |
|
], |
|
"section": "C.2 Simulation of RNNs by Transformers", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Construction of TEnc. As previously mentioned, our transformer encoder has only one layer, and the computation in the encoder is very simple: the attention mechanism is not utilized, only the residual connections are. This is done by setting the matrix for V (\u2022) to the all-zeros matrix, and the feedforward networks to always output 0. The application of appropriately chosen linear transformations for the final K(\u2022) and V (\u2022) give the following lemma about the output of the encoder.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C.2 Simulation of RNNs by Transformers", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Lemma C.3. There exists a single layer encoder denoted by TEnc that takes as input the sequence (x 1 , . . . , x n , $) and generates the tuple (K e , V e ) where K e = (k 1 , . . . , k n ) and V e = (v 1 , . . . , v n ) such that,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C.2 Simulation of RNNs by Transformers", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "k i = [0 h , 0 s ; 0 h , \u22121, i], v i = [0 h , s i ; 0 h , 0, 0].", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C.2 Simulation of RNNs by Transformers", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Construction of TDec. As in the construction of TEnc, our TDec has only one layer. Also like TEnc, the decoder-decoder attention block just computes the identity: we set V (1) (\u2022) = 0 identically, and use the residual connection so that p t = y t . For t \u2265 0, at the t-th step we denote the input to the decoder as y t =\u1ef9 t + pos(t). Let h 0 = 0 h and\u1ef9 0 = 0. We will show by induction that at the t-th timestep we have", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C.2 Simulation of RNNs by Transformers", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "y t = [h t , 0 s ; 0 h , t + 1, 1].", |
|
"eq_num": "(9)" |
|
} |
|
], |
|
"section": "C.2 Simulation of RNNs by Transformers", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "By construction, this is true for t = 0:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C.2 Simulation of RNNs by Transformers", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "y 0 = [0 h , 0 s ; 0 h , 1, 1].", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C.2 Simulation of RNNs by Transformers", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Assuming that it holds for t, we show it for t + 1. By Lemma C.5", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C.2 Simulation of RNNs by Transformers", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Att(p t , K e , V e ) = [0 h , v t+1 ; 0 h , 0, 0]. (10)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C.2 Simulation of RNNs by Transformers", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Lemma C.5 basically shows how we retrieve the input s t+1 at the relevant step for further computation in the decoder. It follows that", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C.2 Simulation of RNNs by Transformers", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "a t = Att(p t , K e , V e ) + p t = [h t , s t+1 , 0 h , t + 1, 1].", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C.2 Simulation of RNNs by Transformers", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In the final block of the decoder, the computation for RNN takes place:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C.2 Simulation of RNNs by Transformers", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Lemma C.4. There exists a function O(\u2022) defined by feed-forward network such that,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C.2 Simulation of RNNs by Transformers", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "O(a t ) = [(h t+1 \u2212 h t ), \u2212s t+1 , 0 h , \u2212(t + 1), \u22121],", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C.2 Simulation of RNNs by Transformers", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "where W h , W x and b denote the parameters of the RNN under consideration.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C.2 Simulation of RNNs by Transformers", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "z t = O(a t ) + a t = [h t+1 , 0 s ; 0 h , 0, 0].", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "This leads to", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We choose the function F for our decoder to be the identity function, therefore\u1ef9 t+1 = [h t+1 , 0 s ; 0 h , 0, 0], which means y t+1 =\u1ef9 t+1 + pos(i + 1) = [h t+1 , 0 s ; 0 h , t + 2, 1], proving our induction hypothesis.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "This leads to", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Proof of Lemma C.3. We construct a single-layer encoder achieving the desired K e and V e . We make use of the residual connections and via trivial selfattention we get that", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C.3 Technical Lemmas", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "z i = x i . More specifically for i \u2208 [n] we have V (1) (x i ) = 0, a i = 0 + x i , O(a i ) = 0, z i = 0 + a i = x i . V", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C.3 Technical Lemmas", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(1) (x i ) = 0 can be achieved by setting the weight matrix as the all-0 matrix. Recall that x i is defined as", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C.3 Technical Lemmas", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "x i = [ 0 h , s i , 0 h , i, 1 ].", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C.3 Technical Lemmas", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We then apply linear transformations in", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C.3 Technical Lemmas", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "K(z i ) = z i W k and V (z i ) = z i W v , where W T k = \uf8ee \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8f0 0 0 \u2022 \u2022 \u2022 0 0 . . . . . . . . . . . . 0 0 \u2022 \u2022 \u2022 0 0 0 0 \u2022 \u2022 \u2022 0 1 0 0 \u2022 \u2022 \u2022 \u22121 0 \uf8f9 \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fb ,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C.3 Technical Lemmas", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "and W k \u2208 Q d\u00d7d , and similarly one can obtain v i by setting the submatrix of W v \u2208 Q d\u00d7d formed by the first d \u2212 2 rows and columns to the identity matrix, and the rest of the entries to zeros.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C.3 Technical Lemmas", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Lemma C.5. Let q t \u2208 Q d be a query vector such that q = [\u2022, . . . , \u2022, t + 1, 1] where t \u2208 N and '\u2022' denotes an arbitrary value. Then we have", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C.3 Technical Lemmas", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Att(q t , K e , V e ) = [0 h , s t+1 , 0 h , 0, 0]. (11)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C.3 Technical Lemmas", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Proof. Recall that p t = y t = [h t , 0, . . . , 0, t + 1, 1] and k i = [0, 0, . . . , 0, \u22121, i] and hence", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C.3 Technical Lemmas", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "p t , k i = i \u2212 (t + 1), f att (p t , k i ) = \u2212|i \u2212 (t + 1)|.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C.3 Technical Lemmas", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Thus, for i \u2208 [n], the scoring functionf att (p t , k i ) has the maximum value 0 at index i = t + 1 if t < n; for t \u2265 n, the maximum value t + 1 \u2212 n is achieved for i = n. Therefore", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C.3 Technical Lemmas", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Att(p t , K e , V e ) = s t+1 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C.3 Technical Lemmas", |
|
"sec_num": null |
|
}, |
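A quick numerical illustration of this retrieval mechanism (with made-up dimensions and names): keys of the form [0, . . . , 0, −1, i] and a query ending in (t + 1, 1) give scores −|i − (t + 1)|, so hard attention picks position min(t + 1, n) regardless of the arbitrary h_t part of the query, exactly as in the proof above.

```python
import numpy as np

n, d_rest = 6, 3                          # n input positions, filler coordinates

def key(i):
    return np.concatenate([np.zeros(d_rest), [-1.0, float(i)]])

def query(t):
    # the first d_rest coordinates (holding h_t) are arbitrary; keys are zero there
    return np.concatenate([np.random.rand(d_rest), [float(t + 1), 1.0]])

Keys = np.stack([key(i) for i in range(1, n + 1)])

for t in [0, 2, 10]:                      # t >= n exercises the "attend to position n" case
    scores = -np.abs(Keys @ query(t))     # f_att(p_t, k_i) = -|i - (t + 1)|
    print(t, 1 + int(np.argmax(scores)))  # attends to position min(t + 1, n)
```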
|
{ |
|
"text": "Proof of Lemma C.4. Recall that", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C.3 Technical Lemmas", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "a t = [ h t , s t+1 , 0 h , t + 1, 1 ] Network O(a t ) is of the form O(a t ) = W 2 \u03c3(W 1 a t + b 1 ),", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C.3 Technical Lemmas", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "where W i \u2208 Q d\u00d7d and b \u2208 Q d and", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C.3 Technical Lemmas", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "W 1 = d h d e d h 2 d h d e d h 2 \uf8ee \uf8ef \uf8ef \uf8ef \uf8f0 W h W x 0 0 0 I 0 0 I 0 0 0 0 0 0 I \uf8f9 \uf8fa \uf8fa \uf8fa \uf8fb and b 1 = [b h , 0 s , 0 h , 0, 0]. Hence \u03c3(W 1 a t + b 1 ) = [\u03c3(W h h t + W x s t+1 + b), s t+1 , h t , t + 1, 1]", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C.3 Technical Lemmas", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Next we define W 2 by", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C.3 Technical Lemmas", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "W 2 = d h d e d h 2 d h d e d h 2 \uf8ee \uf8ef \uf8ef \uf8ef \uf8f0 I 0 \u2212I 0 0 \u2212I 0 0 0 0 0 0 0 0 0 \u2212I \uf8f9 \uf8fa \uf8fa \uf8fa \uf8fb . This leads to O(a t ) = W 2 \u03c3(W 1 a t + b 1 ) = [\u03c3(W h h t + W x s t+1 + b) \u2212 h t , \u2212s t+1 , 0 h , \u2212(t + 1), \u22121],", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C.3 Technical Lemmas", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "which is what we wanted to prove.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C.3 Technical Lemmas", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "There are a few changes in the architecture of the Transformer to obtain directional Transformer. The first change is that there are no positional encodings and thus the input vector x i only consists of s i . Similarly, there are no positional encodings in the decoder inputs and hence y t =\u1ef9 t . The vector\u1ef9 is the output representation produced at the previous step and the first input vector to the decoder\u1ef9 0 = 0. Instead of using positional encodings, we apply positional masking to the inputs and outputs of the encoder.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "D Completeness of Directional Transformers", |
|
"sec_num": null |
|
}, |
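A sketch of directional (masked) hard self-attention with multiplicative scoring: position i attends only over positions j ≤ i, and no positional encodings are added to the input. The function name and parameterization are illustrative, not the construction from the proof.

```python
import numpy as np

def masked_hard_att(X, W_Q, W_K, W_V):
    """Directional self-attention: position i attends only over j <= i."""
    Q, K, V = X @ W_Q, X @ W_K, X @ W_V
    out = []
    for i in range(len(X)):
        scores = K[: i + 1] @ Q[i]                    # multiplicative scoring <q, k>
        w = (scores == scores.max()).astype(float)
        out.append((w / w.sum()) @ V[: i + 1])
    return np.stack(out)

# toy usage: no positional encodings are added to X; order information can only
# enter through the prefix restriction j <= i
d, n = 3, 5
X = np.random.rand(n, d)
Z = masked_hard_att(X, np.eye(d), np.eye(d), np.eye(d))
print(Z.shape)   # (5, 3)
```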
|
{ |
|
"text": "Thus the encoder-encoder attention in (5) is redefined as", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "D Completeness of Directional Transformers", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "a ( +1) i = Att(Q(z ( ) i ), K(Z ( ) i ), V (Z ( ) i )) + z ( ) i ,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "D Completeness of Directional Transformers", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "where Z (0) = X. Similarly the decoder-encoder attention in (7) is redefined by", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "D Completeness of Directional Transformers", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "a ( ) t = Att(p ( ) t , K e t , V e t ) + p ( ) t ,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "D Completeness of Directional Transformers", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "where in a ( ) t denotes the layer and we use v ( ,b) to denote any intermediate vector being used in -th layer and b-th block in cases where the same symbol is used in multiple blocks in the same layer.", |
|
"cite_spans": [ |
|
{ |
|
"start": 48, |
|
"end": 53, |
|
"text": "( ,b)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "D Completeness of Directional Transformers", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Theorem D.1. RNNs can be simulated by vanilla Transformers and hence the class of vanilla Transformers is Turing-complete.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "D Completeness of Directional Transformers", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Proof. The Transformer network in this case will be more complex than the construction for the vanilla case. The encoder remains very similar, but the decoder is different and has two layers.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "D Completeness of Directional Transformers", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Embedding. We will construct our Transformer to simulate an RNN of the form given in the definition with the recurrence ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "D Completeness of Directional Transformers", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "h t = g(W h h t\u22121 + W x x t + b).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "D Completeness of Directional Transformers", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "v = [h 1 , h 2 , s 1 , s 1 , x 1 , s 2 s 3 , s 4 ]", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "D Completeness of Directional Transformers", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "where h i \u2208 Q d h , s \u2208 Q de and x i \u2208 Q. These blocks reserved for different types of objects. The where W i \u2208 Q d\u00d7d and b 1 \u2208 Q d . Define W 1 as", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "D Completeness of Directional Transformers", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "2d h d e d \u03c9 1 d \u03c9 d \u03c9 d \u03c9 2d h d e d \u03c9 \u2212 1 1 1 d \u03c9 d \u03c9 d \u03c9 \uf8ee \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8f0", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "D Completeness of Directional Transformers", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "0 0 0 0 0 0 0 0 I 0 0 0 0 0 0 0 0 0 0 I \u2212I 0 0 0 1 2 0 0 0 0 0 0 1 2 0 0 0 0 0 I 0 0 0 0 0 0 0 0 0 I 0 0 0 0 0 0 0 I \uf8f9 \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fb and b 1 = 0, then", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "D Completeness of Directional Transformers", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u03c3(W 1 a (1) t + b 1 ) = [0 h , 0 h , s 0:t , \u2206 t , 1 2 t+1 , \u03c9 t , \u03c9 t\u22121 , \u03c9 t\u22121 ]", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "D Completeness of Directional Transformers", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We define W 2 as Lemma D.4. There exists a function O (2) (.) defined by feed-forward network such that, for t \u2265 0,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "D Completeness of Directional Transformers", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "2d h d e d \u03c9\u22121 2 d \u03c9 d \u03c9 d \u03c9 2d h d e d \u03c9 \u2212 1 1 1 d \u03c9 d \u03c9 d \u03c9 \uf8ee \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8f0", |
|
"eq_num": "0" |
|
} |
|
], |
|
"section": "D Completeness of Directional Transformers", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "O (2) (a (2) t ) = [\u03c3(W h h t\u22121 + W x st + b) \u2212 h t\u22121 , 0 h , \u2212st, \u2212\u03b4 t , 0, 0 \u03c9 , \u2212 s t , 0 \u03c9 ]", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "D Completeness of Directional Transformers", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "where W h , W x and b denote the parameters of the RNN under consideration.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "D Completeness of Directional Transformers", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Proof. Proof is very similar to proof of lemma C.4.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "D Completeness of Directional Transformers", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In this section, we describe the specifics of our experimental setup. This includes details about the dataset, models, setup and some sample outputs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "E Details of Experiments", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The models under consideration are the vanilla Transformer, the one without decoder-encoder residual connection and the one without decoderdecoder residual connection. For the synthetic tasks, we implement a single layer encoder-decoder network with only a single attention head in each block. Our implementation of the Transformer is adapted from the implementation of (Rush, 2018) . Table 4 provides some illustrative sample outputs of the models for the copy task. For the machine translation task, we use Open-NMT (Klein et al., 2017) for our implementation. For preprocessing the German-English dataset we used the script from fairseq. The dataset contains about 153k training sentences, 7k development sentences and 7k test sentences. The hyperparameters to train the vanilla Transformer were obtained from fairseq's guidelines. We tuned the parameters on the validation set for the two baseline model. To preprocess the English-Vietnamese dataset, we follow Luong and Manning (2015) . The dataset contains about 133k training sentences. We use the tst2012 dataset containing 1.5k sentences for validation and tst2013 containing 1.3k sentences as test set. We use noam optimizer in all our experiments. While tuning the network, we vary the number of layer from 1 to 4, the learning rate, the number of heads, the warmup steps, embedding size and feedforward embedding size.", |
|
"cite_spans": [ |
|
{ |
|
"start": 370, |
|
"end": 382, |
|
"text": "(Rush, 2018)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 518, |
|
"end": 538, |
|
"text": "(Klein et al., 2017)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 965, |
|
"end": 989, |
|
"text": "Luong and Manning (2015)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 385, |
|
"end": 392, |
|
"text": "Table 4", |
|
"ref_id": "TABREF9" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "E.1 Impact of Residual Connections", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Our implementation for directional transformer is based on but we use only unidirectional masking as opposed to bidirectional used in their setup. While tuning the models, we vary the layers from 1 to 4, the learning rate, warmup steps and the number of heads.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "E.2 Masking and Encodings", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We have made our source code available at https://github.com/satwik77/Transformer-Computation-Analysis.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Hahn (2020) andYun et al. (2020) study encoder-only seqto-seq models with fixed length outputs in which the computation halts as soon as the last symbol of the input is processed. Our work is about the full Transformer (encoder and decoder) which is a seq-to-seq model with variable length sequence output in which the decoder starts operating sequentially after the encoder.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Note that it is closer to practice than the scoring function \u2212| q, k | used inP\u00e9rez et al. (2019) and Theorem 4.1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "t , K \u0113 t , V \u0113 t ) =t j=0\u03b1 (1,2) t,j v e j ,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "t+1 is non-zero. Now the claim follows immediately by the definition of hardmax.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We thank the anonymous reviewers for their constructive comments and suggestions. We would also like to thank our colleagues at Microsoft Research and Michael Hahn for their valuable feedback and helpful discussions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "vectors h i s are reserved for computation related to hidden states of RNNs, s i s are reserved for input embeddings and x i s are reserved for scalar values related to positional operations.Given an input sequence s 0 s 1 s 2 \u2022 \u2022 \u2022 s n \u2208 \u03a3 * where s 0 = # and s n = $, we use an embedding function f :Unlike (P\u00e9rez et al., 2019) , we use the dot product as our scoring function as used in Vaswani et al. (2017) in the attention mechanism in our construction, f att (q i , k j ) = q i , k j .For the computation of the Transformer, we also use a vector sequence in Q |\u03a3| defined bywhere 0 \u2264 t \u2264 n. The vector \u03c9 t = (\u03c9 t,1 , . . . , \u03c9 t,|\u03a3| ) contains the proportion of each input symbol till step t for 0 \u2264 t \u2264 n. Set \u03c9 \u22121 = 0. From the defintion of \u03c9 t , it follows that at any step 1 \u2264 k \u2264 |\u03a3| we havewhere \u03c6 t,k denotes the number of times the k-th symbol \u03b2 k in \u03a3 has appeared till the t-th step. Note that \u03c9 t,0 = 1 t+1 since the first coordinate corresponds to the proportion of the start symbol # which appears only once at t = 0. Similarly, \u03c9 t,|\u03a3| = 0 for 0 \u2264 t < n and \u03c9 t,|\u03a3| = 1/(t + 1) for t \u2265 n, since the end symbol $ doesn't appear till the end of the input and it appears only once at t = n.We define two more sequences of vectors in Q |\u03a3| for 0 \u2264 t \u2264 n:Here \u2206 t denotes the difference in the proportion of symbols between the t-th and (t \u2212 1)-th steps, with the applicatin of sigmoid activation. In vector \u03b4 t , the last coordinate of \u2206 t has been replaced with 1/2 t+1 . The last coordinate in \u03c9 t indicates the proportion of the terminal symbol $ and hence the last value in \u2206 t denotes the change in proportion of $.We set the last coordinate in \u03b4 t to an exponentially decreasing sequence so that after n steps we always have a nonzero score for the terminal symbol and it is taken as input in the underlying RNN. Different and perhaps simpler choices for the last coordinate of \u03b4 t may be possible. Note that 0 \u2264 \u2206 t,k \u2264 1 and 0 \u2264 \u03b4 t,k \u2264 1 for 0 \u2264 t \u2264 n and 1 \u2264 k \u2264 |\u03a3|.Construction of TEnc. The input to the network DTrans M is the sequence (s 0 , s 1 , . . . , s n\u22121 , s n ) where s 0 = # and s n = $.Our encoder is a simple single layer network such that TEnc(x 0 , x 1 , . . . , x n ) = (K e , V e ) where K e = (k e 0 , . . . , k e n ) and V e = (v e 0 , . . . , v e n ) such that,Similar to our construction of the encoder for vanilla transformer (Lemma C.3), the above K e and V e can be obtained by making the output of Att(\u2022) = 0 by choosing the V (\u2022) to always evaluate to 0 and similarly for O(\u2022), and using residual connections. Then one can produce K e and V e via simple linear transformations using K(\u2022) and V (\u2022).Construction of TDec. At the t-th step we denote the input to the decoder as y t =\u1ef9 t , where 0 \u2264 t \u2264 r, where r is the step where the decoder halts. Let h \u22121 = 0 h and h 0 = 0 h . We will prove by induction on t that for 0 \u2264 t \u2264 r we haveThis is true for t = 0 by the choice of seed vector:Assuming the truth of (14) for t, we show it for t + 1.Layer 1. Similar to the construction in Lemma C.3, in the decoder-decoder attention block we set V (1) (\u2022) = 0 d and use the residual connections to set p(1) t = y t . 
At the t-th step in the decoder-encoder attention block of layer 1 we havewhereIn Lemma D.2 we construct feed-forward networkLayer 2. In the first block of layer 2, we set the value transformation function to identically zero similar to Lemma C.3, i.e. V (2) (\u2022) = 0 which leads to the output of Att(\u2022) to be 0 and then using the residual connection we get pIn the final block of the decoder in the second layer, the computation for RNN takes place. In Lemma D.4 below we construct the feed-forward network O (2) (\u2022) such thatand hencewhich givesproving the induction hypothesis (14) for t + 1, and completing the simulation of RNN.", |
|
"cite_spans": [ |
|
{ |
|
"start": 309, |
|
"end": 329, |
|
"text": "(P\u00e9rez et al., 2019)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 390, |
|
"end": 411, |
|
"text": "Vaswani et al. (2017)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "annex", |
|
"sec_num": null |
|
}, |
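The constructions above repeatedly use the same trick: choosing the value transformation V(·) (and, where needed, O(·)) to be identically zero makes the attention output Att(·) vanish, so the residual connection passes the block's input through unchanged. The sketch below, with illustrative names and ordinary softmax attention in place of the paper's scoring scheme, checks this numerically.

```python
# Minimal sketch (illustrative, not the paper's code) of the zero-value trick:
# if the value transformation V(.) is identically zero, the attention output is the
# zero vector, so the residual connection makes the block act as the identity.

import numpy as np

def attention(queries, keys, values):
    """Dot-product attention with softmax over key positions."""
    scores = queries @ keys.T
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ values

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))            # block inputs, one row per position
V_zero = np.zeros((8, 8))              # value transformation chosen to be identically zero

att_out = attention(x, x, x @ V_zero)  # attention output is zero at every position
block_out = x + att_out                # residual connection

assert np.allclose(block_out, x)       # the block acts as the identity
```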
|
{ |
|
"text": "Lemma D.2. There exists a function O (1) (.) defined by feed-forward network such that,Proof. We define the feed-forward network O (1) (.) such thatWe define the feed-forward network O(a t ) as follows,which is what we wanted to prove.where t \u2265 0 and '\u2022' denotes an arbitrary value. Then we haveProof. Lett , k \u0113 t be the vector of normalized attention scores in the decoder-encoder attention block of layer 2 at time t. ThenWe claim that Claim 1. For t \u2265 0 we havewhere \u03bb t is a normalization factor given by \u03bb t = n\u22121 j=0 I(s j = s t ).We now prove the lemma assuming the claim above. Denote the L.H.S. in (16) by \u03b3 t . Note that if s j = s t , then v e j = \u03b3 t . Now we hav\u0113completing the proof of the lemma modulo the proof of the claim, which we prove next.Proof. (of Claim 1) For 0 < t \u2264 n, the vector \u03c9 t \u2212 \u03c9 t\u22121 has the formThe last inequality used our assumption that s 0 = # and that # does not occur at any later time and therefore \u03c6 t\u22121,j < t. On the other hand, if s t = \u03b2 k , then\u2264 0.This leads to,In words, the change in the proportion of a symbol is positive from step t \u2212 1 to t if and only if it is the input symbol at the t-th step. For 0 \u2264 t \u2264 n and 1 \u2264 k \u2264 |\u03a3|, this leads tot which comes from (15), and k e j is defined in (13). We reproduce these for convenience:It now follows that for 0 < t < n, if 0 \u2264 j \u2264 t is such that s j = s t , thent , k e j = \u03b4 t , s j = \u03b4 t,i = 0.And for 0 < t < n, if 0 \u2264 j \u2264 t is such thatt , k e j = \u03b4 t , s j = \u03b4 t,i= t \u2212 \u03c6 t\u22121,j t(t + 1) \u2265 1 t(t + 1).Thus, for 0 \u2264 t < n, in the vector p(2) t , k e 0 , . . . , pt , k e t , the largest coordinates are the ones indexed by j with s j = s t and they all equal t\u2212\u03c6 t\u22121,i t(t+1) . All other coordinates are 0. For t \u2265 n, only the last coordinate p(2) t , k e n = \u03b4 t , $ = 1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "D.1 Technical Lemmas", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Recurrent neural networks as weighted language recognizers", |
|
"authors": [ |
|
{ |
|
"first": "Yining", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sorcha", |
|
"middle": [], |
|
"last": "Gilroy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andreas", |
|
"middle": [], |
|
"last": "Maletti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "May", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "2261--2271", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N18-1205" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yining Chen, Sorcha Gilroy, Andreas Maletti, Jonathan May, and Kevin Knight. 2018. Recurrent neural networks as weighted language recognizers. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2261-2271, New Orleans, Louisiana. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Transformer-XL: Attentive language models beyond a fixed-length context", |
|
"authors": [ |
|
{ |
|
"first": "Zihang", |
|
"middle": [], |
|
"last": "Dai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhilin", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yiming", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jaime", |
|
"middle": [], |
|
"last": "Carbonell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quoc", |
|
"middle": [], |
|
"last": "Le", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruslan", |
|
"middle": [], |
|
"last": "Salakhutdinov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2978--2988", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P19-1285" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Car- bonell, Quoc Le, and Ruslan Salakhutdinov. 2019. Transformer-XL: Attentive language models beyond a fixed-length context. In Proceedings of the 57th Annual Meeting of the Association for Computa- tional Linguistics, pages 2978-2988, Florence, Italy. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "4171--4186", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1423" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Theoretical limitations of selfattention in neural sequence models", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Hahn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "8", |
|
"issue": "", |
|
"pages": "156--171", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/tacl_a_00306" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael Hahn. 2020. Theoretical limitations of self- attention in neural sequence models. Transactions of the Association for Computational Linguistics, 8:156-171.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Infinite attention: Nngp and ntk for deep attention networks", |
|
"authors": [ |
|
{ |
|
"first": "Jiri", |
|
"middle": [], |
|
"last": "Hron", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yasaman", |
|
"middle": [], |
|
"last": "Bahri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jascha", |
|
"middle": [], |
|
"last": "Sohl-Dickstein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roman", |
|
"middle": [], |
|
"last": "Novak", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2006.10540" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jiri Hron, Yasaman Bahri, Jascha Sohl-Dickstein, and Roman Novak. 2020. Infinite attention: Nngp and ntk for deep attention networks. arXiv preprint arXiv:2006.10540.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "An improved relative self-attention mechanism for transformer with application to music generation", |
|
"authors": [ |
|
{ |
|
"first": "Cheng-Zhi Anna", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Vaswani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jakob", |
|
"middle": [], |
|
"last": "Uszkoreit", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noam", |
|
"middle": [], |
|
"last": "Shazeer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Curtis", |
|
"middle": [], |
|
"last": "Hawthorne", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Dai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthew", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Hoffman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Douglas", |
|
"middle": [], |
|
"last": "Eck", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "ArXiv", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Cheng-Zhi Anna Huang, Ashish Vaswani, Jakob Uszkoreit, Noam Shazeer, Curtis Hawthorne, An- drew M. Dai, Matthew D. Hoffman, and Douglas Eck. 2018. An improved relative self-attention mechanism for transformer with application to mu- sic generation. ArXiv, abs/1809.04281.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "The lipschitz constant of self-attention", |
|
"authors": [ |
|
{ |
|
"first": "Hyunjik", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "George", |
|
"middle": [], |
|
"last": "Papamakarios", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andriy", |
|
"middle": [], |
|
"last": "Mnih", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2006.04710" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hyunjik Kim, George Papamakarios, and Andriy Mnih. 2020. The lipschitz constant of self-attention. arXiv preprint arXiv:2006.04710.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "OpenNMT: Opensource toolkit for neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoon", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuntian", |
|
"middle": [], |
|
"last": "Deng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jean", |
|
"middle": [], |
|
"last": "Senellart", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Rush", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of ACL 2017, System Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "67--72", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senel- lart, and Alexander Rush. 2017. OpenNMT: Open- source toolkit for neural machine translation. In Proceedings of ACL 2017, System Demonstrations, pages 67-72, Vancouver, Canada. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "A field guide to dynamical recurrent networks", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "John", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stefan", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Kolen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Kremer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John F Kolen and Stefan C Kremer. 2001. A field guide to dynamical recurrent networks. John Wiley & Sons.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "On the computational power of rnns", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Samuel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert C", |
|
"middle": [], |
|
"last": "Korsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Berwick", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1906.06349" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Samuel A Korsky and Robert C Berwick. 2019. On the computational power of rnns. arXiv preprint arXiv:1906.06349.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Or Sharir, Hofit Bata, and Amnon Shashua. 2020. Limits to depth efficiencies of self-attention", |
|
"authors": [ |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Levine", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noam", |
|
"middle": [], |
|
"last": "Wies", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2006.12467" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yoav Levine, Noam Wies, Or Sharir, Hofit Bata, and Amnon Shashua. 2020. Limits to depth efficiencies of self-attention. arXiv preprint arXiv:2006.12467.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Stanford neural machine translation systems for spoken language domains", |
|
"authors": [ |
|
{ |
|
"first": "Minh-Thang", |
|
"middle": [], |
|
"last": "Luong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher D", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the International Workshop on Spoken Language Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "76--79", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Minh-Thang Luong and Christopher D Manning. 2015. Stanford neural machine translation systems for spo- ken language domains. In Proceedings of the In- ternational Workshop on Spoken Language Transla- tion, pages 76-79.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "A logical calculus of the ideas immanent in nervous activity", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Warren", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Walter", |
|
"middle": [], |
|
"last": "Mcculloch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Pitts", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1943, |
|
"venue": "The bulletin of mathematical biophysics", |
|
"volume": "5", |
|
"issue": "4", |
|
"pages": "115--133", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Warren S McCulloch and Walter Pitts. 1943. A logical calculus of the ideas immanent in nervous activity. The bulletin of mathematical biophysics, 5(4):115- 133.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "A formal hierarchy of RNN architectures", |
|
"authors": [ |
|
{ |
|
"first": "William", |
|
"middle": [], |
|
"last": "Merrill", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gail", |
|
"middle": [], |
|
"last": "Weiss", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Goldberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roy", |
|
"middle": [], |
|
"last": "Schwartz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eran", |
|
"middle": [], |
|
"last": "Yahav", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "443--459", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "William Merrill, Gail Weiss, Yoav Goldberg, Roy Schwartz, Noah A. Smith, and Eran Yahav. 2020. A formal hierarchy of RNN architectures. In Proceed- ings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 443-459, On- line. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Scaling neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Myle", |
|
"middle": [], |
|
"last": "Ott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sergey", |
|
"middle": [], |
|
"last": "Edunov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Grangier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Auli", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Third Conference on Machine Translation: Research Papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--9", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W18-6301" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. 2018. Scaling neural machine trans- lation. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 1-9, Brussels, Belgium. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "On the turing completeness of modern neural network architectures", |
|
"authors": [ |
|
{ |
|
"first": "Jorge", |
|
"middle": [], |
|
"last": "P\u00e9rez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Javier", |
|
"middle": [], |
|
"last": "Marinkovi\u0107", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pablo", |
|
"middle": [], |
|
"last": "Barcel\u00f3", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jorge P\u00e9rez, Javier Marinkovi\u0107, and Pablo Barcel\u00f3. 2019. On the turing completeness of modern neural network architectures. In International Conference on Learning Representations.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Improving language understanding by generative pre-training", |
|
"authors": [ |
|
{ |
|
"first": "Alec", |
|
"middle": [], |
|
"last": "Radford", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karthik", |
|
"middle": [], |
|
"last": "Narasimhan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tim", |
|
"middle": [], |
|
"last": "Salimans", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. URL https://s3-us-west-2. amazonaws. com/openai- assets/researchcovers/languageunsupervised/language understanding paper. pdf.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "The annotated transformer", |
|
"authors": [ |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Rush", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of Workshop for NLP Open Source Software (NLP-OSS)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "52--60", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W18-2509" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexander Rush. 2018. The annotated transformer. In Proceedings of Workshop for NLP Open Source Software (NLP-OSS), pages 52-60, Melbourne, Aus- tralia. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Evaluating the ability of LSTMs to learn context-free grammars", |
|
"authors": [ |
|
{ |
|
"first": "Luzi", |
|
"middle": [], |
|
"last": "Sennhauser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Berwick", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "115--124", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W18-5414" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Luzi Sennhauser and Robert Berwick. 2018. Evaluat- ing the ability of LSTMs to learn context-free gram- mars. In Proceedings of the 2018 EMNLP Work- shop BlackboxNLP: Analyzing and Interpreting Neu- ral Networks for NLP, pages 115-124, Brussels, Bel- gium. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Self-attention with relative position representations", |
|
"authors": [ |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Shaw", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jakob", |
|
"middle": [], |
|
"last": "Uszkoreit", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Vaswani", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "464--468", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N18-2074" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018. Self-attention with relative position represen- tations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 2 (Short Papers), pages 464-468, New Orleans, Louisiana. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Disan: Directional self-attention network for rnn/cnn-free language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Tao", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tianyi", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guodong", |
|
"middle": [], |
|
"last": "Long", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jing", |
|
"middle": [], |
|
"last": "Jiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shirui", |
|
"middle": [], |
|
"last": "Pan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chengqi", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Thirty-Second AAAI Conference on Artificial Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, Shirui Pan, and Chengqi Zhang. 2018. Disan: Di- rectional self-attention network for rnn/cnn-free lan- guage understanding. In Thirty-Second AAAI Con- ference on Artificial Intelligence.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Neural networks and analog computation: beyond the Turing limit", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Hava", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Siegelmann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hava T Siegelmann. 2012. Neural networks and ana- log computation: beyond the Turing limit. Springer Science & Business Media.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "On the computational power of neural nets", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Hava", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eduardo", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Siegelmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Sontag", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Proceedings of the fifth annual workshop on Computational learning theory", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "440--449", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hava T Siegelmann and Eduardo D Sontag. 1992. On the computational power of neural nets. In Proceed- ings of the fifth annual workshop on Computational learning theory, pages 440-449. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Closing brackets with recurrent neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Natalia", |
|
"middle": [], |
|
"last": "Skachkova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Trost", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dietrich", |
|
"middle": [], |
|
"last": "Klakow", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "232--239", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W18-5425" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Natalia Skachkova, Thomas Trost, and Dietrich Klakow. 2018. Closing brackets with recurrent neu- ral networks. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpret- ing Neural Networks for NLP, pages 232-239, Brus- sels, Belgium. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Transformer dissection: An unified understanding for transformer's attention via the lens of kernel", |
|
"authors": [ |
|
{ |
|
"first": "Yao-Hung Hubert", |
|
"middle": [], |
|
"last": "Tsai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shaojie", |
|
"middle": [], |
|
"last": "Bai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Makoto", |
|
"middle": [], |
|
"last": "Yamada", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Louis-Philippe", |
|
"middle": [], |
|
"last": "Morency", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruslan", |
|
"middle": [], |
|
"last": "Salakhutdinov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4344--4353", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-1443" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yao-Hung Hubert Tsai, Shaojie Bai, Makoto Yamada, Louis-Philippe Morency, and Ruslan Salakhutdinov. 2019. Transformer dissection: An unified under- standing for transformer's attention via the lens of kernel. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 4344-4353, Hong Kong, China. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Attention is all you need", |
|
"authors": [ |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Vaswani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noam", |
|
"middle": [], |
|
"last": "Shazeer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Niki", |
|
"middle": [], |
|
"last": "Parmar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jakob", |
|
"middle": [], |
|
"last": "Uszkoreit", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Llion", |
|
"middle": [], |
|
"last": "Jones", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aidan", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Gomez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "\u0141ukasz", |
|
"middle": [], |
|
"last": "Kaiser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Illia", |
|
"middle": [], |
|
"last": "Polosukhin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Advances in neural information processing systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5998--6008", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Analyzing multi-head self-attention: Specialized heads do the heavy lifting, the rest can be pruned", |
|
"authors": [ |
|
{ |
|
"first": "Elena", |
|
"middle": [], |
|
"last": "Voita", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Talbot", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fedor", |
|
"middle": [], |
|
"last": "Moiseev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rico", |
|
"middle": [], |
|
"last": "Sennrich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ivan", |
|
"middle": [], |
|
"last": "Titov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5797--5808", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P19-1580" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Elena Voita, David Talbot, Fedor Moiseev, Rico Sen- nrich, and Ivan Titov. 2019. Analyzing multi-head self-attention: Specialized heads do the heavy lift- ing, the rest can be pruned. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 5797-5808, Florence, Italy. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "On the practical computational power of finite precision RNNs for language recognition", |
|
"authors": [ |
|
{ |
|
"first": "Gail", |
|
"middle": [], |
|
"last": "Weiss", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Goldberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eran", |
|
"middle": [], |
|
"last": "Yahav", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "740--745", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P18-2117" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gail Weiss, Yoav Goldberg, and Eran Yahav. 2018. On the practical computational power of finite pre- cision RNNs for language recognition. In Proceed- ings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Pa- pers), pages 740-745, Melbourne, Australia. Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Assessing the ability of self-attention networks to learn word order", |
|
"authors": [ |
|
{ |
|
"first": "Baosong", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Longyue", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Derek", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Wong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lidia", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Chao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhaopeng", |
|
"middle": [], |
|
"last": "Tu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3635--3644", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P19-1354" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Baosong Yang, Longyue Wang, Derek F. Wong, Lidia S. Chao, and Zhaopeng Tu. 2019. Assessing the ability of self-attention networks to learn word order. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguis- tics, pages 3635-3644, Florence, Italy. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Are transformers universal approximators of sequence-to-sequence functions", |
|
"authors": [ |
|
{ |
|
"first": "Chulhee", |
|
"middle": [], |
|
"last": "Yun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Srinadh", |
|
"middle": [], |
|
"last": "Bhojanapalli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ankit", |
|
"middle": [], |
|
"last": "Singh Rawat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sashank", |
|
"middle": [], |
|
"last": "Reddi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sanjiv", |
|
"middle": [], |
|
"last": "Kumar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chulhee Yun, Srinadh Bhojanapalli, Ankit Singh Rawat, Sashank Reddi, and Sanjiv Kumar. 2020. Are transformers universal approximators of sequence-to-sequence functions? In International Conference on Learning Representations.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"text": "The class of Transformers with positional encodings is Turing-complete.", |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"uris": null, |
|
"text": "Transformer network with various components highlighted. The components marked red are essential for the Turing-completeness whereas for the pairs of blocks and residual connections marked green, either one of the component is enough. The dashed residual connection is not necessary for Turingcompleteness of the network.", |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF2": { |
|
"uris": null, |
|
"text": "The class of Transformers with positional masking and no explicit positional encodings is Turing-complete.", |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF3": { |
|
"uris": null, |
|
"text": "Proposition 4.3. The class of Transformers without residual connection around the decoderencoder attention block is not Turing-complete.", |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF4": { |
|
"uris": null, |
|
"text": "Performance of the two models on the copy task across varying lengths of test inputs. DiSAN refers to Transformer with only positional masking. SAN refers to vanilla Transformers.", |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF5": { |
|
"uris": null, |
|
"text": "Q d . The parameters \u03b8 specify functions Q(\u2022), K(\u2022), V (\u2022), and O(\u2022), all of type Q d \u2192 Q d . The functions Q(\u2022), K(\u2022), and V (\u2022) are usually linear transformations and this will be the case in our constructions:", |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF6": { |
|
"uris": null, |
|
"text": "(\u2022) and O(\u2022) and is defined by", |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF7": { |
|
"uris": null, |
|
"text": "The vectors used in the Transformer layers are of dimension d = 2d h + d e + 4|\u03a3| + 1. Where d h is the dimension of the hidden state of the RNN and d e is the dimension of the input embedding. All vector v \u2208 Q d used during the computation of the network are of the form", |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"TABREF0": { |
|
"text": "where \u03bb t is a normalization constant and I(\u2022) is the indicator. See Lemma D.3 for more details.At this point, O dec (\u2022) has at its disposal the hidden state h t (coming from z", |
|
"html": null, |
|
"num": null, |
|
"content": "<table><tr><td>(1) t</td><td>via p</td></tr></table>", |
|
"type_str": "table" |
|
}, |
|
"TABREF2": { |
|
"text": "BLEU scores (\u2191) for copy and counting task.", |
|
"html": null, |
|
"num": null, |
|
"content": "<table><tr><td>Please see Section 5 for details</td></tr></table>", |
|
"type_str": "table" |
|
}, |
|
"TABREF4": { |
|
"text": "BLEU scores (\u2191) for translation task. Please see Section 5 for details.", |
|
"html": null, |
|
"num": null, |
|
"content": "<table/>", |
|
"type_str": "table" |
|
}, |
|
"TABREF6": { |
|
"text": "Sample outputs by the models on the counting task. Without the residual connection around Decoder-Encoder block, the model is incapable of predicting more than one distinct output.", |
|
"html": null, |
|
"num": null, |
|
"content": "<table/>", |
|
"type_str": "table" |
|
}, |
|
"TABREF9": { |
|
"text": "Sample outputs by the models on the copy task on length 16. With absolute positional encodings the model overfits on terminal symbol at position 13 and generates sequence of length 12.", |
|
"html": null, |
|
"num": null, |
|
"content": "<table/>", |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |