|
{ |
|
"paper_id": "J15-1010", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T03:03:49.540456Z" |
|
}, |
|
"title": "When the Whole Is Not Greater Than the Combination of Its Parts: A \"Decompositional\" Look at Compositional Distributional Semantics", |
|
"authors": [ |
|
{ |
|
"first": "Fabio", |
|
"middle": [ |
|
"Massimo" |
|
], |
|
"last": "Zanzotto", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Rome \"", |
|
"location": { |
|
"addrLine": "Tor Vergata,\" Viale del Politecnico, 1", |
|
"postCode": "00133", |
|
"settlement": "Rome", |
|
"country": "Italy" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Lorenzo", |
|
"middle": [], |
|
"last": "Ferrone", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Baroni", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Distributional semantics has been extended to phrases and sentences by means of composition operations. We look at how these operations affect similarity measurements, showing that similarity equations of an important class of composition methods can be decomposed into operations performed on the subparts of the input phrases. This establishes a strong link between these models and convolution kernels.", |
|
"pdf_parse": { |
|
"paper_id": "J15-1010", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Distributional semantics has been extended to phrases and sentences by means of composition operations. We look at how these operations affect similarity measurements, showing that similarity equations of an important class of composition methods can be decomposed into operations performed on the subparts of the input phrases. This establishes a strong link between these models and convolution kernels.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Distributional semantics approximates word meanings with vectors tracking cooccurrence in corpora (Turney and Pantel 2010) . Recent work has extended this approach to phrases and sentences through vector composition (Clark 2015) . Resulting compositional distributional semantic models (CDSMs) estimate degrees of semantic similarity (or, more generally, relatedness) between two phrases: A good CDSM might tell us that green bird is closer to parrot than to pigeon, useful for tasks such as paraphrasing.", |
|
"cite_spans": [ |
|
{ |
|
"start": 98, |
|
"end": 122, |
|
"text": "(Turney and Pantel 2010)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 216, |
|
"end": 228, |
|
"text": "(Clark 2015)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "We take a mathematical look 1 at how the composition operations postulated by CDSMs affect similarity measurements involving the vectors they produce for phrases or sentences. We show that, for an important class of composition methods, encompassing at least those based on linear transformations, the similarity equations can be decomposed into operations performed on the subparts of the input phrases, Table 1 Compositional Distributional Semantic Models: a, b, and c are distributional vectors representing the words a, b, and c, respectively; matrices X, Y, and Z are constant across a phrase type, corresponding to syntactic slots; the matrix A and the third-order tensor B B B represent the predicate words a in the first phrase and b in the second phrase, respectively. Coecke, Sadrzadeh, and Clark (2010) and typically factorized into terms that reflect the linguistic structure of the input. This establishes a strong link between CDSMs and convolution kernels (Haussler 1999) , which act in the same way. We thus refer to our claim as the \"Convolution Conjecture.\"", |
|
"cite_spans": [ |
|
{ |
|
"start": 778, |
|
"end": 813, |
|
"text": "Coecke, Sadrzadeh, and Clark (2010)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 971, |
|
"end": 986, |
|
"text": "(Haussler 1999)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 405, |
|
"end": 412, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "We focus on the models in Table 1 . These CDSMs all apply linear methods, and we suspect that linearity is a sufficient (but not necessary) condition to ensure that the Convolution Conjecture holds. We will first illustrate the conjecture for linear methods, and then briefly consider two nonlinear approaches: the dual space model of Turney (2012), for which it does, and a representative of the recent strand of work on neuralnetwork models of composition, for which it does not.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 26, |
|
"end": 33, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "Vectors are represented as small letters with an arrow a and their elements are a i , matrices as capital letters in bold A and their elements are A ij , and third-order or fourth-order tensors as capital letters in the form A A A and their elements are A ijk or A ijkh . The symbol represents the element-wise product and \u2297 is the tensor product. The dot product is a, b and the Frobenius product-that is, the generalization of the dot product to matrices and high-order tensors-is represented as A, B F and A A A, B B B F . The Frobenius product acts on vectors, matrices, and third-order tensors as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mathematical Preliminaries", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "a, b F = i a i b i = a, b A, B F = ij A ij B ij A A A, B B B F = ijk A ijk B ijk", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Mathematical Preliminaries", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "A simple property that relates the dot product between two vectors and the Frobenius product between two general tensors is the following:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mathematical Preliminaries", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "a, b = I, a b T F (2)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mathematical Preliminaries", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "where I is the identity matrix. The dot product of A x and B y can be rewritten as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mathematical Preliminaries", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "A x, B y = A T B, x y T F (3)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mathematical Preliminaries", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "Let A A A and B B B be two third-order tensors and x, y, a, c four vectors. It can be shown that:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mathematical Preliminaries", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "xA A A y, aB B B c = j (A A A \u2297 B B B) j , x \u2297 y \u2297 a \u2297 c F (4)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mathematical Preliminaries", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "where C C C = j (A A A \u2297 B B B) j is a non-standard way to indicate the tensor contraction of the tensor product between two third-order tensors. In this particular tensor contraction, the elements C iknm of the resulting fourth-order tensor C C C are", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mathematical Preliminaries", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "C iknm = j A ijk B njm . The elements D iknm of the tensor D D D = x \u2297 y \u2297 a \u2297 c are D iknm = x i y k a n c m .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mathematical Preliminaries", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "Structured Objects. In line with Haussler (1999) , a structured object x \u2208 X is either a terminal object that cannot be furthermore decomposed, or a non-terminal object that can be decomposed into n subparts. We indicate with x = (x 1 , . . . , x n ) one such decomposition, where the subparts x i \u2208 X are structured objects themselves. The set X is the set of the structured objects and T X \u2286 X is the set of the terminal objects. A structured object x can be anything according to the representational needs. Here, x is a representation of a text fragment, and so it can be a sequence of words, a sequence of words along with their part of speech, a tree structure, and so on. The set R(x) is the set of decompositions of x relevant to define a specific CDSM. Note that a given decomposition of a structured object x does not need to contain all the subparts of the original object. For example, let us consider the phrase x = tall boy. We can then define R(x) = {(tall, boy), (tall), (boy)}. This set contains the three possible decompositions of the phrase: ( tall", |
|
"cite_spans": [ |
|
{ |
|
"start": 33, |
|
"end": 48, |
|
"text": "Haussler (1999)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Formalizing the Convolution Conjecture", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "x 1 , boy x 2 ), ( tall x 1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Formalizing the Convolution Conjecture", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "), and ( boy", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Formalizing the Convolution Conjecture", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "x 1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Formalizing the Convolution Conjecture", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": ").", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Formalizing the Convolution Conjecture", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "Recursive formulation of CDSM. A CDSM can be viewed as a function f that acts recursively on a structured object x. If x is a non-terminal object", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Formalizing the Convolution Conjecture", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "f (x) = x\u2208R(x) \u03b3( f (x 1 ), f (x 2 ), . . . , f (x n ))", |
|
"eq_num": "(5)" |
|
} |
|
], |
|
"section": "Formalizing the Convolution Conjecture", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "where R(x) is the set of relevant decompositions, is a repeated operation on this set, \u03b3 is a function defined on f (x i ) where x i are the subparts of a decomposition of x. If x is a terminal object, f (x) is directly mapped to a tensor. The function f may operate differently on different kinds of structured objects, with tensor degree varying accordingly. The set R(x) and the functions f , \u03b3, and depend on the specific CDSM, and the same CDSM might be susceptible to alternative analyses satisfying the form in Equation (5). As an example, under Additive, x is a sequence of words and f is", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Formalizing the Convolution Conjecture", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "f (x) = \u23a7 \u23a8 \u23a9 y\u2208R(x) f (y) if x / \u2208 T X x if x \u2208 T X", |
|
"eq_num": "(6)" |
|
} |
|
], |
|
"section": "Formalizing the Convolution Conjecture", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "where", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Formalizing the Convolution Conjecture", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "R((w 1 , . . . , w n )) = {(w 1 ), . . . , (w n )}.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Formalizing the Convolution Conjecture", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "The repeated operation corresponds to summing and \u03b3 is identity. For Multiplicative we have", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Formalizing the Convolution Conjecture", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "f (x) = \u23a7 \u23a8 \u23a9 y\u2208R(x) f (y) if x / \u2208 T X x if x \u2208 T X (7)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Formalizing the Convolution Conjecture", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "where R(x) = {(w 1 , . . . , w n )} (a single trivial decomposition including all subparts). With a single decomposition, the repeated operation reduces to a single term; and here \u03b3 is the product (it will be clear subsequently, when we apply the Convolution Conjecture to these models, why we are assuming different decomposition sets for Additive and Multiplicative).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Formalizing the Convolution Conjecture", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "For every CDSM f along with its R(x) set, there exist functions K, K i and a function g such that:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition 1 (Convolution Conjecture)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "K( f (x), f (y)) = x\u2208R(x) y\u2208R(y) g(K 1 ( f (x 1 ), f (y 1 )), K 2 ( f (x 2 ), f (y 2 )), . . . , K n ( f (x n ), f (y n )))", |
|
"eq_num": "(8)" |
|
} |
|
], |
|
"section": "Definition 1 (Convolution Conjecture)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The Convolution Conjecture postulates that the similarity K( f (x), f (y)) between the tensors f (x) and f (y) is computed by combining operations on the subparts, that is,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition 1 (Convolution Conjecture)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "K i ( f (x i ), f (y i ))", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition 1 (Convolution Conjecture)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": ", using the function g. This is exactly what happens in convolution kernels (Haussler 1999) . K is usually the dot product, but this is not necessary: We will show that for the dual-space model of Turney (2012) K turns out to be the fourth root of the Frobenius tensor.", |
|
"cite_spans": [ |
|
{ |
|
"start": 76, |
|
"end": 91, |
|
"text": "(Haussler 1999)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition 1 (Convolution Conjecture)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We illustrate now how the Convolution Conjecture (CC) applies to the considered CDSMs, exemplifying with adjective-noun and subject-verb-object phrases. Without loss of generality we use tall boy and red cat for adjective-noun phrases and goats eat grass and cows drink water for subject-verb-object phrases.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparing Composed Phrases", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "Additive Model. K and K i are dot products, g is the identity function, and f is as in Equation (6). The structure of the input is a word sequence (i.e., x = (w 1 w 2 )) and the relevant decompositions consist of these single words, R(x) = {(w 1 ), (w 2 )}. Then K( f (tall boy), f (red cat)) = tall + boy, red + cats = = tall, red + tall, cat + boy, red + boy, cat = =", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparing Composed Phrases", |
|
"sec_num": "4." |
|
}, |
|
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "f (x), f (y) = x\u2208{tall,boy} y\u2208{red,cat} K( f (x), f (y))", |
|
"eq_num": "(9)" |
|
} |
|
], |
|
"section": "Comparing Composed Phrases", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "The CC form of Additive shows that the overall dot product can be decomposed into dot products of the vectors of the single words. Composition does not add any further information. These results can be easily extended to longer phrases and to phrases of different length.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparing Composed Phrases", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "Multiplicative Model. K, g are dot products, K i the component-wise product, and f is as in Equation 7. The structure of the input is x = (w 1 w 2 ), and we use the trivial single decomposition consisting of all subparts (thus summation reduces to a single term):", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparing Composed Phrases", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "K( f (tall boy), f (red cat)) = tall boy, red cat = tall red boy cat, 1 = = tall red, boy cat = g(K 1 ( tall, red), K 2 ( boy, cat))", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparing Composed Phrases", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "This is the dot product between an indistinct chain of element-wise products and a vector 1 of all ones or the product of two separate element-wise products, one on adjectives tall red, and one on nouns boy cat. In this latter CC form, the final dot product is obtained in two steps: first separately operating on the adjectives and on the nouns; then taking the dot product of the resulting vectors. The comparison operations are thus reflecting the input syntactic structure. The results can be easily extended to longer phrases and to phrases of different lengths.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparing Composed Phrases", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "Full Additive Model. The input consists of a sequence of (label,word) pairs x = ( (L 1 w 1 ) , . . . , (L n w n )) and the relevant decomposition set includes the single tuples, that is,", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 82, |
|
"end": 92, |
|
"text": "(L 1 w 1 )", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Comparing Composed Phrases", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "R(x) = {(L 1 w 1 ), . . . , (L n w n )}. The CDSM f is defined as f (x) = \u23a7 \u23aa \u23aa \u23aa \u23a8 \u23aa \u23aa \u23aa \u23a9 (L w)\u2208R(x) f (L)f (w) if x / \u2208 T X X if x \u2208 T X is a label L w if x \u2208 T X is a word w", |
|
"eq_num": "(11)" |
|
} |
|
], |
|
"section": "Comparing Composed Phrases", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "The repeated operation here is summation, and \u03b3 the matrix-by-vector product. In the CC form, K is the dot product, g the Frobenius product, K 1 ( f (x), f (y)) = f (x) T f (y), and K 2 ( f (x), f (y)) = f (x)f (y) T . We have then for adjective-noun composition (by using the property in Equation 3):", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparing Composed Phrases", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "K( f ((A tall) (N boy)), f ((A red) (N cat))) = A tall + N boy, A red + N cat = = A tall, A red + A tall, N cat + N boy, A red + N boy, N cat = = A T A, tall red T F + N T A, boy red T F + A T N, tall cat T F + N T N, boy cat T F = = (l x w x )\u2208{(A tall),(N boy)} (l y w y )\u2208{(A red),(N cat)} g(K 1 ( f (l x ), f (l y )), K 2 ( f (w x ), f (w y ))", |
|
"eq_num": "(12)" |
|
} |
|
], |
|
"section": "Comparing Composed Phrases", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "The CC form shows how Full Additive factorizes into a more structural and a more lexical part: Each element of the sum is the Frobenius product between the product of two matrices representing syntactic labels and the tensor product between two vectors representing the corresponding words. For subject-verb-object phrases ((S w 1 ) (", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparing Composed Phrases", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "V w 2 ) (O w 3 )) we have K( f (((S goats) (V eat) (O grass))), f (((S cows) (V drink) (O water)))) = = S goats + V eat + O grass, S cows + V drink + O water = = S T S, goats cows T F + S T V, goats drink T F + S T O, goats water T F + V T S, eat cows T F + V T V, eat drink T F + V T O, eat water T F + O T S, grass cows T F + O T V, grass drink T F + O T O, grass water T F = (l x w x )\u2208{(S goats),(V eat),(O grass)} (l y w y )\u2208{(S cows),(V drink),(O water)} g(K 1 ( f (l x ), f (l y )), K 2 ( f (w x ), f (w y )) (13)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparing Composed Phrases", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "Again, we observe the factoring into products of syntactic and lexical representations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparing Composed Phrases", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "By looking at Full Additive in the CC form, we observe that when X T Y \u2248 I for all matrix pairs, it degenerates to Additive. Interestingly, Full Additive can also approximate a semantic convolution kernel (Mehdad, Moschitti, and Zanzotto 2010) , which combines dot products of elements in the same slot. In the adjective-noun case, we obtain this approximation by choosing two nearly orthonormal matrices A and N such that AA T = NN T \u2248 I and AN T \u2248 0 and applying Equation 2", |
|
"cite_spans": [ |
|
{ |
|
"start": 205, |
|
"end": 243, |
|
"text": "(Mehdad, Moschitti, and Zanzotto 2010)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparing Composed Phrases", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": ": A tall + N boy, A red + N cat \u2248 tall, red + boy, cat .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparing Composed Phrases", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "This approximation is valid also for three-word phrases. When the matrices S, V, and O are such that XX T \u2248 I with X one of the three matrices and YX T \u2248 0 with X and Y two different matrices, Full Additive approximates a semantic convolution kernel comparing two sentences by summing the dot products of the words in the same role, that is,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparing Composed Phrases", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "S goats + V eat + O grass, S cows + V drink + O water \u2248 \u2248 goats, cows + eat, drink + grass, water", |
|
"eq_num": "(14)" |
|
} |
|
], |
|
"section": "Comparing Composed Phrases", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "Results can again be easily extended to longer and different-length phrases.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparing Composed Phrases", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "Lexical Function Model. We distinguish composition with one-vs. two argument predicates. We illustrate the first through adjective-noun composition, where the adjective acts as the predicate, and the second with transitive verb constructions. Although we use the relevant syntactic labels, the formulas generalize to any construction with the same argument count. For adjective-noun phrases, the input is a sequence of (label, word) pairs (x = ((A, w 1 ), (N, w 2 ))) and the relevant decomposition set again includes only the single trivial decomposition into all the subparts:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparing Composed Phrases", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "R(x) = {((A, w 1 ), (N, w 2 ))}.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparing Composed Phrases", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "The method itself is recursively defined as", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparing Composed Phrases", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "f (x) = \u23a7 \u23aa \u23a8 \u23aa \u23a9 f ((A, w 1 ))f ((N, w 2 )) if x / \u2208 T X = ((A, w 1 ), (N, w 2 )) W 1 if x \u2208 T x = (A, w 1 ) w 2 if x \u2208 T x = (N, w 2 )", |
|
"eq_num": "(15)" |
|
} |
|
], |
|
"section": "Comparing Composed Phrases", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "Here, K and g are, respectively, the dot and Frobenius product, K 1 ( f (x), f (y)) = f (x) T f (y), and K 2 ( f (x), f (y)) = f (x)f (y) T . Using Equation 3, we have then K( f (tall boy)), f (red cat)) = TALL boy, RED cat = = TALL T RED, boy cat", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparing Composed Phrases", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "T F = g(K 1 ( f (tall), f (red)), K 2 ( f (boy), f (cat)))", |
|
"eq_num": "(16)" |
|
} |
|
], |
|
"section": "Comparing Composed Phrases", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "The role of predicate and argument words in the final dot product is clearly separated, showing again the structure-sensitive nature of the decomposition of the comparison operations. In the two-place predicate case, again, the input is a set of (label, word) tuples, and the relevant decomposition set only includes the single trivial decomposition into all subparts. The CDSM f is defined as", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparing Composed Phrases", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "f (x) = \u23a7 \u23aa \u23a8 \u23aa \u23a9 f ((S w 1 )) \u2297 f ((V w 2 )) \u2297 f ((O w 3 )) if x / \u2208 T X = ((S w 1 ) (V w 2 ) (O w 3 )) w if x \u2208 T X = (l w) and l is S or O W W W if x \u2208 T X = (V w)", |
|
"eq_num": "(17)" |
|
} |
|
], |
|
"section": "Comparing Composed Phrases", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "K is the dot product and g (x, y, z) ", |
|
"cite_spans": [ |
|
{ |
|
"start": 27, |
|
"end": 36, |
|
"text": "(x, y, z)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparing Composed Phrases", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "= x, y \u2297 z F , K 1 ( f (x), f (y)) = j ( f (x) \u2297 f (y))", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparing Composed Phrases", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "jthat is, the tensor contraction 2 along the second index of the tensor product between f (x) and f (y)-and K 2 ( f (x), f (y)) = K 3 ( f (x), f (y)) = f (x) \u2297 f (y) are tensor products. The dot product of goats EAT EAT EAT grass and cows DRINK DRINK DRINK water is (by using Equation 4", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparing Composed Phrases", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": ") K( f (((S goats) (V eat) (O grass))), f (((S cows) (V drink) (O water)))) = = goats EAT EAT EAT grass, cows DRINK DRINK DRINK water = = j (EAT EAT EAT \u2297 DRINK DRINK DRINK) j , goats \u2297 grass \u2297 cows \u2297 water F = = g(K 1 ( f ((V eat)), f ((V drink))), K 2 ( f ((S goats)), f ((S cows))) \u2297 K 3 ( f ((O grass)), f ((O water))))", |
|
"eq_num": "(18)" |
|
} |
|
], |
|
"section": "Comparing Composed Phrases", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "We rewrote the equation as a Frobenius product between two fourth-order tensors. The first combines the two third-order tensors of the verbs j (EAT EAT EAT \u2297 DRINK DRINK DRINK) j and the second combines the vectors representing the arguments of the verb, that is: goats \u2297 grass \u2297 cows \u2297 water. In this case as well we can separate the role of predicate and argument types in the comparison computation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparing Composed Phrases", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "Extension of the Lexical Function to structured objects of different lengths is treated by using the identity element for missing parts. As an example, we show here the comparison between tall boy and cat where the identity element is the identity matrix I:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparing Composed Phrases", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "K( f (tall boy)), f (cat)) = TALL boy, cat = TALL boy, I cat = = TALL T I, boy cat T F = g(K 1 ( f (tall), f ( )), K 2 ( f (boy), f (cat))) (19)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparing Composed Phrases", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "Dual Space Model. We have until now applied the CC to linear CDSMs with the dot product as the final comparison operator (what we called K). The CC also holds for the effective Dual Space model of Turney (2012) , which assumes that each word has two distributional representations, w d in \"domain\" space and w f in \"function\" space. The similarity of two phrases is directly computed as the geometric average of the separate similarities between the first and second words in both spaces. Even though there is no explicit composition step, it is still possible to put the model in CC form. Take x = (x 1 , x 2 ) and its trivial decomposition. Define, for a word w with vector representations w d and w f : g(a, b) to be \u221a ab. Then", |
|
"cite_spans": [ |
|
{ |
|
"start": 197, |
|
"end": 210, |
|
"text": "Turney (2012)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 706, |
|
"end": 713, |
|
"text": "g(a, b)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparing Composed Phrases", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "f (w) = w d w f T . Define also K 1 ( f (x 1 ), f (y 1 )) = f (x 1 ), f (y 1 ) F , K 2 ( f (x 2 ), f (y 2 )) = f (x 2 ), f (y 2 ) F and", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparing Composed Phrases", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "g(K 1 ( f (x 1 ), f (y 1 )), K 2 ( f (x 2 ), f (y 2 ))) = = x d1 x f 1 T , y d1 y f 1 T F \u2022 x d2 x f 2 T , y d2 y f 2 T F = = 4 x d1 , y d1 \u2022 x f 1 , y f 1 \u2022 x d2 , y d2 \u2022 x f 2 , y f 2 = = geo(sim(x d1 , y d1 ), sim(x d2 , y d2 ), sim(x f 1 , y f 1 ), sim(x f 2 , y f 2 ))", |
|
"eq_num": "(20)" |
|
} |
|
], |
|
"section": "Comparing Composed Phrases", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "A Neural-network-like Model. Consider the phrase (w 1 , w 2 , . . . , w n ) and the model defined by f (x) = \u03c3( w 1 + w 2 + . . . + w n ), where \u03c3(\u2022) is a component-wise logistic function. Here we have a single trivial decomposition that includes all the subparts, and \u03b3(x 1 , . . . , x n ) is defined as \u03c3(x 1 + . . . + x n ). To see that for this model the CC cannot hold, consider two two-word phrases (a b) and ( ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparing Composed Phrases", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "= i \u03c3( a + b) i \u2022 \u03c3( c + d) i = i 1 + e \u2212a i \u2212b i + e \u2212c i \u2212d i + e \u2212a i \u2212b i \u2212c i \u2212d i \u22121", |
|
"eq_num": "(21)" |
|
} |
|
], |
|
"section": "Comparing Composed Phrases", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "We need to rewrite this as", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparing Composed Phrases", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "g(K 1 ( a, c), K 2 ( b, d))", |
|
"eq_num": "(22)" |
|
} |
|
], |
|
"section": "Comparing Composed Phrases", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "But there is no possible choice of g, K 1 , and K 2 that allows Equation (21) to be written as Equation (22). This example can be regarded as a simplified version of the neuralnetwork model of Socher et al. (2011) . The fact that the CC does not apply to it suggests that it will not apply to other models in this family.", |
|
"cite_spans": [ |
|
{ |
|
"start": 193, |
|
"end": 213, |
|
"text": "Socher et al. (2011)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparing Composed Phrases", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "The Convolution Conjecture offers a general way to rewrite the phrase similarity computations of CDSMs by highlighting the role played by the subparts of a composed representation. This perspective allows for a better understanding of the exact operations that a composition model applies to its input. The Convolution Conjecture also suggests a strong connection between CDSMs and semantic convolution kernels. This link suggests that insights from the CDSM literature could be directly integrated in the development of convolution kernels, with all the benefits offered by this wellunderstood general machine-learning framework.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5." |
|
}, |
|
{ |
|
"text": "Grefenstette et al. (2013) first framed the Lexical Function in terms of tensor contraction.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We thank the reviewers for helpful comments. Marco Baroni acknowledges ERC 2011 Starting Independent Research Grant n. 283554 (COMPOSES).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Vector space models of lexical meaning", |
|
"authors": [ |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Handbook of Contemporary Semantics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Clark, Stephen. 2015. Vector space models of lexical meaning. In Shalom Lappin and Chris Fox, editors, Handbook of Contemporary Semantics, 2nd ed. Blackwell, Malden, MA. In press.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Mathematical foundations for a compositional distributional model of meaning", |
|
"authors": [ |
|
{ |
|
"first": "Bob", |
|
"middle": [], |
|
"last": "Coecke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mehrnoosh", |
|
"middle": [], |
|
"last": "Sadrzadeh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Linguistic Analysis", |
|
"volume": "36", |
|
"issue": "", |
|
"pages": "345--384", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Coecke, Bob, Mehrnoosh Sadrzadeh, and Stephen Clark. 2010. Mathematical foundations for a compositional distributional model of meaning. Linguistic Analysis, 36:345-384.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Composing distributions: Mathematical structures and their linguistic interpretation. Working paper", |
|
"authors": [ |
|
{ |
|
"first": "Mohan", |
|
"middle": [], |
|
"last": "Ganesalingam", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aur\u00e9lie", |
|
"middle": [], |
|
"last": "Herbelot", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ganesalingam, Mohan and Aur\u00e9lie Herbelot. 2013. Composing distributions: Mathematical structures and their linguistic interpretation. Working paper, Computer Laboratory, University of Cambridge. Available at www.cl.cam.ac.uk/\u223cah433/.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Multi-step regression learning for compositional distributional semantics", |
|
"authors": [ |
|
{ |
|
"first": "Edward", |
|
"middle": [], |
|
"last": "Grefenstette", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Georgiana", |
|
"middle": [], |
|
"last": "Dinu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yao-Zhong", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mehrnoosh", |
|
"middle": [], |
|
"last": "Sadrzadeh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Baroni", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of IWCS", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "131--142", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Grefenstette, Edward, Georgiana Dinu, Yao-Zhong Zhang, Mehrnoosh Sadrzadeh, and Marco Baroni. 2013. Multi-step regression learning for compositional distributional semantics. Proceedings of IWCS, pages 131-142, Potsdam.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "A regression model of adjective-noun compositionality in distributional semantics", |
|
"authors": [ |
|
{ |
|
"first": "Emiliano", |
|
"middle": [], |
|
"last": "Guevara", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of GEMS", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "33--37", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Guevara, Emiliano. 2010. A regression model of adjective-noun compositionality in distributional semantics. In Proceedings of GEMS, pages 33-37, Uppsala.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Convolution kernels on discrete structures", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Haussler", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Haussler, David. 1999. Convolution kernels on discrete structures. Technical report USCS-CL-99-10, University of California at Santa Cruz.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Syntactic/semantic structures for textual entailment recognition", |
|
"authors": [], |
|
"year": null, |
|
"venue": "Proceedings of NAACL", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "20--21", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Syntactic/semantic structures for textual entailment recognition. In Proceedings of NAACL, pages 1,020-1,028, Los Angeles, CA.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Vector-based models of semantic composition", |
|
"authors": [ |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Mitchell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mirella", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "236--244", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mitchell, Jeff and Mirella Lapata. 2008. Vector-based models of semantic composition. In Proceedings of ACL, pages 236-244, Columbus, OH.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Dynamic pooling and unfolding recursive autoencoders for paraphrase detection", |
|
"authors": [ |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eric", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Pennin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of NIPS", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "801--809", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Socher, Richard, Eric Huang, Jeffrey Pennin, Andrew Ng, and Christopher Manning. 2011. Dynamic pooling and unfolding recursive autoencoders for paraphrase detection. In Proceedings of NIPS, pages 801-809, Granada.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Domain and function: A dual-space model of semantic relations and compositions", |
|
"authors": [ |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Turney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Journal of Artificial Intelligence Research", |
|
"volume": "44", |
|
"issue": "", |
|
"pages": "533--585", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Turney, Peter. 2012. Domain and function: A dual-space model of semantic relations and compositions. Journal of Artificial Intelligence Research, 44:533-585.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "From frequency to meaning: Vector space models of semantics", |
|
"authors": [ |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Turney", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Patrick", |
|
"middle": [], |
|
"last": "Pantel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Journal of Artificial Intelligence Research", |
|
"volume": "37", |
|
"issue": "", |
|
"pages": "141--188", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Turney, Peter and Patrick Pantel. 2010. From frequency to meaning: Vector space models of semantics. Journal of Artificial Intelligence Research, 37:141-188.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Estimating linear models for compositional distributional semantics", |
|
"authors": [ |
|
{

"first": "Fabio",

"middle": [

"Massimo"

],

"last": "Zanzotto",

"suffix": ""

},

{

"first": "Ioannis",

"middle": [],

"last": "Korkontzelos",

"suffix": ""

},

{

"first": "Francesca",

"middle": [],

"last": "Falucchi",

"suffix": ""

},

{

"first": "Suresh",

"middle": [],

"last": "Manandhar",

"suffix": ""

}
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of COLING", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "263--264", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zanzotto, Fabio Massimo, Ioannis Korkontzelos, Francesca Falucchi, and Suresh Manandhar. 2010. Estimating linear models for compositional distributional semantics. In Proceedings of COLING, pages 1,263-1,271, Beijing.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "c d) K( f ((a, b)), f ((c, d))) = f ((a, b)), f ((c, d))", |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null |
|
} |
|
} |
|
} |
|
} |