|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T13:29:00.303009Z" |
|
}, |
|
"title": "Representation Learning for Type-Driven Composition", |
|
"authors": [ |
|
{ |
|
"first": "Gijs", |
|
"middle": [], |
|
"last": "Wijnholds", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Utrecht Institute of Linguistics OTS Utrecht University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Mehrnoosh", |
|
"middle": [], |
|
"last": "Sadrzadeh", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University College London", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Mary University of London", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "This paper is about learning word representations using grammatical type information. We use the syntactic types of Combinatory Categorial Grammar to develop multilinear representations, i.e. maps with n arguments, for words with different functional types. The multilinear maps of words compose with each other to form sentence representations. We extend the skipgram algorithm from vectors to multilinear maps to learn these representations and instantiate it on unary and binary maps for transitive verbs. These are evaluated on verb and sentence similarity and disambiguation tasks and a subset of the SICK relatedness dataset. Our model performs better than previous typedriven models and is competitive with state of the art representation learning methods such as BERT and neural sentence encoders.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "This paper is about learning word representations using grammatical type information. We use the syntactic types of Combinatory Categorial Grammar to develop multilinear representations, i.e. maps with n arguments, for words with different functional types. The multilinear maps of words compose with each other to form sentence representations. We extend the skipgram algorithm from vectors to multilinear maps to learn these representations and instantiate it on unary and binary maps for transitive verbs. These are evaluated on verb and sentence similarity and disambiguation tasks and a subset of the SICK relatedness dataset. Our model performs better than previous typedriven models and is competitive with state of the art representation learning methods such as BERT and neural sentence encoders.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "We develop a novel technique for learning word representations by using syntactic type information of the words to learn representations for them and the constituency-based structure of the sentence to compose the representations. The word representations are multilinear maps, i.e. maps with variable number of arguments, where the number of arguments and the type of each map come from the syntactic type of each word. The word representation are composed via the application and further composition of the results of these maps, based on constituency structure.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "For instance, a noun such as children or ball is represented by a vector, i.e. \u2212\u2212\u2212\u2212\u2212\u2192 children or \u2212 \u2212 \u2192 ball, which can be thought of as 0-linear maps as they have no input or output. An adjective such as young is represented by a unilinear map young, i.e. a linear map of one argument, which at input takes an argument of type noun, e.g. children and at output returns an argument of type noun, i.e. young children. A transitive verb such as play, is represented by a bilinear map play, i.e. a linear map with two arguments, which at input takes two arguments of type noun, e.g. children and ball, and at output returns an argument of type sentence, i.e. young children play ball. An adjective-noun phrase representation is obtained by applying the representation of the adjective to the representation of its noun, i.e. by applying the unilinear map of the adjective to the vector of the noun. A sentence representation is obtained by the composition of two applications, i.e. by first applying the representation of the verb, e.g. play to the representation of the object, e.g. \u2212 \u2212 \u2192 ball, resulting in a unilinear map for the representation of the verb phrase play ball, and subsequently applying this verb phrase to the representation of the subject, e.g.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2212\u2212\u2212\u2212\u2212\u2192 children.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The representations are learnt by generalising the skipgram model with negative sampling of Mikolov et al. (2013) from vectors to higher order tensors, which are the intended multilinear maps. The types and composition operations come from the syntactic types and combinators of Combinatory Categorial Grammar (CCG) (Steedman, 2000) . CCG is a phrase structure grammar formalism based on the combinatory logic of Curry and Feys (Curry and Feys, 1958) . It assigns types defined in the notation of the combinatory logic to words of language and uses the operations of the combinatory logic to compose these types to obtain types for the phrases and sentences containing them. For instance, a word can have a CCG functional type of n arguments; this word will be represented in our setting by an n-ary map that uses the representations of its arguments, in a skipgram-style model, to predict the representations of the contexts of their composed phrases. As an example, consider a transitive verb; it has a CCG functional type of two arguments. Its representation is thus a binary map that predicts the contexts of its subject-verb-object phrases. Since the specific subject-verbobject phrases obtained may be sparse, we approximate the higher order maps with a set of lower order ones. As a result, a word with a CCG type of n arguments, gets represented by n maps of n \u2212 1 arguments; these transform the representations of a certain number of the arguments to predict the contexts of the remaining arguments. A transitive verb is now represented by two unary maps of one argument each; one of them transforms the object representation to predict its subject contexts, and the other transforms its subject representations to predict its object contexts. These lower order approximations are combined with each other to produce one single representation for the word with functional type.", |
|
"cite_spans": [ |
|
{ |
|
"start": 92, |
|
"end": 113, |
|
"text": "Mikolov et al. (2013)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 316, |
|
"end": 332, |
|
"text": "(Steedman, 2000)", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 428, |
|
"end": 450, |
|
"text": "(Curry and Feys, 1958)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our generalised skipgram algorithm is modular, i.e. the skipgram model of Mikolov et al. (2013) and its extension to adjective matrices (Maillard and Clark, 2015) are special cases of it. We instantiate our model on binary and unary maps for transitive verbs. After learning these representations, we evaluate them on verb similarity, compositional sentence similarity and disambiguation tasks, and a subset of the SICK relatedness dataset (Marelli et al., 2014) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 74, |
|
"end": 95, |
|
"text": "Mikolov et al. (2013)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 136, |
|
"end": 162, |
|
"text": "(Maillard and Clark, 2015)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 440, |
|
"end": 462, |
|
"text": "(Marelli et al., 2014)", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In the verb and sentence similarity and verb disambiguation datasets, our model outperforms all previous type-driven models, and in most cases it also outperforms InferSent and Universal Sentence encoders, as well as pre-trained ELMo and BERT embeddings. However, it does not outperform BERT embeddings fine-tuned on NLI data. In the subset of SICK, our model only outperforms all previous type-driven models. Despite that, our model is motivated by linguistic theory, is simple and quick to train, and has the potential for improvement (which we expand on in the conclusion).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Code and data to train representations and reproduce our work is available online. 1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Background There is a plethora of methods for word embeddings, with few of them distinguishing the grammatical types of the words. For adjectives, we have the regression model of Baroni and Zamparelli (2010) that approximates the holistic adjective-noun vectors to learn adjective matrices; we also have the skipgram model of Maillard and Clark (2015) that learns a transformation between fixed vectors for nouns and adjective-noun combi-nations. The model of Grefenstette and Sadrzadeh (2011) takes the sum of the outer products of the vectors of subjects and objects, and the Kronecker product of the verb vector with itself, to learn verb matrices. Later work uses multi-step regression to learn a verb cube, i.e. a multidimensional array of depth 1, by iteratively approximating a holistic subject-verb and verb-object vector . The model of Paperno et al. (2014) overcomes the sparsity issues of this technique and approximates the cubes by two matrices. The plausibility model of Polajnar et al. (2014) , learns a verb matrix/cube by optimising a model that distinguishes between observed subject-verb-object triples and randomly generated ones. Our work is different from these, since we use a skipgram-style model rather than combining the subject and object vectors or the verb vectors, as done by Grefenstette and Sadrzadeh (2011) , or performing regression, as done by Paperno et al. (2014) and Polajnar et al. (2014) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 179, |
|
"end": 207, |
|
"text": "Baroni and Zamparelli (2010)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 326, |
|
"end": 351, |
|
"text": "Maillard and Clark (2015)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 460, |
|
"end": 493, |
|
"text": "Grefenstette and Sadrzadeh (2011)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 845, |
|
"end": 866, |
|
"text": "Paperno et al. (2014)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 985, |
|
"end": 1007, |
|
"text": "Polajnar et al. (2014)", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 1306, |
|
"end": 1339, |
|
"text": "Grefenstette and Sadrzadeh (2011)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 1379, |
|
"end": 1400, |
|
"text": "Paperno et al. (2014)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 1405, |
|
"end": 1427, |
|
"text": "Polajnar et al. (2014)", |
|
"ref_id": "BIBREF34" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Sentence embeddings are either learnt by mixing word embeddings e.g. the additive models of (Mitchell and Lapata, 2010; Mikolov et al., 2013) , or as a whole, e.g. the supervised InferSent (Conneau et al., 2017) and Universal Sentence Encoder (Cer et al., 2018) , and the unsupervised ELMo (Peters et al., 2018) and BERT (Devlin et al., 2019) models. None, however, explicitly take into account grammatical information. Tree-RNNs (Socher et al., 2013) , Tree-LSTMs (Tai et al., 2015) , and Lifted Matrix Space model (Chung et al., 2018) , do use the constituency tree of a sentence as a guide, but to learn a semantic function composition rather than different types of representations for words. Our work is different from these, since we start our learning procedure by taking the grammatical types of words into account and then compose these initially learnt representations with each other based on the structure of phrases they are part of, rather then by adding or learning different composition operators, or learning the entire phrase/sentence at once.", |
|
"cite_spans": [ |
|
{ |
|
"start": 92, |
|
"end": 119, |
|
"text": "(Mitchell and Lapata, 2010;", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 120, |
|
"end": 141, |
|
"text": "Mikolov et al., 2013)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 189, |
|
"end": 211, |
|
"text": "(Conneau et al., 2017)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 243, |
|
"end": 261, |
|
"text": "(Cer et al., 2018)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 290, |
|
"end": 311, |
|
"text": "(Peters et al., 2018)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 321, |
|
"end": 342, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 430, |
|
"end": 451, |
|
"text": "(Socher et al., 2013)", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 465, |
|
"end": 483, |
|
"text": "(Tai et al., 2015)", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 516, |
|
"end": 536, |
|
"text": "(Chung et al., 2018)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "On the other hand, formal distributional models, e.g. the categorial framework of Coecke et al. (2010 Coecke et al. ( , 2013 , the linear regression approach of , and the Combinatory Categorial Grammar (CCG) tensor contraction model of Maillard et al. (2014) , directly take the grammatical types of words into account, but fail to scale up to sentences of any length and complexity, and do not perform as well as their neural embedding counterparts. To remedy these issues, our model makes use of a simple neural network to learn the typedriven word representations in such a way that their composition leads to improved results.", |
|
"cite_spans": [ |
|
{ |
|
"start": 82, |
|
"end": 101, |
|
"text": "Coecke et al. (2010", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 102, |
|
"end": 124, |
|
"text": "Coecke et al. ( , 2013", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 236, |
|
"end": 258, |
|
"text": "Maillard et al. (2014)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The skipgram model with negative sampling generates word embeddings by optimising a logistic regression objective in which target vectors have high inner product with context vectors for positive contexts, and low inner product with negative ones. Given a target word n and a set of positive contexts C, a set of negative contexts C is sampled from a unigram distribution raised to some power (here: 3/4, after Levy et al. (2015) ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 411, |
|
"end": 429, |
|
"text": "Levy et al. (2015)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multilinear Skipgram Embeddings", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Initially, both target vectors n and context vectors c are randomly intialised, and during training the model updates both target and context vectors to maximise the following objective function:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multilinear Skipgram Embeddings", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "c\u2208C log \u03c3(n \u2022 c) + c\u2208C log \u03c3(\u2212n \u2022 c) (1)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multilinear Skipgram Embeddings", |
|
"sec_num": "2" |
|
}, |
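{

"text": "To make the objective in Equation 1 concrete, here is a minimal PyTorch sketch of the skipgram negative-sampling loss for a single target word (a sketch under our own naming assumptions, not the authors' released code):\n\nimport torch\nimport torch.nn.functional as F\n\ndef sgns_loss(n_vec, pos_ctx, neg_ctx):\n    # n_vec: (d,) target vector; pos_ctx: (P, d) positive context vectors;\n    # neg_ctx: (N, d) negative context vectors, sampled from the unigram\n    # distribution raised to the power 3/4\n    pos = F.logsigmoid(pos_ctx @ n_vec).sum()\n    neg = F.logsigmoid(-(neg_ctx @ n_vec)).sum()\n    return -(pos + neg)  # minimising this maximises Equation 1",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Multilinear Skipgram Embeddings",

"sec_num": "2"

},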
|
{ |
|
"text": "We generalise the skipgram model following the typing of Combinatorial Categorial Grammar (CCG, Steedman (2000) ). CCG has a transparent interface between syntax and semantics and robust wide-coverage parsers (Clark and Curran, 2007; Hockenmaier and Steedman, 2007) . Syntactic types of CCG are either atomic, e.g. nouns/noun phrases: NP and sentences: S , or functional. Functional types are either of the form Y /X or Y \\X ; they take an argument of type X and return an argument of type Y , where for \\ the argument occurs to the left and for / it occurs to the right. Examples of functional types are adjectives: NP /NP , intransitive verbs: S \\NP and transitive verbs:", |
|
"cite_spans": [ |
|
{ |
|
"start": 96, |
|
"end": 111, |
|
"text": "Steedman (2000)", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 209, |
|
"end": 233, |
|
"text": "(Clark and Curran, 2007;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 234, |
|
"end": 265, |
|
"text": "Hockenmaier and Steedman, 2007)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multilinear Skipgram Embeddings", |
|
"sec_num": "2" |
|
}, |
|
|
{ |
|
"text": "Types are composed with each other through the combinatorial rules of CCG, which include forward and backward application and composition, type-raising, and backward-cross and forwardcross composition. An example of forward application is when an adjective composes with a noun, producing a noun. An example of backward application is when a verb phrase composes with a noun phrase producing a sentence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multilinear Skipgram Embeddings", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "NP /NP NP NP > NP S \\NP S <", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multilinear Skipgram Embeddings", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Forward and backward composition are used in composing auxiliary phrases; cross composition and the type-raising combinators are used in cases of coordination and gapping. Following the tensor semantics of CCG, developed in Maillard et al. (2014) , in our model, we represent a word W with a functional type of n arguments by a n-ary map W from the argument spaces to the result space:", |
|
"cite_spans": [ |
|
{ |
|
"start": 224, |
|
"end": 246, |
|
"text": "Maillard et al. (2014)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multilinear Skipgram Embeddings", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "W (n) : V 1 \u00d7 ... \u00d7 V n \u2192 V n+1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multilinear Skipgram Embeddings", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "where V i 's are (finite dimensional) vector spaces over the field of reals and the subscript n denotes the arity of the map W. Equivalently,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multilinear Skipgram Embeddings", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "W (n) is an (n + 1)th-order tensor W i 1 ...i n+1 in the space V 1 \u2297 ... \u2297 V n \u2297 V n+1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multilinear Skipgram Embeddings", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": ". Given a functional word W of n arguments and representations d 1 , ..., d n of its arguments, we denote by W (n) d 1 ...d n the application of the representation of W to its arguments' representations. The model that learns the maps has the following objective function:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multilinear Skipgram Embeddings", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "c\u2208C log \u03c3(W (n) d 1 ...d n \u2022 c) + c\u2208C log \u03c3(\u2212W (n) d 1 ...d n \u2022 c)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multilinear Skipgram Embeddings", |
|
"sec_num": "2" |
|
}, |
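{

"text": "As an illustration of the general objective, the following sketch applies a binary map (a 3rd-order tensor) to two argument vectors with torch.einsum and scores a context vector against the result; tensor names and shapes are our own assumptions:\n\nimport torch\nimport torch.nn.functional as F\n\nd = 100\nW2 = torch.randn(d, d, d, requires_grad=True)  # a binary map W^(2)\n\ndef apply_binary(W2, d1, d2):\n    # contract the two argument indices, leaving the output index:\n    # the result is the vector W^(2) d_1 d_2 in the result space\n    return torch.einsum('ijk,i,j->k', W2, d1, d2)\n\ndef multilinear_sgns_loss(W2, d1, d2, pos_ctx, neg_ctx):\n    out = apply_binary(W2, d1, d2)\n    pos = F.logsigmoid(pos_ctx @ out).sum()\n    neg = F.logsigmoid(-(neg_ctx @ out)).sum()\n    return -(pos + neg)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Multilinear Skipgram Embeddings",

"sec_num": "2"

},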
|
{ |
|
"text": "When W is a noun, W (0) is a 0-ary map, equivalent to W i 1 : a 1st-order tensor, i.e., a vector. In this case, the objective function reduces to the original skipgram model of Equation 1. For W an adjective, W (1) is a unary map, equivalent to W i 1 i 2 : a 2-nd order tensor, i.e., a matrix. The objective function that learns it was developed in Maillard and Clark (2015) , and is as follows:", |
|
"cite_spans": [ |
|
{ |
|
"start": 349, |
|
"end": 374, |
|
"text": "Maillard and Clark (2015)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multilinear Skipgram Embeddings", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "c\u2208C log \u03c3(W (1) d 1 \u2022 c) + c\u2208C log \u03c3(\u2212W (1) d 1 \u2022 c)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multilinear Skipgram Embeddings", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Since adjective-noun combinations are themselves nouns, taking the original skipgram linear context window will produce sensible adjective representations. This is however not the case for all words with functional types: for verbs, for example, subjects and objects may not be directly adjacent to the verb in a sentence and so one needs to commit to a full sentential context, leading to uninformative training data. Aside to that, training of a cube leads to over parameterisation. To overcome these issues simultaneously, we define lower order approximations of higher order tensors, where one argument of the functional type is left out of the composition and used as context.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multilinear Skipgram Embeddings", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "More formally, a word W with a functional type of n arguments, is approximated by n maps of n \u2212 1 arguments. Equivalently, in tensor form, we are approximating a full n+1th-order tensor, by decomposing it into n separate partial tensors of one lower order each. We denote the map equivalent of these partial tensors by W i (n\u22121) . The objective function of the model thus become as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multilinear Skipgram Embeddings", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "d i \u2208D i log \u03c3( W i (n\u22121) d 1 ...d i\u22121 d i+1 ...d n \u2022 d i ) + d i \u2208D i log \u03c3(\u2212 W i (n\u22121) d 1 ...d i\u22121 d i+1 ...d n \u2022 d i )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multilinear Skipgram Embeddings", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Here, d i is a word with representation d i : an observed argument of W , and D i is the set of all such arguments. Whereas, d i is a word with representation d i , which can in principle serve as an argument of W , but it is randomly sampled, so it is an unobserved argument of W . Similarly, D i is the set of all such arguments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multilinear Skipgram Embeddings", |
|
"sec_num": "2" |
|
}, |
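{

"text": "A small sketch of how the negative argument sets D̄_i could be sampled, following the smoothed unigram distribution mentioned above (an assumption about implementation detail, not the authors' code):\n\nimport numpy as np\n\ndef make_negative_sampler(counts, power=0.75):\n    # counts: array of corpus frequencies of candidate argument words;\n    # sample word indices proportionally to counts ** power (here 3/4)\n    probs = counts.astype(float) ** power\n    probs /= probs.sum()\n    return lambda k: np.random.choice(len(counts), size=k, p=probs)\n\n# usage sketch: sampler = make_negative_sampler(freqs); neg_ids = sampler(10)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Multilinear Skipgram Embeddings",

"sec_num": "2"

},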
|
{ |
|
"text": "We can decrease the order of the tensors even more by parameterising over subsets of contexts. For a word W of n arguments, when including 1 to i \u2264 n of its n arguments in the context, we obtain an (n \u2212 i)th-order tensor, with the", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multilinear Skipgram Embeddings", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "equivalent map W 1...i|i+1...n (n\u2212i+1)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multilinear Skipgram Embeddings", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": ". The application of this map to the remaining i", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multilinear Skipgram Embeddings", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "+ 1 to n arguments is W 1...i|i+1...n (n\u2212i+1) d i+1 ...d n .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multilinear Skipgram Embeddings", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "So we predict as context the 1...ith arguments by composing with the vectors for the i + 1...nth arguments. We write W 1...i (n\u2212i+1) when i = n, i.e. we use all arguments as context.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multilinear Skipgram Embeddings", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We instantiate our model on transitive verbs. A transitive verb V has CCG type (S \\NP )/NP . Our full model learns a binary map", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Instantiation to Verb Skipgram", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "V (2) : V 1 \u00d7V 2 \u2192 V 3 ,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Instantiation to Verb Skipgram", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "equivalent to a 3rd-order tensor, i.e. a cube, V i 1 i 2 i 3 to represent V . We denote d 1 , i.e. the object of the verb, by o, its vector by o, and d 2 , i.e. its subject, by s, its vector by s. The objective function of our full model for V is thus", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Instantiation to Verb Skipgram", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "c\u2208C log \u03c3(V (2) os\u2022c)+ c\u2208C log \u03c3(\u2212V (2) os\u2022c) (2)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Instantiation to Verb Skipgram", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "We approximate V (2) by training two unary maps, an object one V", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Instantiation to Verb Skipgram", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "(1) , which we denote by V o|s (1) , and a subject one V", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "1|2", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(1) , which we denote by V s|o (1) . The map V o|s (1) predicts the object of the verb, given a", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2|1", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Representation Order Context V sent|o,s (2) binary sentence V o|s (1) / V s|o (1) unary obj/sbj V o (0) / V s (0) / V o,s (0) 0-ary obj/sbj/both V sent|s (1) , V sent|o (1)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2|1", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "unary sentence v skip 0-ary linear window fixed subject; it is learnt as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2|1", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "o\u2208O log \u03c3( V o|s (1) s \u2022 o) + o\u2208O log \u03c3(\u2212 V o|s (1) s \u2022 o) (3)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2|1", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The map V s|o", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2|1", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(1) predicts the subject of the verb, given a fixed object, and is learnt as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2|1", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "s\u2208S log \u03c3( V s|o (1) o \u2022 s) + s\u2208S log \u03c3(\u2212 V s|o (1) o \u2022 s) (4)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2|1", |
|
"sec_num": null |
|
}, |
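{

"text": "A minimal sketch of how the two unary verb maps of Equations 3 and 4 could be trained jointly in PyTorch (our own variable names; the released code differs in its batching and initialisation):\n\nimport torch\nimport torch.nn.functional as F\n\nd = 100\nV_obj = torch.randn(d, d, requires_grad=True)   # V^{o|s}: subject in, predicts objects\nV_subj = torch.randn(d, d, requires_grad=True)  # V^{s|o}: object in, predicts subjects\n\ndef verb_matrix_loss(M, fixed_arg, pos, neg):\n    # Equations 3/4: transform the fixed argument with the matrix M and\n    # score observed (pos) against negatively sampled (neg) arguments\n    out = M @ fixed_arg\n    return -(F.logsigmoid(pos @ out).sum() + F.logsigmoid(-(neg @ out)).sum())\n\n# one (subject, object) training pair with k = 10 negative samples\ns, o = torch.randn(d), torch.randn(d)\nneg_o, neg_s = torch.randn(10, d), torch.randn(10, d)\nloss = (verb_matrix_loss(V_obj, s, o.unsqueeze(0), neg_o)\n        + verb_matrix_loss(V_subj, o, s.unsqueeze(0), neg_s))\nloss.backward()",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Instantiation to Verb Skipgram",

"sec_num": "2.1"

},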
|
{ |
|
"text": "Here, S and O are the sets of observed subjects and objects of V , and S and O are the sets of V 's unobserved subjects and objects.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2|1", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We push the approximation one level further to also produce three 0-ary maps, i.e. vectors, for the verb. We denote these by", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2|1", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "V o (0) , V s (0) , V o,s", |
|
"eq_num": "(0)" |
|
} |
|
], |
|
"section": "2|1", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "; they respectively represent a verb vector by only considering its objects, subjects, or both as context. These vectors are similar to the dependency based embeddings of Levy and Goldberg (2014) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 171, |
|
"end": 195, |
|
"text": "Levy and Goldberg (2014)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2|1", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We summarise all trained models by the arity of their maps and the choice of their contexts in Table 1 . As baselines, we additionally train unary maps", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 95, |
|
"end": 103, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "2|1", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "V sent|s (1) and V sent|o (1)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2|1", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": ", which predict a full sentence context given the subject or object of the verb, and v skip for the original skipgram vector of the verb.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2|1", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We consider two ways of combining our unary map skipgram verb representations into a single representation: the middle and late fusion methods of Bruni et al. (2014) . Middle fusion takes a weighted average of the two verb representations, using the result to compute similarity scores. Late fusion uses each representation to compute separate similarity scores and then averages the results. Given a weighted average M \u03b1 (A, B) = \u03b1A+(1\u2212\u03b1)B for \u03b1 \u2208 [0..1], and V, W two verbs, with approximated", |
|
"cite_spans": [ |
|
{ |
|
"start": 146, |
|
"end": 165, |
|
"text": "Bruni et al. (2014)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Fusion", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Formula ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Metric", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "vecsim cos(a, b) = a \u2022 b |a||b| matsim S med s\u2208S cos( V (1) s, W (1) s) matsim O med o\u2208O cos( V (1) o, W (1) o) cubesim med s,o \u2208A cos(V (2) os, W (2) os)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Metric", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "mid sim(M \u03b1 ( V, V ), M \u03b1 ( W, W )) (5) late M \u03b1 (sim( V, W), sim( V , W ))", |
|
"eq_num": "(6)" |
|
} |
|
], |
|
"section": "Metric", |
|
"sec_num": null |
|
}, |
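{

"text": "A small sketch of the two fusion strategies; for illustration we take the similarity of two verb matrices to be the cosine of their flattened forms (a simplification of the clustering-based metric of Section 2.3):\n\nimport torch\nimport torch.nn.functional as F\n\ndef mix(alpha, a, b):\n    # the weighted average M_alpha(A, B) = alpha*A + (1 - alpha)*B\n    return alpha * a + (1 - alpha) * b\n\ndef sim(a, b):\n    return F.cosine_similarity(a.flatten(), b.flatten(), dim=0)\n\ndef middle_fusion(alpha, V_os, V_so, W_os, W_so):  # Equation 5\n    return sim(mix(alpha, V_os, V_so), mix(alpha, W_os, W_so))\n\ndef late_fusion(alpha, V_os, V_so, W_os, W_so):    # Equation 6\n    return mix(alpha, sim(V_os, W_os), sim(V_so, W_so))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Fusion",

"sec_num": "2.2"

},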
|
{ |
|
"text": "The same fusion methods are used in the compositional tasks, where either verb matrices are averaged before composition, or cosine scores are averaged after.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Metric", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In their adjective skipgram model, Maillard and Clark (2015) argued that cosine similarity, while suitable for vectors, does not capture any information about the function of matrices as unary maps and that instead one should measure how similarly the maps transform their arguments. The same holds for generalisations of unary maps to nary ones, equivalently, for matrices to higher order tensors. Following Maillard and Clark, we apply clustering to achieve this. The degree of similarity between two words W and W , each with a functional type of n arguments, is obtained by taking the median of the degrees of similarities of the applications of their maps W (n) and W (n) on the clusters of their arguments. Since going through all the instantiations of the arguments is expensive, we cluster the most frequent argument vectors and work with the similarity between the two transformations applied to the centroids of each cluster. The resulting similarity function is defined as follows, for D the set of tuples of cluster centroids:", |
|
"cite_spans": [ |
|
{ |
|
"start": 35, |
|
"end": 60, |
|
"text": "Maillard and Clark (2015)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Clustering", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "tensorsim : med d 1 ,...,dn \u2208D cos(W (n) d 1 ...d n , W (n) d 1 ...d n )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Clustering", |
|
"sec_num": "2.3" |
|
}, |
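{

"text": "A sketch of this clustering-based similarity for the unary (matrix) case, assuming scikit-learn's KMeans for clustering the most frequent argument vectors:\n\nimport numpy as np\nfrom sklearn.cluster import KMeans\n\ndef matsim(V, W, arg_vectors, n_clusters=10):\n    # cluster frequent argument vectors, apply both verb matrices to the\n    # cluster centroids, and take the median cosine of the results\n    km = KMeans(n_clusters=n_clusters, n_init=10).fit(arg_vectors)\n    sims = []\n    for c in km.cluster_centers_:\n        a, b = V @ c, W @ c\n        sims.append(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))\n    return np.median(sims)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Clustering",

"sec_num": "2.3"

},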
|
{ |
|
"text": "Model type Formula T represents a n-ary map composition model for transitive sentences, T s is subject-directed composition, T o is object-directed composition. When \u03b1 = 0 or \u03b1 = 1, the models reduce to the case of using one of the two verb matrix embeddings.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Clustering", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Middle T (s, M \u03b1 ( V (1) , V (1) ), o) Late M \u03b1 (T (s, V (1) , o), T (s, V (1) , o)) Two M \u03b1 (T s (s, V (1) , o), T o (s, V (1) , o)) Cube V (2) os", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Clustering", |
|
"sec_num": "2.3" |
|
}, |
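{

"text": "For concreteness, a sketch of the composition models of Table 3, where we instantiate the abstract composition model T as pointwise multiplication of the subject with the matrix-transformed object (one possible choice, assumed only for illustration; the paper leaves T abstract):\n\nimport torch\n\ndef T(s, V, o):\n    # example instantiation: transform the object with the verb matrix,\n    # then merge with the subject pointwise (assumption, see lead-in)\n    return s * (V @ o)\n\ndef mix(alpha, a, b):\n    return alpha * a + (1 - alpha) * b\n\ndef middle(alpha, s, V_os, V_so, o):\n    return T(s, mix(alpha, V_os, V_so), o)\n\ndef late(alpha, s, V_os, V_so, o):\n    return mix(alpha, T(s, V_os, o), T(s, V_so, o))\n\ndef cube(V2, s, o):\n    # the full binary map: contract the cube with object and subject\n    return torch.einsum('ijk,i,j->k', V2, o, s)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Clustering",

"sec_num": "2.3"

},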
|
{ |
|
"text": "For the case of transitive verbs, we are dealing with binary map transformations and the above definition simplifies to considering the most frequent subjects and objects of the verb, clustering them separately, then applying the map to the centroid vectors and taking the median. The details of the different map transformation similarities that we obtain for transitive verbs using our model are given in Table 2. 3 Implementation and Evaluation", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 407, |
|
"end": 415, |
|
"text": "Table 2.", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Clustering", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "We implemented all models in Python, using the tensorflow package (Abadi et al., 2016) 2 . Vectors were 100-dimensional; unary and binary maps, i.e. matrices and cubes, were shaped accordingly. The functional type-driven information was extracted from a dependency parsed corpus 3 containing ca.130M sentences and ca. 3.2B words, on which the initial regular noun vectors were also trained.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Implementation", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "In the case of matrices and cubes with full sentential contexts, a pair of networks was trained separately for each verb, sharing the context matrix from the noun skipgram model. For the matrices with subject (resp. object) contexts, we trained a pair of networks (a subject network and an object network), each with a single embedding layer encoding all the verbs. In these networks, the context matrix consists of all possible object (resp. subject) context vectors. Here we considered both a fixed context matrix (from the noun skipgram model) and a trainable context matrix and found that the trainable context matrix gave the best results 4 , so we work with the latter. Negative samples were drawn from the distribution over objects/subjects of all verbs in the case of the partial tensor models. We considered k = 10 negative samples per subject/object.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Implementation", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We evaluate our verb representations on four types of tasks: verb similarity, verb disambiguation, sentence similarity, including SVO sentences and SVO sentences with elliptical phrases, and a subset of the SICK sentence relatedness task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation and Datasets", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "We considered five verb similarity datasets of varying size: pairs of words from the MEN (Bruni et al., 2012) and SimLex-999 (Hill et al., 2015) datasets that were labelled as verbs, obtaining 22 and 222 verb similarity pairs, respectively. Next to these partial datasets, we considered VerbSim (Yang and Powers, 2006) , a dataset of 130 verb pairs, and the more recent SimVerb-3500 dataset (Gerz et al., 2016) , containing 3500 verb pairs.", |
|
"cite_spans": [ |
|
{ |
|
"start": 89, |
|
"end": 109, |
|
"text": "(Bruni et al., 2012)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 125, |
|
"end": 144, |
|
"text": "(Hill et al., 2015)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 295, |
|
"end": 318, |
|
"text": "(Yang and Powers, 2006)", |
|
"ref_id": "BIBREF40" |
|
}, |
|
{ |
|
"start": 391, |
|
"end": 410, |
|
"text": "(Gerz et al., 2016)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Verb Similarity", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "We considered seven tasks. (1,2) The two datasets introduced by Lapata (2008, 2010) , dubbed ML08 and ML10. These datasets contain pairs of intransitive sentences; the 2008 dataset aims to disambiguate the verb of each sentence, the 2010 dataset is for computing sentence similarity.", |
|
"cite_spans": [ |
|
{ |
|
"start": 64, |
|
"end": 83, |
|
"text": "Lapata (2008, 2010)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Verb Disambiguation and Sentence Similarity", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "(3,4) The transitive verb disambiguation datasets of Grefenstette and Sadrzadeh (2011) (GS11) and KS13a, and (5) the transitive sentence similarity dataset of Kartsaklis et al. (2013) (KS13b) . 6,7We additionally test on two recent datasets (Wijnholds and Sadrzadeh, 2019) (ELLDIS and ELLSIM), which extend the KS13a and KS13b datasets to sentences with verb phrase ellipsis in them.", |
|
"cite_spans": [ |
|
{ |
|
"start": 159, |
|
"end": 191, |
|
"text": "Kartsaklis et al. (2013) (KS13b)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Verb Disambiguation and Sentence Similarity", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "The datasets ML08 and ML10, respectively, contain pairs of subject-verb, and verb-object phrases. Next to the additive baseline, we apply the unary map representations of verbs to the subject (or Table 4 : Two-map models. We compose partial sentence embeddings using the subject-and objectdirected verb matrix, and merge the two embeddings into one. M \u03b1 is the mixing operator defined before.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 196, |
|
"end": 203, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Verb Disambiguation and Sentence Similarity", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "Model Formula CA M \u03b1 s T V (1) o, V (1) o s CAS M \u03b1 s T V (1) + o, V (1) o + s CATA M \u03b1 s T V (1) , V (1) o", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Verb Disambiguation and Sentence Similarity", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "object) to get sentence representations: V (1) s for subject-verb phrases, V (1) o for verb-object phrases.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Verb Disambiguation and Sentence Similarity", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "For the separate subject-verb and verb-object maps, we apply middle and late fusion. To model a transitive sentence of the form subj verb obj, we compare verb-only and additive baselines with n-ary map models as described in Table 3 . In the Two model in this table, we first apply V o (1) to the subject vector, then mix it with the application of V s", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 225, |
|
"end": 232, |
|
"text": "Table 3", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Verb Disambiguation and Sentence Similarity", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "(1) to the object vector. We then mix in the object/subject vectors and obtain three different models: CA for Copy Argument, CAS for Copy Argument Sum and CATA for Categorical Argument; see Table 4 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 190, |
|
"end": 197, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Verb Disambiguation and Sentence Similarity", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "The ELLDIS and ELLSIM datasets of Wijnholds and Sadrzadeh (2019) contain sentences of the form subj verb obj and subj * does too. We first resolve the ellipsis by replacing the marker does too with its antecedent verb object, then apply a transitive model to the resulting subj verb obj and subj * verb object conjunct and finally combine the representations by addition; formally", |
|
"cite_spans": [ |
|
{ |
|
"start": 34, |
|
"end": 64, |
|
"text": "Wijnholds and Sadrzadeh (2019)", |
|
"ref_id": "BIBREF39" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Verb Disambiguation and Sentence Similarity", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "E(s, V (1) , o, s * ) = T (s, V (1) , o) + T (s * , V (1) , o)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Verb Disambiguation and Sentence Similarity", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "where s * is the representation of subj * .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Verb Disambiguation and Sentence Similarity", |
|
"sec_num": "3.4" |
|
}, |
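{

"text": "A short sketch of the ellipsis composition, reusing a transitive composition model T as above (hypothetical helper names):\n\ndef ellipsis(s, V, o, s_star, T):\n    # resolve 'subj* does too' to its antecedent 'verb object', compose both\n    # conjuncts with the transitive model T, and add the two representations\n    return T(s, V, o) + T(s_star, V, o)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Verb Disambiguation and Sentence Similarity",

"sec_num": "3.4"

},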
|
{ |
|
"text": "The SICK relatedness task of Marelli et al. (2014) contains sentence pairs that are scored between 1 and 5 on semantic relatedness to evaluate compositional distributional models for relatedness. To evaluate our verb representations, we extract the verbs with their arguments (subjects and/or objects) from dependency parsed sentences, use one of the previously described composition models to generate a single verb representation for the verb-argument tuple, and compose this with the vectors for the remaining words in the (Chersoni et al. (2016) , VS) and (Gerz et al. (2016) ,", |
|
"cite_spans": [ |
|
{ |
|
"start": 29, |
|
"end": 50, |
|
"text": "Marelli et al. (2014)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 526, |
|
"end": 549, |
|
"text": "(Chersoni et al. (2016)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 560, |
|
"end": 579, |
|
"text": "(Gerz et al. (2016)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SICK-R", |
|
"sec_num": "3.5" |
|
}, |
|
{ |
|
"text": "MEN v SL v VS SV d SV t v", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SICK-R", |
|
"sec_num": "3.5" |
|
}, |
|
{ |
|
"text": "SL v , SV d , SV t ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SICK-R", |
|
"sec_num": "3.5" |
|
}, |
|
{ |
|
"text": "For MEN, we did not find any results on the verb subset.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SICK-R", |
|
"sec_num": "3.5" |
|
}, |
|
{ |
|
"text": "sentence. We used the Spacy 5 parser combined with a postprocessing script to correct cases of coordination of verbs and arguments, as we expected this to be vital information in the dataset. To keep this process manageable, we used the SemEval subset of the SICK dataset. We evaluate our best performing verb unary map", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SICK-R", |
|
"sec_num": "3.5" |
|
}, |
|
{ |
|
"text": "representations ( V o|s/s|o (1)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SICK-R", |
|
"sec_num": "3.5" |
|
}, |
|
{ |
|
"text": "), as well as the two analytical verb representations V Kron and V Rel .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SICK-R", |
|
"sec_num": "3.5" |
|
}, |
|
{ |
|
"text": "At the verb level, we compare our skipgram verb representations (Table 1) with two verb representation methods from the type-driven literature (Grefenstette and Sadrzadeh, 2011) . The first representation, referred to by Kronecker, lifts a verb vector to a matrix representation using outer product. The second representation is the Relational model, where a verb matrix is taken to be the sum of the outer products of its subject and object vectors; formally:", |
|
"cite_spans": [ |
|
{ |
|
"start": 143, |
|
"end": 177, |
|
"text": "(Grefenstette and Sadrzadeh, 2011)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 64, |
|
"end": 73, |
|
"text": "(Table 1)", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Comparison with Other Methods", |
|
"sec_num": "3.6" |
|
}, |
|
{ |
|
"text": "V Kron = v a \u2297 v a V Rel = i s i \u2297 o i", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparison with Other Methods", |
|
"sec_num": "3.6" |
|
}, |
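{

"text": "Both analytical baselines can be computed directly from vectors; a NumPy sketch (variable names are ours):\n\nimport numpy as np\n\ndef kronecker_verb(v):\n    # V_Kron: lift the verb vector to a matrix via the outer product\n    return np.outer(v, v)\n\ndef relational_verb(subjects, objects):\n    # V_Rel: sum of outer products of the verb's observed subject/object pairs\n    return sum(np.outer(s, o) for s, o in zip(subjects, objects))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Comparison with Other Methods",

"sec_num": "3.6"

},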
|
{ |
|
"text": "At the sentence level, we compare our model with that of Mitchell and Lapata (2010) , which given a sentence adds the vectors of the words therein, and also with supervised sentence encoders, InferSent (Conneau et al., 2017) , as well Table 6 : Spearman \u03c1 correlation of verbs of SVO sentence level tasks. Each score is a maximum score out of possible clusters and fusion weights. State of the art scores are taken from (Mitchell and Lapata (2008) ,ML08), (Milajevs et al. (2014) , GS11, KS13b) and ,ML10,KS13a).", |
|
"cite_spans": [ |
|
{ |
|
"start": 57, |
|
"end": 83, |
|
"text": "Mitchell and Lapata (2010)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 202, |
|
"end": 224, |
|
"text": "(Conneau et al., 2017)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 420, |
|
"end": 447, |
|
"text": "(Mitchell and Lapata (2008)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 456, |
|
"end": 479, |
|
"text": "(Milajevs et al. (2014)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 482, |
|
"end": 487, |
|
"text": "GS11,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 488, |
|
"end": 494, |
|
"text": "KS13b)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 235, |
|
"end": 242, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Comparison with Other Methods", |
|
"sec_num": "3.6" |
|
}, |
|
{ |
|
"text": "as, Universal Sentence Encoder (Cer et al., 2018) . For these latter, we take off-the-shelf encoders to map the sentence pairs in our evaluation datasets to a pair of embeddings, and compute the cosine similarity between these. We moreover compare to state-of-the-art contextualised encoders ELMo (Peters et al., 2018) and BERT (Devlin et al., 2019) . For ELMo, we use a pre-trained model and apply mean pooling 6 . For BERT, we take the implementation of Reimers and Gurevych (2019) 7 , as it implements both the original pre-trained BERT models and fine-tuned sentence embedding models. To this, we apply max, mean, and CLS token pooling, and report the best scores out of all models and pooling types, for the pre-trained models and the fine-tuned models.", |
|
"cite_spans": [ |
|
{ |
|
"start": 31, |
|
"end": 49, |
|
"text": "(Cer et al., 2018)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 297, |
|
"end": 318, |
|
"text": "(Peters et al., 2018)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 328, |
|
"end": 349, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparison with Other Methods", |
|
"sec_num": "3.6" |
|
}, |
|
{ |
|
"text": "The correlation results on verb similarity tasks are displayed in Table 5 . Here, for the case of verb vectors, the general skipgram model is outperformed by the vectors trained using our partial model on the verb arguments as context, and in fact these show the highest performance on the VerbSim dataset. That the unary and binary maps representations with the full sentence as context perform rather poorly, and in many cases worse than the vector representations, illustrates that the choice of context is too general for these higher-order representations.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 66, |
|
"end": 73, |
|
"text": "Table 5", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Verb Level Tasks", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "On four out of the five tasks, our approximated models that train unary maps with a restricted notion of context, outperform all other models: the most significant of these increases are for the 3000 entry test subset of the SimVerb dataset: here we observe an increase from 0.18 to 0.24. Table 6 shows the correlation scores on the verbs of the SVO sentence level tasks. In this experiment, we perform the sentence disambiguation and similarity tasks by only using the verbs of the sentences. We observe the same pattern in the results: training verb vectors on dependency label contexts slightly improves the performance. This is against the erratic performance of the binary map representations (on all but the ML2008 dataset). Again, our approximated unary map representations with a restricted context significantly outperforms the other methods.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 289, |
|
"end": 296, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Verb Level Tasks", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "In the majority of the verb similarity datasets we do not improve the state of the art, but in the majority of the verb parts of the SVO sentence datasets, we do.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Verb Level Tasks", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The most interesting results, however, come from the SVO sentence tasks. These compute a representation for each sentence of the dataset by composing the representations of the words of that sentence, rather than by only working with individual word representations, as was done in the previous two tasks. Table 7 contrasts the additive models (top row), type-driven methods that use the Kronecker (second row) and Relational (third row) verb representations, against the type-driven model that uses skipgram representations (resp. full context binary maps, full context unary maps, restricted context unary maps). While the skipgram binary map verb representations with full sentences as context perform slightly better in a sentence context, they generally underperform the additive baseline and the non-skipgram tensors. We argue that this is mainly due to the choice of context: the full sentence doesn't tell us enough about the subjects and objects of the verb, whereas the Relational model directly encodes this information. Similarly to the verb similarity results, the binary map representations show a very poor performance, which we argue is due to data sparsity. Even though the binary map implicitly model properties of arguments of the verbs, their representation is too sparse to effectively model anything. Our proposed unary map model remedies both the sparsity problem and the choice of context, and outperforms all the other representations, save on the ML2008 dataset. This model also improves the state of the art in all the datasets.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 306, |
|
"end": 313, |
|
"text": "Table 7", |
|
"ref_id": "TABREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Verb Disambiguation and SVO Sentence Similarity Datasets", |
|
"sec_num": "4.2.1" |
|
}, |
|
{ |
|
"text": "ML08 ML10 GS11 KS13a KS13b V", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Verb Disambiguation and SVO Sentence Similarity Datasets", |
|
"sec_num": "4.2.1" |
|
}, |
|
{ |
|
"text": "The results in Tables 9 show that our proposed verb unary map representations achieve competitive results compared to the additive baseline, and pre-trained BERT embeddings, on the ELLDIS and ELLSIM tasks and on (a subset of) the SICK relatedness task. What is more, they clearly outperform the analytic tensors and in ellipsis datasets; they also improve the state of the art of ELLDIS, which", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Elliptical Phrase and SICK Datasets", |
|
"sec_num": "4.2.2" |
|
}, |
|
{ |
|
"text": "Add Kron Rel V o|s/s|o (1)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Elliptical Phrase and SICK Datasets", |
|
"sec_num": "4.2.2" |
|
}, |
|
{ |
|
"text": "IS USE BERTp BERT f 0.31 0.30 0.37 0.56 0.34 0.27 0.36 0.65 0.67 0.52 0.65 0.76 0.80 0.68 0.67 0.79 0.71 0.58 0.44 0.70 0.74 0.76 0.70 0.76 Table 9 : Spearman \u03c1 scores on the ELLDIS (top), ELL-SIM (middle), and SICK relatedness (bottom) tasks. was 0.53, and provide equal results to the state of the art of ELLSIM, which was 0.76 , both reported in Wijnholds and Sadrzadeh (2019) . However, they are surpassed by fine-tuned BERT sentence embeddings and sentence encoders, that achieve the highest. For SICK, to verify that the high performance of our verb maps is not caused simply by adding in the vectors for the remaining word of a sentence, we did an ablation in which the rest of the sentence was not considered. Using addition of vectors, this gave a \u03c1 of 0.61, and for the compositional verb matrices this gave 0.62 (cf. 0.71 and 0.70 in Table 9 ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 349, |
|
"end": 379, |
|
"text": "Wijnholds and Sadrzadeh (2019)", |
|
"ref_id": "BIBREF39" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 140, |
|
"end": 147, |
|
"text": "Table 9", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 845, |
|
"end": 852, |
|
"text": "Table 9", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Elliptical Phrase and SICK Datasets", |
|
"sec_num": "4.2.2" |
|
}, |
|
{ |
|
"text": "We compare our model with the InferSent encoder and the Universal Sentence Encoder, and with ELMo and BERT encodings in Table 8 . Although our embeddings outperform Universal Sentence Encoder on all tasks, on the ML2010 and KS2014 dataset InferSent performs higher, possibly due to its high embedding dimensionality (4096). For the BERT embeddings we observe an interesting pattern: our proposed method outperforms any pre-trained BERT model, but after fine-tuning on NLI datasets, the BERT models score the highest on all datasets but KS2013. Although fully analysing the syntactic awareness of BERT is beyond the scope of this paper, it seems that both explicitly modelling syntax in the embeddings as our method does, and fine-tuning BERT embeddings are viable strategies.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 120, |
|
"end": 127, |
|
"text": "Table 8", |
|
"ref_id": "TABREF9" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Comparison with Sentence Embeddings", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "We generalised the skipgram model (Mikolov et al., 2013) to learn multilinear map representations for words with functional types using the setting of Combinatory Categorial Grammar. Our model reduces to the original skipgram for atomic types such as nouns, and to the adjective skipgram model of Maillard and Clark (2015) , for functional types of one argument. To overcome potential sparsity issues we approximated higher arity maps with a set of lower arity ones and showed that such approximations provide better results.", |
|
"cite_spans": [ |
|
{ |
|
"start": 34, |
|
"end": 56, |
|
"text": "(Mikolov et al., 2013)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 297, |
|
"end": 322, |
|
"text": "Maillard and Clark (2015)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
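{

"text": "To make the construction concrete, the following is a minimal PyTorch sketch of the unary (matrix) instance of the multilinear skipgram objective. It is a sketch under stated assumptions, not the API of the released tensorskipgram-torch code: the names MatrixSkipgram, arg_vec, pos_ctx and neg_ctx are illustrative. Each verb is a d x d matrix applied to a fixed, pre-trained argument vector; the resulting vector is scored against context vectors with negative sampling, and with a 0-ary (vector) map in place of the matrix the score reduces to the standard skipgram dot product.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MatrixSkipgram(nn.Module):
    # Unary-map skipgram sketch: one d x d matrix per verb, trained
    # against pre-trained argument vectors and learned context vectors.
    def __init__(self, n_verbs, n_contexts, dim):
        super().__init__()
        self.verb = nn.Embedding(n_verbs, dim * dim)  # flattened verb matrices
        self.context = nn.Embedding(n_contexts, dim)  # output context vectors
        self.dim = dim

    def forward(self, verb_idx, arg_vec, pos_ctx, neg_ctx):
        # arg_vec: (batch, dim) pre-trained vector of the verb's argument;
        # pos_ctx: (batch,) true context ids; neg_ctx: (batch, k) sampled ids.
        V = self.verb(verb_idx).view(-1, self.dim, self.dim)
        target = torch.bmm(V, arg_vec.unsqueeze(-1)).squeeze(-1)  # (batch, dim)
        pos = (target * self.context(pos_ctx)).sum(dim=-1)
        neg = torch.bmm(self.context(neg_ctx), target.unsqueeze(-1)).squeeze(-1)
        # negative-sampling loss: score true contexts up, sampled ones down
        return -(F.logsigmoid(pos) + F.logsigmoid(-neg).sum(dim=-1)).mean()",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Conclusion",

"sec_num": "5"

},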
|
{ |
|
"text": "The model was implemented on transitive verbs, learning binary and a set of approximated unary representations. These were evaluated on verb similarity and disambiguation and sentence similarity tasks. The unary map approximations significantly outperformed previous type-driven verb representations. They also outperformed sentence encoders and pre-trained BERT embeddings. When moving to datasets of longer sentences, e.g. sentences with elliptical phrases and the SICK relatedness, some sentence encoders and fine-tuned BERT representations were superior. Our multilinear skipgram model paves the way for a new generation of type-driven representations, in line with recent research highlighting benefits of syntactic biases injected into representation learning (Kuncoro et al., 2020) . Furthermore, our model is fast to train, guided by a linguistic calculus (CCG), and produces syntax-aware sentence embeddings. Performance could potentially be improved by adding non-linearities to the model, as in Socher et al. (2013) and by modelling complex syntactic phenomena such as auxiliaries and negation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 766, |
|
"end": 788, |
|
"text": "(Kuncoro et al., 2020)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 1006, |
|
"end": 1026, |
|
"text": "Socher et al. (2013)", |
|
"ref_id": "BIBREF36" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
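{

"text": "As a usage sketch, the learned maps compose with ordinary noun vectors to give syntax-aware sentence embeddings. The snippet below shows one plausible instantiation for a transitive sentence under the unary approximation, assuming the subject-side and object-side matrices are combined by elementwise multiplication and the vectors of the remaining words are added in; the function name and the combination operator are illustrative assumptions, not a definitive account of the composition used in the experiments.

import torch

def transitive_sentence(subj, verb_s, verb_o, obj, rest=None):
    # Apply the subject-side map to the subject vector and the
    # object-side map to the object vector, then combine elementwise.
    sent = (verb_s @ subj) * (verb_o @ obj)
    if rest is not None:
        sent = sent + rest.sum(dim=0)  # add vectors of the remaining words
    return sent

d = 100
subj, obj = torch.randn(d), torch.randn(d)             # pre-trained noun vectors
verb_s, verb_o = torch.randn(d, d), torch.randn(d, d)  # learned unary verb maps
embedding = transitive_sentence(subj, verb_s, verb_o, obj)  # (d,) sentence vector",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Conclusion",

"sec_num": "5"

},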
|
{ |
|
"text": "github.com/gijswijnholds/ tensorskipgram-torch", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Our code was later changed to Pytorch. 3 UKWaCkypedia, wacky.sslmit.unibo.it", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We argue that this is because contexts in the noun skipgram model are more general as they serve as contexts to many different target words.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://tfhub.dev/google/elmo/2 7 https://github.com/UKPLab/ sentence-transformers", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "The authors gratefully acknowledge a multitude of reviewers for their careful efforts in reviewing this work. Wijnholds is grateful for receiving PGR student funding from the School of Electronic Engineering and Computer Science at Queen Mary University of London, and is currently supported by the Dutch Research Council (NWO) under the scope of the project \"A composition calculus for vectorbased semantic modelling with a localization for Dutch\" (360-89-070). Sadrzadeh acknowledges the Royal Academy of Engineering Industrial Fellowship IF192058.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Tensorflow: A system for large-scale machine learning", |
|
"authors": [ |
|
{ |
|
"first": "Mart\u00edn", |
|
"middle": [], |
|
"last": "Abadi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Barham", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianmin", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhifeng", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andy", |
|
"middle": [], |
|
"last": "Davis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthieu", |
|
"middle": [], |
|
"last": "Devin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sanjay", |
|
"middle": [], |
|
"last": "Ghemawat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Geoffrey", |
|
"middle": [], |
|
"last": "Irving", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Isard", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "265--283", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mart\u00edn Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. 2016. Tensorflow: A system for large-scale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), pages 265-283.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Frege in space: A program of compositional distributional semantics. LiLT (Linguistic Issues in Language Technology", |
|
"authors": [ |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Baroni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raffaela", |
|
"middle": [], |
|
"last": "Bernardi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roberto", |
|
"middle": [], |
|
"last": "Zamparelli", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marco Baroni, Raffaela Bernardi, and Roberto Zampar- elli. 2014. Frege in space: A program of composi- tional distributional semantics. LiLT (Linguistic Is- sues in Language Technology), 9.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Nouns are vectors, adjectives are matrices: Representing adjective-noun constructions in semantic space", |
|
"authors": [ |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Baroni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roberto", |
|
"middle": [], |
|
"last": "Zamparelli", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1183--1193", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marco Baroni and Roberto Zamparelli. 2010. Nouns are vectors, adjectives are matrices: Representing adjective-noun constructions in semantic space. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 1183-1193. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Distributional semantics in technicolor", |
|
"authors": [ |
|
{ |
|
"first": "Elia", |
|
"middle": [], |
|
"last": "Bruni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gemma", |
|
"middle": [], |
|
"last": "Boleda", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Baroni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nam-Khanh", |
|
"middle": [], |
|
"last": "Tran", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "136--145", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Elia Bruni, Gemma Boleda, Marco Baroni, and Nam- Khanh Tran. 2012. Distributional semantics in tech- nicolor. In Proceedings of the 50th Annual Meet- ing of the Association for Computational Linguistics: Long Papers-Volume 1, pages 136-145. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Multimodal distributional semantics", |
|
"authors": [ |
|
{ |
|
"first": "Elia", |
|
"middle": [], |
|
"last": "Bruni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nam-Khanh", |
|
"middle": [], |
|
"last": "Tran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Baroni", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Journal of Artificial Intelligence Research", |
|
"volume": "49", |
|
"issue": "", |
|
"pages": "1--47", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1613/jair.4135" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Elia Bruni, Nam-Khanh Tran, and Marco Baroni. 2014. Multimodal distributional semantics. Journal of Ar- tificial Intelligence Research, 49:1-47.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Universal sentence encoder", |
|
"authors": [ |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Cer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yinfei", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sheng-Yi", |
|
"middle": [], |
|
"last": "Kong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nan", |
|
"middle": [], |
|
"last": "Hua", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicole", |
|
"middle": [], |
|
"last": "Limtiaco", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rhomni", |
|
"middle": [], |
|
"last": "St John", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [], |
|
"last": "Constant", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mario", |
|
"middle": [], |
|
"last": "Guajardo-Cespedes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Steve", |
|
"middle": [], |
|
"last": "Yuan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Tar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1803.11175" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, et al. 2018. Universal sentence encoder. arXiv preprint arXiv:1803.11175.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Representing verbs with rich contexts: an evaluation on verb similarity", |
|
"authors": [ |
|
{ |
|
"first": "Emmanuele", |
|
"middle": [], |
|
"last": "Chersoni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Enrico", |
|
"middle": [], |
|
"last": "Santus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alessandro", |
|
"middle": [], |
|
"last": "Lenci", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philippe", |
|
"middle": [], |
|
"last": "Blache", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chu-Ren", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1967--1972", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D16-1205" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Emmanuele Chersoni, Enrico Santus, Alessandro Lenci, Philippe Blache, and Chu-Ren Huang. 2016. Representing verbs with rich contexts: an evaluation on verb similarity. In Proceedings of the 2016 Con- ference on Empirical Methods in Natural Language Processing, pages 1967-1972, Austin, Texas. Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "The lifted matrix-space model for semantic composition", |
|
"authors": [ |
|
{ |
|
"first": "Woojin", |
|
"middle": [], |
|
"last": "Chung", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sheng-Fu", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Samuel", |
|
"middle": [], |
|
"last": "Bowman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 22nd Conference on Computational Natural Language Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "508--518", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/K18-1049" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "WooJin Chung, Sheng-Fu Wang, and Samuel Bowman. 2018. The lifted matrix-space model for semantic composition. In Proceedings of the 22nd Confer- ence on Computational Natural Language Learning, pages 508-518, Brussels, Belgium. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Widecoverage efficient statistical parsing with CCG and log-linear models", |
|
"authors": [ |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Curran", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Computational Linguistics", |
|
"volume": "33", |
|
"issue": "4", |
|
"pages": "493--552", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stephen Clark and James R. Curran. 2007. Wide- coverage efficient statistical parsing with CCG and log-linear models. Computational Linguistics, 33(4):493-552.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Lambek vs. Lambek: Functorial vector space semantics and string diagrams for Lambek calculus", |
|
"authors": [ |
|
{ |
|
"first": "Bob", |
|
"middle": [], |
|
"last": "Coecke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edward", |
|
"middle": [], |
|
"last": "Grefenstette", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mehrnoosh", |
|
"middle": [], |
|
"last": "Sadrzadeh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Special issue on Seventh Workshop on Games for Logic and Programming Languages", |
|
"volume": "164", |
|
"issue": "11", |
|
"pages": "1079--1100", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1016/j.apal.2013.05.009" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bob Coecke, Edward Grefenstette, and Mehrnoosh Sadrzadeh. 2013. Lambek vs. Lambek: Functorial vector space semantics and string diagrams for Lam- bek calculus. Annals of Pure and Applied Logic, 164(11):1079 -1100. Special issue on Seventh Workshop on Games for Logic and Programming Languages (GaLoP VII).", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Mathematical foundations for a compositional distributional model of meaning", |
|
"authors": [ |
|
{ |
|
"first": "Bob", |
|
"middle": [], |
|
"last": "Coecke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mehrnoosh", |
|
"middle": [], |
|
"last": "Sadrzadeh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Linguistic Analysis", |
|
"volume": "36", |
|
"issue": "1", |
|
"pages": "345--384", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bob Coecke, Mehrnoosh Sadrzadeh, and Stephen Clark. 2010. Mathematical foundations for a compo- sitional distributional model of meaning. Linguistic Analysis, 36(1):345-384.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Supervised learning of universal sentence representations from natural language inference data", |
|
"authors": [ |
|
{ |
|
"first": "Alexis", |
|
"middle": [], |
|
"last": "Conneau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Douwe", |
|
"middle": [], |
|
"last": "Kiela", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Holger", |
|
"middle": [], |
|
"last": "Schwenk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lo\u00efc", |
|
"middle": [], |
|
"last": "Barrault", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Antoine", |
|
"middle": [], |
|
"last": "Bordes", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "670--680", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D17-1070" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexis Conneau, Douwe Kiela, Holger Schwenk, Lo\u00efc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In Proceedings of the 2017 Conference on Empirical Methods in Nat- ural Language Processing, pages 670-680, Copen- hagen, Denmark. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Combinatory Logic", |
|
"authors": [ |
|
{

"first": "Haskell",

"middle": [

"B"

],

"last": "Curry",

"suffix": ""

},

{

"first": "Richard",

"middle": [],

"last": "Feys",

"suffix": ""

}
|
], |
|
"year": 1958, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Haskell B. Curry and Richard Feys. 1958. Combina- tory Logic. North-Holland, Amsterdam.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "4171--4186", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1423" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "SimVerb-3500: A largescale evaluation set of verb similarity", |
|
"authors": [ |
|
{ |
|
"first": "Daniela", |
|
"middle": [], |
|
"last": "Gerz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ivan", |
|
"middle": [], |
|
"last": "Vuli\u0107", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Felix", |
|
"middle": [], |
|
"last": "Hill", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roi", |
|
"middle": [], |
|
"last": "Reichart", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anna", |
|
"middle": [], |
|
"last": "Korhonen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2173--2182", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D16-1235" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daniela Gerz, Ivan Vuli\u0107, Felix Hill, Roi Reichart, and Anna Korhonen. 2016. SimVerb-3500: A large- scale evaluation set of verb similarity. In Proceed- ings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2173-2182, Austin, Texas. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Multi-step regression learning for compositional distributional semantics", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Grefenstette", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Dinu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Sadrzadeh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Baroni", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 10th International Conference on Computational Semantics (IWCS 2013) -Long Papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "131--142", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "E. Grefenstette, G. Dinu, Y. Zhang, M. Sadrzadeh, and M. Baroni. 2013. Multi-step regression learning for compositional distributional semantics. In Proceed- ings of the 10th International Conference on Com- putational Semantics (IWCS 2013) -Long Papers, pages 131-142, Potsdam, Germany. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Experimental support for a categorical compositional distributional model of meaning", |
|
"authors": [ |
|
{ |
|
"first": "Edward", |
|
"middle": [], |
|
"last": "Grefenstette", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mehrnoosh", |
|
"middle": [], |
|
"last": "Sadrzadeh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1394--1404", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Edward Grefenstette and Mehrnoosh Sadrzadeh. 2011. Experimental support for a categorical composi- tional distributional model of meaning. In Proceed- ings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1394-1404, Edinburgh, Scotland, UK. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "SimLex-999: Evaluating semantic models with (genuine) similarity estimation", |
|
"authors": [ |
|
{ |
|
"first": "Felix", |
|
"middle": [], |
|
"last": "Hill", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roi", |
|
"middle": [], |
|
"last": "Reichart", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anna", |
|
"middle": [], |
|
"last": "Korhonen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Computational Linguistics", |
|
"volume": "41", |
|
"issue": "4", |
|
"pages": "665--695", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/COLI_a_00237" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Felix Hill, Roi Reichart, and Anna Korhonen. 2015. SimLex-999: Evaluating semantic models with (gen- uine) similarity estimation. Computational Linguis- tics, 41(4):665-695.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "CCGbank: A corpus of CCG derivations and dependency structures extracted from the Penn treebank", |
|
"authors": [ |
|
{ |
|
"first": "Julia", |
|
"middle": [], |
|
"last": "Hockenmaier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Steedman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Computational Linguistics", |
|
"volume": "33", |
|
"issue": "3", |
|
"pages": "355--396", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Julia Hockenmaier and Mark Steedman. 2007. CCG- bank: A corpus of CCG derivations and dependency structures extracted from the Penn treebank. Com- putational Linguistics, 33(3):355-396.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Prior disambiguation of word tensors for constructing sentence vectors", |
|
"authors": [ |
|
{ |
|
"first": "Dimitri", |
|
"middle": [], |
|
"last": "Kartsaklis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mehrnoosh", |
|
"middle": [], |
|
"last": "Sadrzadeh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1590--1601", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dimitri Kartsaklis and Mehrnoosh Sadrzadeh. 2013. Prior disambiguation of word tensors for construct- ing sentence vectors. In Proceedings of the 2013 Conference on Empirical Methods in Natural Lan- guage Processing, pages 1590-1601, Seattle, Wash- ington, USA. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Separating disambiguation from composition in distributional semantics", |
|
"authors": [ |
|
{ |
|
"first": "Dimitri", |
|
"middle": [], |
|
"last": "Kartsaklis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mehrnoosh", |
|
"middle": [], |
|
"last": "Sadrzadeh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Pulman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the Seventeenth Conference on Computational Natural Language Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "114--123", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dimitri Kartsaklis, Mehrnoosh Sadrzadeh, and Stephen Pulman. 2013. Separating disambiguation from composition in distributional semantics. In Pro- ceedings of the Seventeenth Conference on Computa- tional Natural Language Learning, pages 114-123,", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Association for Computational Linguistics", |
|
"authors": [ |
|
{ |
|
"first": "Bulgaria", |
|
"middle": [], |
|
"last": "Sofia", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sofia, Bulgaria. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Syntactic structure distillation pretraining for bidirectional encoders", |
|
"authors": [ |
|
{ |
|
"first": "Adhiguna", |
|
"middle": [], |
|
"last": "Kuncoro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lingpeng", |
|
"middle": [], |
|
"last": "Kong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Fried", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dani", |
|
"middle": [], |
|
"last": "Yogatama", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Laura", |
|
"middle": [], |
|
"last": "Rimell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Phil", |
|
"middle": [], |
|
"last": "Blunsom", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2005.13482" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Adhiguna Kuncoro, Lingpeng Kong, Daniel Fried, Dani Yogatama, Laura Rimell, Chris Dyer, and Phil Blunsom. 2020. Syntactic structure distillation pre- training for bidirectional encoders. arXiv preprint arXiv:2005.13482.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Dependencybased word embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Omer", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Goldberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "302--308", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/v1/P14-2050" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Omer Levy and Yoav Goldberg. 2014. Dependency- based word embeddings. In Proceedings of the 52nd Annual Meeting of the Association for Com- putational Linguistics (Volume 2: Short Papers), pages 302-308, Baltimore, Maryland. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Improving distributional similarity with lessons learned from word embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Omer", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Goldberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ido", |
|
"middle": [], |
|
"last": "Dagan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "211--225", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/tacl_a_00134" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Omer Levy, Yoav Goldberg, and Ido Dagan. 2015. Im- proving distributional similarity with lessons learned from word embeddings. Transactions of the Associ- ation for Computational Linguistics, 3:211-225.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Learning adjective meanings with a tensor-based skip-gram model", |
|
"authors": [ |
|
{ |
|
"first": "Jean", |
|
"middle": [], |
|
"last": "Maillard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the Nineteenth Conference on Computational Natural Language Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "327--331", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/K15-1035" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jean Maillard and Stephen Clark. 2015. Learning adjective meanings with a tensor-based skip-gram model. In Proceedings of the Nineteenth Confer- ence on Computational Natural Language Learn- ing, pages 327-331, Beijing, China. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "A type-driven tensor-based semantics for CCG", |
|
"authors": [ |
|
{ |
|
"first": "Jean", |
|
"middle": [], |
|
"last": "Maillard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edward", |
|
"middle": [], |
|
"last": "Grefenstette", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the EACL 2014 Workshop on Type Theory and Natural Language Semantics (TTNLS)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "46--54", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/v1/W14-1406" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jean Maillard, Stephen Clark, and Edward Grefenstette. 2014. A type-driven tensor-based semantics for CCG. In Proceedings of the EACL 2014 Workshop on Type Theory and Natural Language Semantics (TTNLS), pages 46-54, Gothenburg, Sweden. Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "A SICK cure for the evaluation of compositional distributional semantic models", |
|
"authors": [ |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Marelli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stefano", |
|
"middle": [], |
|
"last": "Menini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Baroni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luisa", |
|
"middle": [], |
|
"last": "Bentivogli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raffaella", |
|
"middle": [], |
|
"last": "Bernardi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roberto", |
|
"middle": [], |
|
"last": "Zamparelli", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC-2014)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "216--223", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marco Marelli, Stefano Menini, Marco Baroni, Luisa Bentivogli, Raffaella Bernardi, and Roberto Zampar- elli. 2014. A SICK cure for the evaluation of com- positional distributional semantic models. In Pro- ceedings of the Ninth International Conference on Language Resources and Evaluation (LREC-2014), pages 216-223, Reykjavik, Iceland. European Lan- guages Resources Association (ELRA).", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Distributed representations of words and phrases and their compositionality", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "26", |
|
"issue": "", |
|
"pages": "3111--3119", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their composition- ality. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Ad- vances in Neural Information Processing Systems 26, pages 3111-3119. Curran Associates, Inc.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Evaluating neural word representations in tensor-based compositional settings", |
|
"authors": [ |
|
{ |
|
"first": "Dmitrijs", |
|
"middle": [], |
|
"last": "Milajevs", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dimitri", |
|
"middle": [], |
|
"last": "Kartsaklis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mehrnoosh", |
|
"middle": [], |
|
"last": "Sadrzadeh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Purver", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "708--719", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dmitrijs Milajevs, Dimitri Kartsaklis, Mehrnoosh Sadrzadeh, and Matthew Purver. 2014. Evaluating neural word representations in tensor-based compo- sitional settings. In Proceedings of the 2014 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 708-719.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Vector-based models of semantic composition", |
|
"authors": [ |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Mitchell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mirella", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of ACL-08: HLT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "236--244", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeff Mitchell and Mirella Lapata. 2008. Vector-based models of semantic composition. In Proceedings of ACL-08: HLT, pages 236-244, Columbus, Ohio. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Composition in distributional models of semantics", |
|
"authors": [ |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Mitchell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mirella", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Cognitive Science", |
|
"volume": "34", |
|
"issue": "8", |
|
"pages": "1388--1429", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1111/j.1551-6709.2010.01106.x" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeff Mitchell and Mirella Lapata. 2010. Composition in distributional models of semantics. Cognitive Sci- ence, 34(8):1388-1429.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "A practical and linguistically-motivated approach to compositional distributional semantics", |
|
"authors": [ |
|
{ |
|
"first": "Denis", |
|
"middle": [], |
|
"last": "Paperno", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Nghia The", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Pham", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Baroni", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "90--99", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/v1/P14-1009" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Denis Paperno, Nghia The Pham, and Marco Baroni. 2014. A practical and linguistically-motivated ap- proach to compositional distributional semantics. In Proceedings of the 52nd Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 90-99, Baltimore, Maryland. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Deep contextualized word representations", |
|
"authors": [ |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Peters", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Neumann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohit", |
|
"middle": [], |
|
"last": "Iyyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matt", |
|
"middle": [], |
|
"last": "Gardner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "2227--2237", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N18-1202" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. In Proceedings of the 2018 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Using sentence plausibility to learn the semantics of transitive verbs", |
|
"authors": [ |
|
{ |
|
"first": "Tamara", |
|
"middle": [], |
|
"last": "Polajnar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Laura", |
|
"middle": [], |
|
"last": "Rimell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Learning Semantics Workshop, NIPS", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tamara Polajnar, Laura Rimell, and Stephen Clark. 2014. Using sentence plausibility to learn the se- mantics of transitive verbs. Learning Semantics Workshop, NIPS.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Sentence-BERT: Sentence embeddings using Siamese BERTnetworks", |
|
"authors": [ |
|
{ |
|
"first": "Nils", |
|
"middle": [], |
|
"last": "Reimers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Iryna", |
|
"middle": [], |
|
"last": "Gurevych", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3982--3992", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-1410" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nils Reimers and Iryna Gurevych. 2019. Sentence- BERT: Sentence embeddings using Siamese BERT- networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 3982-3992, Hong Kong, China. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Recursive deep models for semantic compositionality over a sentiment treebank", |
|
"authors": [ |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Perelygin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jean", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Chuang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Potts", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1631--1642", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment tree- bank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631-1642, Seattle, Washington, USA. Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "The syntactic process", |
|
"authors": [ |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Steedman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mark Steedman. 2000. The syntactic process, vol- ume 24. MIT Press.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "Improved semantic representations from tree-structured long short-term memory networks", |
|
"authors": [ |
|
{ |
|
"first": "Kai Sheng", |
|
"middle": [], |
|
"last": "Tai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1556--1566", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/v1/P15-1150" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved semantic representations from tree-structured long short-term memory net- works. In Proceedings of the 53rd Annual Meet- ing of the Association for Computational Linguistics and the 7th International Joint Conference on Natu- ral Language Processing (Volume 1: Long Papers), pages 1556-1566, Beijing, China. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "Evaluating composition models for verb phrase elliptical sentence embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Gijs", |
|
"middle": [], |
|
"last": "Wijnholds", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mehrnoosh", |
|
"middle": [], |
|
"last": "Sadrzadeh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "261--271", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1023" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gijs Wijnholds and Mehrnoosh Sadrzadeh. 2019. Eval- uating composition models for verb phrase elliptical sentence embeddings. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 261-271, Minneapolis, Minnesota. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "Verb similarity on the taxonomy of wordnet", |
|
"authors": [ |
|
{ |
|
"first": "Dongqiang", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"Martin" |
|
], |
|
"last": "Powers", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the Third International WordNet Conference GWC", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "121--128", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dongqiang Yang and David Martin Powers. 2006. Verb similarity on the taxonomy of wordnet. In Pro- ceedings of the Third International WordNet Confer- ence GWC 2006, South Jeju Island, Korea, pages 121-128. Masaryk University.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF0": { |
|
"html": null, |
|
"content": "<table/>", |
|
"text": "Verb representations, ranging from 0-ary (vectors) to binary maps (cubes).", |
|
"type_str": "table", |
|
"num": null |
|
}, |
|
"TABREF1": { |
|
"html": null, |
|
"content": "<table/>", |
|
"text": "Similarity metrics on vectors, matrices and cubes, based on clustering centroids.", |
|
"type_str": "table", |
|
"num": null |
|
}, |
|
"TABREF2": { |
|
"html": null, |
|
"content": "<table/>", |
|
"text": "Building Representations for Transitive Sentences.", |
|
"type_str": "table", |
|
"num": null |
|
}, |
|
"TABREF4": { |
|
"html": null, |
|
"content": "<table><tr><td>: Spearman \u03c1 correlation on verb similarity</td></tr><tr><td>datasets. The subscript v indicates that we are look-</td></tr><tr><td>ing at the partial verb-only dataset. For SimVerb we</td></tr><tr><td>distinguish between the development and test set. State</td></tr><tr><td>of the art scores are taken from</td></tr></table>", |
|
"text": "", |
|
"type_str": "table", |
|
"num": null |
|
}, |
|
"TABREF7": { |
|
"html": null, |
|
"content": "<table><tr><td>C(+) denotes the additive model, whereas the other</td></tr><tr><td>rows represent the best score for compositional models</td></tr><tr><td>with different verb representations.</td></tr></table>", |
|
"text": "Spearman \u03c1 scores on compositional tasks.", |
|
"type_str": "table", |
|
"num": null |
|
}, |
|
"TABREF9": { |
|
"html": null, |
|
"content": "<table/>", |
|
"text": "Spearman \u03c1 scores on compositional tasks, for our proposed unary map verb representation versus state of the art sentence embedding methods.", |
|
"type_str": "table", |
|
"num": null |
|
} |
|
} |
|
} |
|
} |