{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:53:02.889282Z"
},
"title": "Learning Lexical Subspaces in a Distributional Vector Space",
"authors": [
{
"first": "Kushal",
"middle": [],
"last": "Arora",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "McGill University Qu\u00e9bec AI Instuite (Mila)",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Aishik",
"middle": [],
"last": "Chakraborty",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "McGill University Qu\u00e9bec AI Instuite (Mila)",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Jackie",
"middle": [
"C K"
],
"last": "Cheung",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "McGill University Qu\u00e9bec AI Instuite (Mila)",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we propose LEXSUB, a novel approach towards unifying lexical and distributional semantics. We inject knowledge about lexical-semantic relations into distributional word embeddings by defining subspaces of the distributional vector space in which a lexical relation should hold. Our framework can handle symmetric attract and repel relations (e.g., synonymy and antonymy, respectively), as well as asymmetric relations (e.g., hypernymy and meronomy). In a suite of intrinsic benchmarks, we show that our model outperforms previous approaches on relatedness tasks and on hypernymy classification and detection, while being competitive on word similarity tasks. It also outperforms previous systems on extrinsic classification tasks that benefit from exploiting lexical relational cues. We perform a series of analyses to understand the behaviors of our model. 1 * Equal contribution. 1 C o d e a v a i l a b l e a t https://github.com/ aishikchakraborty/LexSub.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we propose LEXSUB, a novel approach towards unifying lexical and distributional semantics. We inject knowledge about lexical-semantic relations into distributional word embeddings by defining subspaces of the distributional vector space in which a lexical relation should hold. Our framework can handle symmetric attract and repel relations (e.g., synonymy and antonymy, respectively), as well as asymmetric relations (e.g., hypernymy and meronomy). In a suite of intrinsic benchmarks, we show that our model outperforms previous approaches on relatedness tasks and on hypernymy classification and detection, while being competitive on word similarity tasks. It also outperforms previous systems on extrinsic classification tasks that benefit from exploiting lexical relational cues. We perform a series of analyses to understand the behaviors of our model. 1 * Equal contribution. 1 C o d e a v a i l a b l e a t https://github.com/ aishikchakraborty/LexSub.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Pre-trained word embeddings are the bedrock of modern natural language processing architectures. This success of pre-trained word embeddings is attributed to their ability to embody the distributional hypothesis (Harris, 1954; Firth, 1957) , which states that ''the words that are used in the same contexts tend to purport similar meanings'' (Harris, 1954) .",
"cite_spans": [
{
"start": 212,
"end": 226,
"text": "(Harris, 1954;",
"ref_id": null
},
{
"start": 227,
"end": 239,
"text": "Firth, 1957)",
"ref_id": "BIBREF15"
},
{
"start": 342,
"end": 356,
"text": "(Harris, 1954)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The biggest strength of the embedding methodstheir ability to cluster distributionally related words-is also their biggest weakness. This contextual clustering of words brings together words that might be used in a similar context in the text, but that might not necessarily be semantically similar, or worse, might even be antonyms (Lin et al., 2003) .",
"cite_spans": [
{
"start": 333,
"end": 351,
"text": "(Lin et al., 2003)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Several techniques have been proposed in the literature to modify word vectors to incorporate lexical-semantic relations into the embedding space (Yu and Dredze, 2014; Xu et al., 2014; Fried and Duh, 2014; Faruqui et al., 2015; Mrk\u0161i\u0107 et al., 2016; Glava\u0161 and Vuli\u0107, 2018) . The common theme of these approaches is that they modify the original distributional vector space using auxiliary lexical constraints to endow the vector space with a sense of lexical relations. However, a potential limitation of this approach is that the alteration of the original distributional space may cause a loss of the distributional information that made these vectors so useful in the first place, leading to degraded performance when used in the downstream tasks.",
"cite_spans": [
{
"start": 146,
"end": 167,
"text": "(Yu and Dredze, 2014;",
"ref_id": null
},
{
"start": 168,
"end": 184,
"text": "Xu et al., 2014;",
"ref_id": "BIBREF70"
},
{
"start": 185,
"end": 205,
"text": "Fried and Duh, 2014;",
"ref_id": "BIBREF16"
},
{
"start": 206,
"end": 227,
"text": "Faruqui et al., 2015;",
"ref_id": "BIBREF14"
},
{
"start": 228,
"end": 248,
"text": "Mrk\u0161i\u0107 et al., 2016;",
"ref_id": "BIBREF37"
},
{
"start": 249,
"end": 272,
"text": "Glava\u0161 and Vuli\u0107, 2018)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This problem could be further exacerbated when multiple relations are incorporated, especially as different lexical-semantic relations have different mathematical properties. For example, synonymy is a symmetric relation, whereas hypernymy and meronymy are asymmetric relations. It would be difficult to control the interacting effects that constraints induced by multiple relations could have on the distributional space.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The solution that we propose is to enforce a separation of concerns, in which distributional information is addressed by a central main vector space, whereas each lexical relation is handled by a separate subspace of the main distributional space. The interface between these components is then a projection operation from the main distributional space into a lexical subspace. Our framework, LEXSUB, thus formulates the problem of enforcing lexical constraints as a problem of learning a Figure 1 : A concept diagram contrasting other post-hoc approaches with our LEXSUB framework. Our LEXSUB framework enforces the lexical constraints in lexical relation-specific subspaces, whereas the other approaches try to learn lexical relations in the original distributional vector space.",
"cite_spans": [],
"ref_spans": [
{
"start": 489,
"end": 497,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "linear subspace for each of the lexical relations within the distributional vector space. Figure 1 shows a conceptual diagram of the relationship between the distributional space and the lexical subspaces in LEXSUB.",
"cite_spans": [],
"ref_spans": [
{
"start": 90,
"end": 98,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We show that LEXSUB outperforms previous methods in a variety of evaluations, particularly on intrinsic relatedness correlation tasks, and in extrinsic evaluations in downstream settings. We also show that LEXSUB is competitive with existing models on intrinsic similarity evaluation tasks. We run a series of analyses to understand why our method improves performance in these settings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our experimental results suggest that explicitly separating lexical relations into their own subspaces allows the model to better capture the structure of each lexical relation without being polluted by information from the distributional space. Conversely, the main distributional vector space is not polluted by the need to model lexical relations in the same space, as is the case for previous models. Furthermore, the explicit linear projection that is learned ensures that a relation-specific subspace exists in the original distributional vector space, and can thus be discovered by a downstream model if the extrinsic task requires knowledge about lexical-semantic relations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Contributions. In summary, we propose LEXSUB, a framework for learning lexical linear subspaces within the distributional vector space. The proposed framework can model all major kinds of lexical-semantic relations, namely, attract-sym-metric, repel-symmetric, and attract-asymmetric. We demonstrate that our approach outperforms or is competitive with previous approaches on intrinsic evaluations, and outperforms them on a suite of downstream extrinsic tasks that might benefit from exploiting lexical relational information. Finally, we design a series of experiments to better understand the behaviors of our model and provide evidence that the separation of concerns achieved by LEXSUB is responsible for its improved performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Several approaches have been proposed towards unifying the lexical and distributional semantics. These approaches can broadly be classified into two categories: 1) post-hoc, and 2) ad-hoc approaches. Post-hoc approaches finetune pre-trained embeddings by fitting them with lexical relations. On the other hand, ad-hoc models add auxiliary lexical constraints to the distributional similarity loss. Both post-hoc and ad-hoc approaches rely on lexical databases such as WordNet (Miller, 1995) , FrameNet (Baker et al., 1998) , BabelNet (Navigli and Ponzetto, 2012) , and PPDB (Ganitkevitch et al., 2013; Pavlick et al., 2015) for symbolically encoded lexical relations that are translated into lexical constraints. These lexical constraints endow the embeddings with lexical-semantic relational information.",
"cite_spans": [
{
"start": 476,
"end": 490,
"text": "(Miller, 1995)",
"ref_id": "BIBREF36"
},
{
"start": 502,
"end": 522,
"text": "(Baker et al., 1998)",
"ref_id": "BIBREF3"
},
{
"start": 534,
"end": 562,
"text": "(Navigli and Ponzetto, 2012)",
"ref_id": "BIBREF39"
},
{
"start": 574,
"end": 601,
"text": "(Ganitkevitch et al., 2013;",
"ref_id": "BIBREF17"
},
{
"start": 602,
"end": 623,
"text": "Pavlick et al., 2015)",
"ref_id": "BIBREF46"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Post-hoc Approaches. In the post-hoc approach, pre-trained word vectors such as GloVe (Pennington et al., 2014) , Word2Vec (Mikolov et al., 2013) , FastText (Bojanowski et al., 2017) , or Paragram (Wieting et al., 2015) are fine-tuned to endow them with lexical relational information (Faruqui et al., 2015; Rothe and Sch\u00fctze, 2015; Wieting et al., 2015; Mrk\u0161i\u0107 et al., 2016 Jo, 2018; Jo and Choi, 2018; Glava\u0161 and Vuli\u0107, 2018) . In this paper, we primarily discuss LEXSUB as a post-hoc model. This formulation of LEXSUB is similar to the other post-hoc approaches mentioned above with the significant difference that the lexical relations are enforced in a lexical subspace instead of the original distributional vector space. Rothe et al. (2016) explores the idea of learning specialized subspaces with to reduce the dimensionality of distributional space such that it maximally preserves relevant task-specific information at the expense of distributional information. Unlike Rothe et al. (2016) , our proposed method tries to retain the distributional information in the embeddings so that they can be used as a general-purpose initialization in any NLP pipeline. Embeddings from Rothe et al. (2016) 's method can only be used for the task on which they were trained.",
"cite_spans": [
{
"start": 86,
"end": 111,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF47"
},
{
"start": 123,
"end": 145,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF34"
},
{
"start": 157,
"end": 182,
"text": "(Bojanowski et al., 2017)",
"ref_id": "BIBREF7"
},
{
"start": 197,
"end": 219,
"text": "(Wieting et al., 2015)",
"ref_id": "BIBREF69"
},
{
"start": 285,
"end": 307,
"text": "(Faruqui et al., 2015;",
"ref_id": "BIBREF14"
},
{
"start": 308,
"end": 332,
"text": "Rothe and Sch\u00fctze, 2015;",
"ref_id": "BIBREF52"
},
{
"start": 333,
"end": 354,
"text": "Wieting et al., 2015;",
"ref_id": "BIBREF69"
},
{
"start": 355,
"end": 374,
"text": "Mrk\u0161i\u0107 et al., 2016",
"ref_id": "BIBREF37"
},
{
"start": 375,
"end": 384,
"text": "Jo, 2018;",
"ref_id": "BIBREF24"
},
{
"start": 385,
"end": 403,
"text": "Jo and Choi, 2018;",
"ref_id": "BIBREF25"
},
{
"start": 404,
"end": 427,
"text": "Glava\u0161 and Vuli\u0107, 2018)",
"ref_id": "BIBREF20"
},
{
"start": 728,
"end": 747,
"text": "Rothe et al. (2016)",
"ref_id": "BIBREF51"
},
{
"start": 979,
"end": 998,
"text": "Rothe et al. (2016)",
"ref_id": "BIBREF51"
},
{
"start": 1184,
"end": 1203,
"text": "Rothe et al. (2016)",
"ref_id": "BIBREF51"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Ad-hoc Approaches. The ad-hoc class of approaches add auxiliary lexical constraints to the distributional similarity loss function, usually, a language modeling objective like CBOW (Mikolov et al., 2013) or recurrent neural network language model (Mikolov et al., 2010; Sundermeyer et al., 2012) . These constraints can either be viewed as a prior or as a regularizer to the distributional objective (Yu and Dredze, 2014; Xu et al., 2014; Kiela et al., 2015a; Fried and Duh, 2014) . In other work, the original language modeling objective is modified to incorporate lexical constraints (Liu et al., 2015; Osborne et al., 2016; Bollegala et al., 2016; Ono et al., 2015; Nguyen et al., 2016 Nguyen et al., , 2017 Tifrea et al., 2018) . We discuss the ad-hoc formulation of LEXSUB in Appendix A.",
"cite_spans": [
{
"start": 181,
"end": 203,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF34"
},
{
"start": 247,
"end": 269,
"text": "(Mikolov et al., 2010;",
"ref_id": "BIBREF35"
},
{
"start": 270,
"end": 295,
"text": "Sundermeyer et al., 2012)",
"ref_id": "BIBREF60"
},
{
"start": 400,
"end": 421,
"text": "(Yu and Dredze, 2014;",
"ref_id": null
},
{
"start": 422,
"end": 438,
"text": "Xu et al., 2014;",
"ref_id": "BIBREF70"
},
{
"start": 439,
"end": 459,
"text": "Kiela et al., 2015a;",
"ref_id": "BIBREF26"
},
{
"start": 460,
"end": 480,
"text": "Fried and Duh, 2014)",
"ref_id": "BIBREF16"
},
{
"start": 586,
"end": 604,
"text": "(Liu et al., 2015;",
"ref_id": "BIBREF29"
},
{
"start": 605,
"end": 626,
"text": "Osborne et al., 2016;",
"ref_id": "BIBREF44"
},
{
"start": 627,
"end": 650,
"text": "Bollegala et al., 2016;",
"ref_id": "BIBREF8"
},
{
"start": 651,
"end": 668,
"text": "Ono et al., 2015;",
"ref_id": "BIBREF43"
},
{
"start": 669,
"end": 688,
"text": "Nguyen et al., 2016",
"ref_id": "BIBREF41"
},
{
"start": 689,
"end": 710,
"text": "Nguyen et al., , 2017",
"ref_id": "BIBREF40"
},
{
"start": 711,
"end": 731,
"text": "Tifrea et al., 2018)",
"ref_id": "BIBREF61"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "An alternate axis along which to classify these approaches is by their ability to model different types of lexical relations. These types can be enumerated as symmetric-attract (synonymy), symmetric-repel (antonymy), and asymmetricattract (hypernymy, meronymy). Most approaches mentioned above can handle symmetric-attract type relations, but only a few of them can model other types of lexical relations. For example, Ono et al. (2015) Our proposed framework can model all types of lexical relations, namely, symmetric-attract, symmetric-repel, and asymmetric-attract, and uses of all four major lexical relations found in lexical resources like WordNet, namely, synonymy, antonymy, hypernymy, and meronymy, and could flexibly include more relations. To our knowledge, we are the first to use meronymy lexical relations.",
"cite_spans": [
{
"start": 419,
"end": 436,
"text": "Ono et al. (2015)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Other Approaches. Several approaches do not fall into either of the categories mentioned above. A subset of these approaches attempts to learn lexical relations, especially hypernymy, directly by embedding a lexical database, for example, Poincar\u00e9 Embeddings (Nickel and or Order-Embeddings (Vendrov et al., 2015) . Another set of approaches, like DIH (Chang et al., 2018) or Word2Gauss (Vilnis and McCallum, 2014; Athiwaratkun and Wilson, 2017) attempt to learn the hypernymy relation directly from the corpus without relying on any lexical database. The third set of approaches attempt to learn a scoring function over a sparse bag of words (SBOW) features. These approaches are summarized by Shwartz et al. (2017) .",
"cite_spans": [
{
"start": 291,
"end": 313,
"text": "(Vendrov et al., 2015)",
"ref_id": "BIBREF63"
},
{
"start": 348,
"end": 372,
"text": "DIH (Chang et al., 2018)",
"ref_id": null
},
{
"start": 387,
"end": 414,
"text": "(Vilnis and McCallum, 2014;",
"ref_id": "BIBREF64"
},
{
"start": 415,
"end": 445,
"text": "Athiwaratkun and Wilson, 2017)",
"ref_id": "BIBREF1"
},
{
"start": 695,
"end": 716,
"text": "Shwartz et al. (2017)",
"ref_id": "BIBREF56"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "V {x 1 , x 2 , x 3 , . . . .x n },",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Given a vocabulary set",
"sec_num": null
},
{
"text": "our objective is to create a set of vectors {x 1 , x 2 , x 3 , . . . , x n } \u2208 R d that respect both distributional similarity as well as lexicalsemantic relations. We refer to these vectors as the main vector space embeddings. Let R be the relation set corresponding to a lexical-semantic relation r. The elements of this relation set are ordered pairs of words",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Given a vocabulary set",
"sec_num": null
},
{
"text": "(x i , x j ) \u2208 V \u00d7 V ; that is, if (x i , x j ) \u2208 R,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Given a vocabulary set",
"sec_num": null
},
{
"text": "then x i and x j are related by the lexical relation r. For symmetric relations like synonymy and antonymy, (",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Given a vocabulary set",
"sec_num": null
},
{
"text": "x i , x j ) \u2208 R implies (x j , x i ) \u2208 R.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Given a vocabulary set",
"sec_num": null
},
{
"text": "Similarly, for asymmetric relations like hypernymy and meronymy, x j is related to",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Given a vocabulary set",
"sec_num": null
},
{
"text": "x i by relation r if (x i , x j ) \u2208 R and (x j , x i ) /",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Given a vocabulary set",
"sec_num": null
},
{
"text": "\u2208 R. Our model has two components. The first component helps the model learn the lexical subspaces within the distributional vector space. These subspaces are learned using a loss function L lex defined in Section 3.2.4. The second component helps the model learn the distributional vector space. The training of this vector space is aided by a loss function L dist defined in Section 3.3. The total loss that we optimize is therefore defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Given a vocabulary set",
"sec_num": null
},
{
"text": "L total = L dist +L lex .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Given a vocabulary set",
"sec_num": null
},
{
"text": "Distance Function. In the subsequent subsections, we will build lexical subspace distance functions using the cosine distance function, d(x, y) = 1 \u2212 x \u2022 y/( x y ) where x and y are embeddings for the word x and y, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Given a vocabulary set",
"sec_num": null
},
{
"text": "In this section, we discuss three types of abstract lexical losses-attract symmetric, attract asymmetric, and repel symmetric-that are commonly found in lexical databases like WordNet. We then discuss a negative sampling loss that prevents the model from finding trivial solutions to the lexical objective.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Lexical Subspaces in the Distributional Space",
"sec_num": "3.2"
},
{
"text": "Let x i and x j be a pair of words related by a lexical relation r. We project their embeddings",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract Lexical Relation Loss",
"sec_num": "3.2.1"
},
{
"text": "x i , x j \u2208 R d to an h-dimensional lexical subspace (h < d) using a learned relation-specific projection matrix W proj r with dimensions h \u00d7 d.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract Lexical Relation Loss",
"sec_num": "3.2.1"
},
{
"text": "The distance between any two words x i and x j in the lexical subspace is defined as a distance between their projected embeddings. We define this lexicorelational subspace specific distance function",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract Lexical Relation Loss",
"sec_num": "3.2.1"
},
{
"text": "d proj r as d proj r (x i , x j ) = d(W proj r x i , W proj r x j ) (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract Lexical Relation Loss",
"sec_num": "3.2.1"
},
{
"text": "The lexical subspaces can be categorized into three types: attract symmetric, attract asymmetric, and repel symmetric. In an attract symmetric subspace, the objective is to minimize the distance between the lexically related word pair x i and x j . The corresponding loss function is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract Lexical Relation Loss",
"sec_num": "3.2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L att-sym r = 1 |R| x i ,x j \u2208R d proj r (x i , x j )",
"eq_num": "(2)"
}
],
"section": "Abstract Lexical Relation Loss",
"sec_num": "3.2.1"
},
{
"text": "Similarly, for repel symmetric lexical relations such as antonymy, the goal is to maximize the distance (up to a margin \u03b3) between the two projected embeddings. We define a repel loss for r, L rep r , as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract Lexical Relation Loss",
"sec_num": "3.2.1"
},
{
"text": "L rep r = 1 |R| x i ,x j \u2208R max 0, \u03b3 \u2212 d proj r (x i , x j )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract Lexical Relation Loss",
"sec_num": "3.2.1"
},
{
"text": "(3) In the case of attract asymmetric relations, we encode the asymmetry of the relationship between x i and x j by defining an asymmetric distance function d asym r in terms of this affine transformation of embeddings of x i and x j as: (an h-dimensional vector) are the parameters of the affine function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract Lexical Relation Loss",
"sec_num": "3.2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "d asym r (x i , x j ) = d proj r (W asym r x i + b asym r , x j )",
"eq_num": "(4)"
}
],
"section": "Abstract Lexical Relation Loss",
"sec_num": "3.2.1"
},
{
"text": "The attract asymmetric loss function is then defined in terms of d asym r as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract Lexical Relation Loss",
"sec_num": "3.2.1"
},
{
"text": "L att-asym r = 1 |R| x i ,x j \u2208R \uf8ee \uf8f0 d asym r (x i , x j ) + max 0, \u03b3 \u2212 d asym r (x j , x i ) \uf8f9 \uf8fb (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract Lexical Relation Loss",
"sec_num": "3.2.1"
},
{
"text": "The first term of the L att-asym r brings x i 's projected embedding closer to the embedding of x j . The second term avoids the trivial solution of parameterized affine function collapsing to a identity function. This is achieved by maximizing the distance between x i and the affine projection of x j .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract Lexical Relation Loss",
"sec_num": "3.2.1"
},
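To make Equations (1)-(5) concrete, the following is a minimal PyTorch sketch of the relation-specific projection distance and the three abstract lexical losses. All names, tensor shapes (including the square shape assumed for W_asym), and the batching convention (x_i, x_j as [batch, d] embedding matrices) are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

d, h = 300, 100                                   # main-space and subspace dimensionality (h < d)
W_proj = torch.randn(h, d, requires_grad=True)    # relation-specific projection (Eq. 1)
W_asym = torch.randn(d, d, requires_grad=True)    # affine map for asymmetric relations (shape assumed)
b_asym = torch.zeros(d, requires_grad=True)
gamma = 2.0                                       # repel margin

def cosine_distance(x, y):
    # d(x, y) = 1 - cos(x, y), computed row-wise over a batch
    return 1.0 - F.cosine_similarity(x, y, dim=-1)

def proj_distance(x_i, x_j):
    # Eq. (1): distance measured after projecting both words into the lexical subspace
    return cosine_distance(x_i @ W_proj.T, x_j @ W_proj.T)

def attract_symmetric_loss(x_i, x_j):
    # Eq. (2): pull related pairs together inside the subspace
    return proj_distance(x_i, x_j).mean()

def repel_symmetric_loss(x_i, x_j):
    # Eq. (3): push antonym-like pairs at least gamma apart inside the subspace
    return torch.clamp(gamma - proj_distance(x_i, x_j), min=0.0).mean()

def asym_distance(x_i, x_j):
    # Eq. (4): apply the learned affine map to x_i before measuring the subspace distance
    return proj_distance(x_i @ W_asym.T + b_asym, x_j)

def attract_asymmetric_loss(x_i, x_j):
    # Eq. (5): attract in the relation's direction, repel the reverse direction up to gamma
    return (asym_distance(x_i, x_j)
            + torch.clamp(gamma - asym_distance(x_j, x_i), min=0.0)).mean()
```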
{
"text": "We supplement our lexical loss functions with a negative sampling loss. This helps avoid the trivial solutions such as all words embeddings collapsing to a single point for attract relations and words being maximally distant in the repel subspace.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Negative Sampling",
"sec_num": "3.2.2"
},
{
"text": "We generate negative samples by uniformly sampling n words from the vocabulary V . For attract subspaces (both attract symmetric and attract asymmetric), we ensure that negatively sampled words in the subspace are at a minimum distance \u03b4 min r from x i . Similarly, for repel subspaces, we ensure that negative samples are at a distance of at-most \u03b4 max r from x i . The attract and repel negative sampling losses are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Negative Sampling",
"sec_num": "3.2.2"
},
{
"text": "L attr-neg r = x i ,x j n l=1 max 0, \u03b4 min r \u2212 d proj r (x i , x l ) L rep-neg r = x i ,x j n l=1 max 0, d proj r (x i , x l ) \u2212 \u03b4 max r",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Negative Sampling",
"sec_num": "3.2.2"
},
{
"text": "where x l indicates the negative sample drawn from a uniform distribution over vocabulary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Negative Sampling",
"sec_num": "3.2.2"
},
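A sketch of the negative-sampling terms described above, under the assumption that dist_fn is a subspace distance such as the proj_distance function from the previous sketch; the sampling routine, margins, and names are illustrative rather than the authors' released code.

```python
import torch

def sample_negatives(embeddings, batch_size, n=10):
    # Draw n negative words per positive pair, uniformly over the vocabulary
    idx = torch.randint(0, embeddings.size(0), (batch_size, n))
    return embeddings[idx]                                   # [batch, n, d]

def attract_negative_loss(x_i, negatives, dist_fn, delta_min):
    # Keep negatives at least delta_min away from x_i in an attract subspace
    x_rep = x_i.unsqueeze(1).expand_as(negatives).reshape(-1, x_i.size(-1))
    dists = dist_fn(x_rep, negatives.reshape(-1, negatives.size(-1)))
    return torch.clamp(delta_min - dists, min=0.0).sum()

def repel_negative_loss(x_i, negatives, dist_fn, delta_max):
    # Keep negatives within delta_max of x_i in the repel subspace
    x_rep = x_i.unsqueeze(1).expand_as(negatives).reshape(-1, x_i.size(-1))
    dists = dist_fn(x_rep, negatives.reshape(-1, negatives.size(-1)))
    return torch.clamp(dists - delta_max, min=0.0).sum()
```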
{
"text": "Synonymy Relations. As synonymy is an attract symmetric relation, we use L attr-sym syn as our lexical loss and L attr-neg syn as our negative sampling loss, with the negative sampling loss weighted by a negative sampling ratio hyperparameter \u00b5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relation-Specific Losses",
"sec_num": "3.2.3"
},
{
"text": "L syn = L attr-sym syn + \u00b5L attr-neg syn (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relation-Specific Losses",
"sec_num": "3.2.3"
},
{
"text": "Antonymy Relations. Antonymy relation is the mirror image of the synonymy relation; hence, we use the same subspace for both the relations; (i.e., W proj ant = W proj syn ). As antonymy is a repel lexical relation, we use L rep syn as our lexical loss and L rep-neg syn as our negative loss.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relation-Specific Losses",
"sec_num": "3.2.3"
},
{
"text": "L ant = L rep syn + \u00b5L rep-neg syn (7)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relation-Specific Losses",
"sec_num": "3.2.3"
},
{
"text": "Hypernymy Relations. Hypernymy is an attract asymmetric relation, hence, we use L attr-asym hyp as the lexical loss and L attr-neg hyp as negative sampling loss.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relation-Specific Losses",
"sec_num": "3.2.3"
},
{
"text": "L hyp = L attr-asym hyp + \u00b5L attr-neg hyp (8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relation-Specific Losses",
"sec_num": "3.2.3"
},
{
"text": "Meronymy Relations. Meronymy is also an attract-asymmetric relation. Therefore, in a similar manner, the lexical loss will be L attr-asym mer and negative sampling loss will be L attr-neg mer :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relation-Specific Losses",
"sec_num": "3.2.3"
},
{
"text": "L mer = L attr-asym mer + \u00b5L attr-neg mer (9)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relation-Specific Losses",
"sec_num": "3.2.3"
},
{
"text": "Based on the individual lexical losses defined above, the total lexical subspace loss defined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Total Lexical Subspace Loss",
"sec_num": "3.2.4"
},
{
"text": "L lex = \u03bd syn L syn +\u03bd ant L ant +\u03bd hyp L hyp +\u03bd mer L mer (10)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Total Lexical Subspace Loss",
"sec_num": "3.2.4"
},
{
"text": "where \u03bd syn , \u03bd ant , \u03bd hyp , \u03bd mer \u2208 [0, 1] are lexical relation ratio hyperparameters weighing the importance of each of the lexical relation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Total Lexical Subspace Loss",
"sec_num": "3.2.4"
},
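The relation-specific losses of Equations (6)-(9) and the total lexical loss of Equation (10) are simple weighted sums; a schematic composition, with function names chosen purely for illustration:

```python
def relation_loss(lexical_term, negative_term, mu):
    # Eqs. (6)-(9): each relation combines its lexical loss with a mu-weighted
    # negative-sampling loss
    return lexical_term + mu * negative_term

def total_lexical_loss(losses, nus):
    # Eq. (10): weighted sum over the four relations; losses and nus are dicts
    # keyed by 'syn', 'ant', 'hyp', 'mer'
    return sum(nus[r] * losses[r] for r in ("syn", "ant", "hyp", "mer"))
```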
{
"text": "In the post-hoc setting, we start from pretrained embeddings",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preserving the Distributional Space",
"sec_num": "3.3"
},
{
"text": "X = [x 1 , x 2 , . . . , x n ] T \u2208 R n\u00d7d to learn retrofitted embeddings X \u2032 = [x \u2032 1 , x \u2032 2 , . . . , x \u2032 n ] T \u2208 R n\u00d7d .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preserving the Distributional Space",
"sec_num": "3.3"
},
{
"text": "The L dist component aims to minimize the change in L2 distance between the word embeddings in order to preserve the distributional information in the pre-trained embeddings:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preserving the Distributional Space",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L dist = 1 n X \u2212 X \u2032 2 2",
"eq_num": "(11)"
}
],
"section": "Preserving the Distributional Space",
"sec_num": "3.3"
},
{
"text": "The overall loss of LEXSUB is ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overall Loss Function",
"sec_num": "3.4"
},
{
"text": "L total = L dist +L lex .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overall Loss Function",
"sec_num": "3.4"
},
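A minimal sketch of the post-hoc objective: the distributional-preservation term of Equation (11) plus the lexical subspace loss. Function names are illustrative; in training, gradients would flow into the retrofitted embeddings X' and the relation-specific parameters.

```python
import torch

def distributional_loss(X, X_prime):
    # Eq. (11): L_dist = (1/n) * ||X - X'||_2^2, with X the pretrained embeddings
    # and X' the retrofitted embeddings being learned
    return ((X_prime - X) ** 2).sum() / X.size(0)

def total_loss(X, X_prime, lexical_loss):
    # L_total = L_dist + L_lex
    return distributional_loss(X, X_prime) + lexical_loss
```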
{
"text": "In this section, we describe the datasets and models that we use in our experiments. The output of our model is the main vector space embedding that is endowed with the specialized lexical subspaces. All our evaluations are done on the main vector space embeddings unless stated otherwise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Setup",
"sec_num": "4"
},
{
"text": "Our experiments were conducted using GloVe embeddings (Pennington et al., 2014) of 300dimension trained on 6 billion tokens from the Wikipedia 2014 and Gigaword 5 corpus. The vocabulary size for GloVe embeddings is 400,000.",
"cite_spans": [
{
"start": 54,
"end": 79,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF47"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training Dataset",
"sec_num": "4.1"
},
{
"text": "We use WordNet (Miller, 1995) as the lexical database for all experiments. We consider all four types of lexical relations: synonymy, antonymy, hypernymy, and meronymy. Only those relation triples where both words occur in the vocabulary are considered. We consider both instance and concept hypernyms for hypernymy relations, and for meronomy relations, part, substance, as well as member meronyms were included as constraints. Table 1 shows the relation-wise split used in the experiments.",
"cite_spans": [
{
"start": 15,
"end": 29,
"text": "(Miller, 1995)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [
{
"start": 429,
"end": 436,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Lexical Resource",
"sec_num": "4.2"
},
{
"text": "We learn 300-dimensional embeddings during training. We use Adagrad (Duchi et al., 2011) as our optimizer with learning rate 0.5. We train the models for 100 epochs. For the lexical losses, we take n = 10, \u00b5 = 10, \u03b3 = 2, \u03b4 syn max = 1.5, \u03b4 syn min = 0.5, \u03b4 mer min = 1, \u03b4 hyp min = 1.0, and \u03bd syn = 0.01, \u03bd hyp = 0.01, \u03bd mer = 0.001.",
"cite_spans": [
{
"start": 68,
"end": 88,
"text": "(Duchi et al., 2011)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models and Hyperparameters",
"sec_num": "4.3"
},
{
"text": "We rely on the validation sets corresponding to our extrinsic tasks (Section 6.2) for choosing these hyperparameter values. We ran a grid search on the hyperparameter space and selected the final set of hyperparameters by first ranking validation results for each task in descending order, then calculating the mean rank across the tasks. We selected the hyperparameters that achieved the best (i.e., lowest) mean rank.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models and Hyperparameters",
"sec_num": "4.3"
},
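A small sketch of the mean-rank selection procedure described above, assuming the validation results are arranged as a configurations-by-tasks score array (an assumption about bookkeeping, not the authors' tooling):

```python
import numpy as np

def select_by_mean_rank(val_scores):
    # val_scores: [n_configs, n_tasks] array of validation scores (higher is better).
    # Rank the configurations per task (1 = best), average the ranks across tasks,
    # and return the index of the configuration with the lowest mean rank.
    ranks = (-val_scores).argsort(axis=0).argsort(axis=0) + 1
    return int(ranks.mean(axis=1).argmin())
```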
{
"text": "Vanilla. The Vanilla baselines refer to the original GloVe word embeddings without any lexical constraints.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "5"
},
{
"text": "Retrofitting. Retrofitting (Faruqui et al., 2015) uses similarity constraints from lexical resources to pull similar words together. The objective function that retrofitting optimizes consists of a reconstruction loss L dist and a symmetric-attract loss L syn att-sym with d = h, W proj syn = I h , and",
"cite_spans": [
{
"start": 27,
"end": 49,
"text": "(Faruqui et al., 2015)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "5"
},
{
"text": "d = \u2022 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "5"
},
{
"text": "Counterfitting. Counterfitting (Mrk\u0161i\u0107 et al., 2016) builds up on retrofitting but also support repel symmetric relations. Their objective function consists of three parts: Synonym Attract, Antonym Repel, and a Vector Space Preservation loss, similar to L syn att-sym , L syn rep-sym , and L dist , respectively.",
"cite_spans": [
{
"start": 31,
"end": 52,
"text": "(Mrk\u0161i\u0107 et al., 2016)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "5"
},
{
"text": "LEAR. LEAR expands the counterfitting framework by adding a Lexical Entailment (LE) loss. This LE loss encodes a hierarchical ordering between concepts (hyponym-hypernym relationships) and can handle attract asymmetric relations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "5"
},
{
"text": "We train each of the baseline models using the lexical resources described in Section 4.2. LEAR, LEXSUB, and Counterfitting were trained on all four lexical relations whereas the Retrofitting was trained only on attract relations, namely, synonymy, hypernymy, and meronymy. This is due to Retofitting's inability to handle repel type relations. We also report the results of our experiments with LEXSUB and the baselines trained on the lexical resource from LEAR in Appendix B.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "5"
},
{
"text": "Word Similarity Task. We use four popular word similarity test sets to evaluate word similarity. We use the men3k dataset by (Bruni et al., 2014) and the relatedness section of the WordSim353 dataset (Agirre et al., 2009) to measure the ability of the embedding's to retain the distributional information. We use the SimLex-999 dataset (Hill et al., 2015) and SimVerb 3500 (Gerz et al., 2016) to evaluate the embedding's ability to detect graded synonymy and antonymy relations. Both the relatedness and similarity tasks were evaluated in the main vector space for LEXSUB.",
"cite_spans": [
{
"start": 125,
"end": 145,
"text": "(Bruni et al., 2014)",
"ref_id": null
},
{
"start": 200,
"end": 221,
"text": "(Agirre et al., 2009)",
"ref_id": "BIBREF0"
},
{
"start": 336,
"end": 355,
"text": "(Hill et al., 2015)",
"ref_id": "BIBREF22"
},
{
"start": 373,
"end": 392,
"text": "(Gerz et al., 2016)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Intrinsic Tasks",
"sec_num": "6.1"
},
{
"text": "Hypernymy Tasks. Following Roller et al. 2018, we consider three tasks involving hypernymy: graded hypernymy evaluation, hypernymy classification, and directionality detection. We use the hypernymy subspace embeddings for LEXSUB for these experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Intrinsic Tasks",
"sec_num": "6.1"
},
{
"text": "For graded hypernymy evaluation, we use the Hyperlex dataset and report the results on the complete hyperlex dataset. We measure Spearman's \u03c1 between the cosine similarity of embeddings of the word pairs and the human evaluations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Intrinsic Tasks",
"sec_num": "6.1"
},
{
"text": "The hypernymy classification task is an unsupervised task to classify whether a pair of words are hypernym/hyponym of each other. We consider four of the five benchmark datasets considered in Roller et al. (2018) ; namely, BLESS (Baroni and Lenci, 2011), LEDS (Baroni et al., 2012) , EVAL (Santus et al., 2014) , and WBLESS (Weeds et al., 2014) . We do not consider the SHWARTZ dataset (Shwartz et al., 2016) , as the number of OOV was high (38% for LEXSUB, Retrofitting, and LEAR and 60% for Counterfitting for GloVe). The evaluation is done by ranking the word pairs by cosine similarity and computing the mean average precision over the ranked list.",
"cite_spans": [
{
"start": 192,
"end": 212,
"text": "Roller et al. (2018)",
"ref_id": "BIBREF50"
},
{
"start": 260,
"end": 281,
"text": "(Baroni et al., 2012)",
"ref_id": "BIBREF4"
},
{
"start": 289,
"end": 310,
"text": "(Santus et al., 2014)",
"ref_id": "BIBREF53"
},
{
"start": 324,
"end": 344,
"text": "(Weeds et al., 2014)",
"ref_id": "BIBREF68"
},
{
"start": 386,
"end": 408,
"text": "(Shwartz et al., 2016)",
"ref_id": "BIBREF55"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Intrinsic Tasks",
"sec_num": "6.1"
},
{
"text": "The hypernymy directionality detection task is designed to detect which of the two terms is the hypernym of the other; that is, given two words w 1 and w 2 , is w 1 the hypernym of w 2 or vice versa. We consider two of the three datasets from Roller et al., (2018) ; namely, WBLESS and BIBLESS (Kiela et al., 2015b) . The classification setup is similar to Roller et al. (2018) and is done using the open source package provided by the authors. 2",
"cite_spans": [
{
"start": 243,
"end": 264,
"text": "Roller et al., (2018)",
"ref_id": "BIBREF50"
},
{
"start": 294,
"end": 315,
"text": "(Kiela et al., 2015b)",
"ref_id": "BIBREF27"
},
{
"start": 357,
"end": 377,
"text": "Roller et al. (2018)",
"ref_id": "BIBREF50"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Intrinsic Tasks",
"sec_num": "6.1"
},
{
"text": "We evaluate our embeddings on five extrinsic tasks that could benefit from the lexical relational cues. We do so by injecting our embeddings into recent high-performing models for those tasks. The tasks and models are: NER Classification. We use the CoNLL 2003 NER task (Tjong Kim Sang and De Meulder, 2003) for the Named Entity Recognition (NER) Task. The dataset consists of news stories from Reuters where the entities have been labeled into four classes (PER, LOC, ORG, MISC) . We use the model proposed by for the NER task.",
"cite_spans": [
{
"start": 270,
"end": 307,
"text": "(Tjong Kim Sang and De Meulder, 2003)",
"ref_id": "BIBREF62"
},
{
"start": 458,
"end": 479,
"text": "(PER, LOC, ORG, MISC)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Extrinsic Tasks",
"sec_num": "6.2"
},
{
"text": "Sentiment Classification. We use the Bi-Attentive Classification Network (BTN) by McCann et al. (2017) to train a sentiment classifier. We train all models for sentiment classification on the Stanford Sentiment Treebank (SST) (Socher et al., 2013) . We use a two-class granularity where we remove the ''neutral'' class following McCann et al. 2017and just use the ''positive'' and ''negative'' classes for classification.",
"cite_spans": [
{
"start": 82,
"end": 102,
"text": "McCann et al. (2017)",
"ref_id": "BIBREF32"
},
{
"start": 226,
"end": 247,
"text": "(Socher et al., 2013)",
"ref_id": "BIBREF58"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Extrinsic Tasks",
"sec_num": "6.2"
},
{
"text": "Textual Entailment. For textual entailment experiments, we use the Decomposable Attention model by Parikh et al. (2016) for our experiments. We train and evaluate the models on the Stanford Natural Language Inference (SNLI) dataset (Bowman et al., 2015) using the standard train, test and validation split.",
"cite_spans": [
{
"start": 99,
"end": 119,
"text": "Parikh et al. (2016)",
"ref_id": "BIBREF45"
},
{
"start": 232,
"end": 253,
"text": "(Bowman et al., 2015)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Extrinsic Tasks",
"sec_num": "6.2"
},
{
"text": "Question Answering. We use the SQUAD1.1 question answering dataset (Rajpurkar et al., 2016) . The dataset contains 100k+ crowd-sourced question answer pairs. We use the BiDAF model (Seo et al., 2016 ) for the question answering task. We report the accuracy on the development set for SQuAD.",
"cite_spans": [
{
"start": 67,
"end": 91,
"text": "(Rajpurkar et al., 2016)",
"ref_id": "BIBREF49"
},
{
"start": 181,
"end": 198,
"text": "(Seo et al., 2016",
"ref_id": "BIBREF54"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Extrinsic Tasks",
"sec_num": "6.2"
},
{
"text": "Paraphrase Detection. For the paraphrase detection task, we use the BIMPM model by Wang et al. (2017) for our experiments. We train and evaluate the models on the Quora Question Pairs (QQP) dataset 3 using the standard splits.",
"cite_spans": [
{
"start": 83,
"end": 101,
"text": "Wang et al. (2017)",
"ref_id": "BIBREF67"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Extrinsic Tasks",
"sec_num": "6.2"
},
{
"text": "Method For the above models, we use the reference implementations of the models provided by the AllenNLP toolkit . We replace the input layer of these models with the embeddings we want to evaluate. We use two different setups for our extrinsic experiments and report results for both.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extrinsic Tasks",
"sec_num": "6.2"
},
{
"text": "Setup 1: In our first setup, we standardize several representational and training decisions to remove potential confounding effects. This ensures that performance differences in the extrinsic tasks are reflective of the quality of the embeddings under evaluation. We achieve this by making the following changes to all extrinsic task models. First, for the Vanilla models, we use pretrained GloVe embeddings of 300 dimensions, trained on 6 billion tokens. Similarly, we train all post-hoc embeddings using the 6 billion token 300-dimensional pretrained GloVe embeddings and plug these post-hoc embeddings into the extrinsic task model. Second, we remove character embeddings from the input layer. Finally, we do not fine-tune the pretrained embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extrinsic Tasks",
"sec_num": "6.2"
},
{
"text": "Setup 2: In order to demonstrate that we are not unfairly penalizing the base models, we also conduct a second set of experiments where models for all the extrinsic tasks are trained in the original settings (i.e., without the changes mentioned above). In these experiments, we do not remove character embeddings from any model, nor do we put any restrictions on fine-tuning of the pretrained word embeddings. These results for both the experiments are reported in Table 4 .",
"cite_spans": [],
"ref_spans": [
{
"start": 465,
"end": 472,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Extrinsic Tasks",
"sec_num": "6.2"
},
{
"text": "We now report on the results of our comparisons of LEXSUB to Vanilla embeddings and baselines trained on the same lexical resource as LEXSUB. We use the main vector space embeddings in all our experiments except for hypernymy experiments, for which we use the hypernymy space embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "7"
},
{
"text": "Intrinsic Evaluations. Table 2 shows that our model outperforms the Vanilla baseline on both relatedness and similarity tasks, outperforms all the other baselines on relatedness, and is competitive with the other baselines on all the word similarity tasks. Table 3 demonstrates that we considerably outperform Vanilla as well as other baseline post-hoc methods on hypernymy tasks. Thus, our subspace-based approach can learn lexical-semantic relations and can perform as well or better than the approaches that enforce lexical constraints directly on the distributional space.",
"cite_spans": [],
"ref_spans": [
{
"start": 23,
"end": 30,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 257,
"end": 264,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "7"
},
{
"text": "Another important result from on relatedness tasks like men3k and WS-353R. We hypothesize that enforcing symmetricrepel (Counterfitting) and asymmetric-attract (Counterfitting and LEAR) constraints directly on the distributional space leads to distortion of the distributional vector space, resulting in poor performance on relatedness tasks. LEXSUB performs competitively on similarity tasks without sacrificing its performance in relatedness tasks, unlike contemporary methods that sacrifice relatedness by optimizing for similarity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "7"
},
{
"text": "Extrinsic Evaluations. Table 4 presents the results of the extrinsic evaluations. Rows 3-7 present the results for first setup-that is, experiments without confounds (Setup 1) such as character embeddings and further fine-tuning of the input embeddings. The results for the models trained with the original setting (Setup 2) are presented in rows 9-14. In the original setting, the model for QQP, SQuAD, and NER contains additional trainable character embeddings in the input layer. The original NER model further fine-tunes the input embeddings. In our first set of experiments, we find that the LEXSUB model outperforms the baseline methods on every extrinsic task and Vanilla on every extrinsic task except SNLI. In the case of our second experiment, LEXSUB outperforms previous post-hoc methods in all extrinsic tasks but does worse than GloVe in NER. We hypothesize the relatively poor performance of LEXSUB with respect to GloVe on NER might be due to the task-specific fine-tuning of the embeddings.",
"cite_spans": [],
"ref_spans": [
{
"start": 23,
"end": 30,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "7"
},
{
"text": "In fact, we find that the baseline approaches, with a few exceptions, do worse than Vanilla across the whole suite of extrinsic tasks in both the settings. Taken together, this indicates that our subspace-based approach is superior if the objective is to use these modified embeddings in downstream tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "7"
},
{
"text": "We hypothesize that these results are indicative of the fact that the preservation of distributional information is crucial to the downstream performance of the embeddings. The baseline approaches, Table 4 : Extrinsic evaluation results for baselines and LEXSUB. Setup 1 refers to the experiments without extrinsic model confounds such as character embeddings and further fine-tuning of the input embeddings. Setup 2 refers to the experiments in the original AllenNLP setting where the model for QQP, SQuAD, and, NER contains additional trainable character embeddings in the input layer, and the original NER model further fine-tunes the input embeddings. In both the setups, we see that LEXSUB outperforms the baselines on most of the extrinsic tasks. We hypothesize the relatively poor performance of LEXSUB compared to Vanilla on NER might be due to the task-specific fine-tuning of the embeddings.",
"cite_spans": [],
"ref_spans": [
{
"start": 198,
"end": 205,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "7"
},
{
"text": "which learn the lexical-semantic relations in the original distributional space, disrupt the distributional information, leading to poor extrinsic task performance. We expand on this point in Section 8.3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "7"
},
{
"text": "State-of-the-Art Results in Extrinsic Tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "7"
},
{
"text": "We have also added the current state-of-theart results for the respective extrinsic tasks in Table 4 (last row). The current state of the art for NER is Baevski et al. (2019) . The authors also use the model proposed by but initialize the model with contextualized embeddings from a bi-directional transformer. Similarly, the current state of the art for SST-2 and QQP (ERNIE 2.0; Sun et al., 2019), SNLI (MT-DNN; , and SQuAD (XLNet; Yang et al., 2019) are all initialized with contextualized embeddings from a bidirectional transformerbased model trained on a data that is orders of magnitude larger than the GloVe variant used in our experiments. The contextualized embeddings, because of their ability to represent the word in the context of its usage, are considerably more powerful than GloVe, hence the models relying on them are not directly comparable to our model or the other baselines.",
"cite_spans": [
{
"start": 153,
"end": 174,
"text": "Baevski et al. (2019)",
"ref_id": "BIBREF2"
},
{
"start": 434,
"end": 452,
"text": "Yang et al., 2019)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 93,
"end": 100,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "7"
},
{
"text": "In this section, we perform several analyses to understand the behaviors of our model and the baselines better, focusing on the following questions: Q1: How well do LEXSUB's lexical subspaces capture the specific lexical relations for which they were optimized, as opposed to the other relations? Q2: Can the lexical subspaces and the manifolds in the main distributional space be exploited by a downstream neural network model? Q3: How well do the models preserve relatedness in the main distributional space?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "8"
},
{
"text": "8.1 LEXSUB Subspace Neighborhoods (Q1) Table 6 : MAP@100 scores for query words taken from Hyperlex and Simlex999.",
"cite_spans": [],
"ref_spans": [
{
"start": 39,
"end": 46,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "8"
},
{
"text": "To systematically quantify these results, we compute the mean average precision (MAP) over the top 100 neighbors for a list of query words. We use the words from the Hyperlex and Simlex (Hill et al., 2015) datasets as the query words for this experiment. For each query word and for each lexical relation, we obtain a list of words from WordNet which are related to the query word through that particular lexical relation. These words form the gold-standard labels for computing the average precision for the query word. Table 6 shows the MAP scores for the top 100 neighborhood words for the baselines, for LEXSUB, and for its lexical subspaces. The main vector space subspace does worse than all the baselines, which is expected because the baselines learn to fit their lexical relations in the original distributional space. However, if we look at the individual lexical subspaces, we can see that the synonymy, hypernymy, and meronymy subspaces have the best MAP score for their respective relation, demonstrating the separation of concerns property that motivated our approach.",
"cite_spans": [
{
"start": 186,
"end": 205,
"text": "(Hill et al., 2015)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [
{
"start": 521,
"end": 528,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "8"
},
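A sketch of how the MAP@100 neighborhood analysis can be computed, assuming ranked neighbor lists (by cosine similarity in the space under evaluation) and WordNet-derived gold sets per query word; function names are illustrative.

```python
import numpy as np

def average_precision_at_k(ranked_neighbors, gold_set, k=100):
    # Precision at each hit, averaged over the gold items found in the top k neighbors
    hits, precisions = 0, []
    for rank, word in enumerate(ranked_neighbors[:k], start=1):
        if word in gold_set:
            hits += 1
            precisions.append(hits / rank)
    return float(np.mean(precisions)) if precisions else 0.0

def map_at_100(neighbors_by_query, gold_by_query):
    # Mean over query words of the average precision of their top-100 neighbors
    return float(np.mean([average_precision_at_k(neighbors_by_query[q], gold_by_query[q])
                          for q in gold_by_query]))
```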
{
"text": "One of the motivations behind enforcing explicit lexical constraints on the distributional space is to learn lexico-relational manifolds within the distributional vector space. On any such lexicorelational manifold, the respective lexical relation will hold. For example, on a synonymy manifold, all the synonyms of a word would be clustered together and the antonyms would be maximally distant. The deep learning based models then will be able to exploit these lexico-relational manifolds to improve generalization on the downstream tasks. To evaluate this hypothesis, we propose a simplified classification setup of predicting the lexical relation between a given word pair. If a downstream model is able to detect these manifolds, it should be able to generalize beyond the word pairs seen in the training set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Relation Prediction Task (Q2)",
"sec_num": "8.2"
},
{
"text": "Lexical Relation Prediction Dataset. The lexical relation prediction dataset is composed of word pairs as input and their lexical relation as the target. The problem is posed as a four-way classification problem between the relations synonymy, antonymy, hypernymy, and meronomy. The dataset is collected from WordNet and has a total of 606,160 word pairs and labels split in 80/20 ratio into training and validation. The training set contains 192,045 synonyms, 9,733 antonyms, 257,844 hypernyms, and 25,308 meronyms. Similarly, the validation set by relation split is 96,022 synonyms, 4,866 antonyms, 128,920 hypernyms, and 12,652 meronyms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Relation Prediction Task (Q2)",
"sec_num": "8.2"
},
{
"text": "We use the word pairs with lexical relation labels from the Hyperlex as our test set. We only consider synonymy, Table 7 : Macro-averaged F1 across four lexical relation classes, namely, synonymy, antonymy, hypernymy, and meronymy, for lexical relation prediction task.",
"cite_spans": [],
"ref_spans": [
{
"start": 113,
"end": 120,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Lexical Relation Prediction Task (Q2)",
"sec_num": "8.2"
},
{
"text": "antonymy, meronomy, and degree-1 hypernymy relations from the Hyperlex as these directly map to our training labels. We remove all the word pairs that occur in the training set. This leads to 917 examples with 194 synonym, 98 antonym, 384 hypernym, and 241 meronym pairs. 4",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Relation Prediction Task (Q2)",
"sec_num": "8.2"
},
{
"text": "Lexical Relation Prediction Model. We use a Siamese Network for the relation classification task. The input to the model is a one-hot encoded word pair, which is fed into the embedding layer. This embedding layer is initialized with the embedding that is to be evaluated and is not fine-tuned during training. This is followed by a 1,500-dimensional affine hidden layer with a ReLU activation function that is shared by both word embeddings. This shared non-linear layer is expected to learn a mapping from the distributional vector space to lexico-relational manifolds within the distributional vector space. The shared layer is followed by two different sets of two-dimensional 125 \u00d7 4 affine layers, one for each word. These linear layers are put in place to capture the various idiosyncrasies of lexical relations such as asymmetry and attract and repel nature. Finally, the cosine similarity of the hidden representation corresponding to two words is fed into the softmax layer to map the output to probabilities. The models are trained for 30 epochs using the Adagrad (Duchi et al., 2011) optimizer with an initial learning rate of 0.01 and a gradient clipping ratio of 5.0. Table 7 shows the results of our lexical relation prediction experiments. All the post-hoc models except for retrofitting can exploit the 4 The Lexical Relation Prediction Dataset can be downloaded from https://github.com/aishikchakraborty/ LexSub. lexical relation manifold to classify word pairs by their lexical relation. The LEXSUB model again outperforms all the baseline models in the task. We hypothesize that this is because LEXSUB learns the lexical relations in a linear subspace which happens to be the simplest possible manifold. Hence, it might be easier for downstream models to exploit it for better generalization.",
"cite_spans": [
{
"start": 1074,
"end": 1094,
"text": "(Duchi et al., 2011)",
"ref_id": "BIBREF13"
},
{
"start": 1319,
"end": 1320,
"text": "4",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1181,
"end": 1188,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Lexical Relation Prediction Task (Q2)",
"sec_num": "8.2"
},
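For concreteness, the architecture described in this paragraph can be sketched in PyTorch as below. This is only our reading of the description (a frozen embedding layer, a shared 1,500-dimensional ReLU layer, separate 125 × 4 linear heads per word, and a cosine-similarity softmax over the four relations), not the authors' released implementation; in particular, how the 125 × 4 heads are arranged is an assumption.

```python
# Minimal PyTorch sketch of the Siamese relation classifier described above
# (our reading of the text, not the authors' released code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseRelationClassifier(nn.Module):
    def __init__(self, pretrained_emb, hidden_dim=1500, head_dim=125, n_relations=4):
        super().__init__()
        # Embedding layer initialized with the embeddings under evaluation and frozen.
        self.emb = nn.Embedding.from_pretrained(pretrained_emb, freeze=True)
        # Shared non-linear layer mapping into the lexico-relational manifold.
        self.shared = nn.Sequential(nn.Linear(pretrained_emb.size(1), hidden_dim), nn.ReLU())
        # Separate linear heads per word, one head_dim-sized block per relation,
        # intended to capture asymmetry and attract/repel behaviour (assumed layout).
        self.head1 = nn.Linear(hidden_dim, head_dim * n_relations)
        self.head2 = nn.Linear(hidden_dim, head_dim * n_relations)
        self.n_relations, self.head_dim = n_relations, head_dim

    def forward(self, w1, w2):
        h1 = self.shared(self.emb(w1))
        h2 = self.shared(self.emb(w2))
        r1 = self.head1(h1).view(-1, self.n_relations, self.head_dim)
        r2 = self.head2(h2).view(-1, self.n_relations, self.head_dim)
        # Cosine similarity per relation, turned into class log-probabilities.
        logits = F.cosine_similarity(r1, r2, dim=-1)
        return F.log_softmax(logits, dim=-1)

# Training setup matching the paragraph: Adagrad, lr 0.01, gradient clipping at 5.0.
# model = SiameseRelationClassifier(emb_matrix)
# opt = torch.optim.Adagrad(model.parameters(), lr=0.01)
# loss = F.nll_loss(model(w1_batch, w2_batch), labels); loss.backward()
# torch.nn.utils.clip_grad_norm_(model.parameters(), 5.0); opt.step()
```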
{
"text": "As previously discussed, one of the main motivations of LEXSUB is to separate the learning of lexical relations into subspaces, so that the main distributional vector space is not deformed to as great a degree. We directly measure this deformation by computing the mean shift in the learned embedding space. We define the mean shift as the average L2-distance between the learned and the Vanilla embeddings. We find that the mean shift for LEXSUB is about 30 times lower than the baselines (Table 8 ). This shows that LEXSUB better preserves the original distributional space, which may explain its better performance in intrinsic relatedness evaluations and extrinsic evaluations.",
"cite_spans": [],
"ref_spans": [
{
"start": 490,
"end": 498,
"text": "(Table 8",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "Preserving the Distributional Space (Q3)",
"sec_num": "8.3"
},
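The mean-shift metric itself is a one-liner; the following snippet simply illustrates the definition given above (average L2 distance between row-aligned learned and Vanilla embedding matrices) on random data, with all names our own.

```python
# Mean shift: average L2 distance between learned and Vanilla embeddings
# (illustration of the metric defined above; variable names are ours).
import numpy as np

def mean_shift(learned, vanilla):
    """learned, vanilla: (vocab_size, dim) arrays with rows aligned by word."""
    return float(np.linalg.norm(learned - vanilla, axis=1).mean())

rng = np.random.default_rng(0)
vanilla = rng.normal(size=(1000, 300))
learned = vanilla + 0.01 * rng.normal(size=(1000, 300))  # a small perturbation
print(mean_shift(learned, vanilla))
```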
{
"text": "We presented LEXSUB, a novel framework for learning lexical subspaces in a distributional vector space. The proposed approach properly separates various lexical relations from the main distributional space, which leads to improved downstream task performance, interpretable learned subspaces, and preservation of distributional information in the distributional space.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "9"
},
{
"text": "In future work, we plan to extend our framework to contextualized embeddings and expand the framework to support hyperbolic distances, which The Ad-hoc Distributional Space. Given a set of tokens in a corpus C = (w 1 , w 2 , . . . , w t ), we minimize the negative log likelihood function:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "9"
},
{
"text": "L adhoc dist = \u2212 k i=1 log P (w i |w i\u2212k , \u2022 \u2022 \u2022 , w i\u22121 ; \u03b8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "9"
},
{
"text": "where k is the size of the sequence under consideration, and the conditional probability P is modeled using a neural language model with \u03b8 parameters which includes the embedding matrix",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "9"
},
{
"text": "X \u2032 = [x \u2032 1 , \u2022 \u2022 \u2022 , x \u2032 n ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "9"
},
{
"text": "T . Ad-hoc LEXSUB Loss. The total loss in case of ad-hoc LEXSUB is thus:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "9"
},
{
"text": "L total = L adhoc dist + L lex , where L lex is defined by equation 10.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "9"
},
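The composition of the ad-hoc objective can be written down directly. The sketch below is illustrative only: lm_model stands for any network producing next-token logits, and lexical_loss is a placeholder for the L_lex of Equation 10, which is defined earlier in the paper and not reproduced in this appendix excerpt.

```python
# Sketch of the ad-hoc objective L_total = L^{adhoc}_{dist} + L_lex.
# lm_model and lexical_loss are placeholders, not the authors' code.
import torch.nn.functional as F

def adhoc_total_loss(lm_model, lexical_loss, context, targets, lexicon_batch):
    # L^{adhoc}_{dist}: mean negative log-likelihood of each target token
    # given its preceding context, as in the equation above.
    logits = lm_model(context)                # (batch, vocab_size)
    l_dist = F.cross_entropy(logits, targets)
    # L_lex: the lexical subspace term of Equation 10, computed over a batch
    # of lexical constraints on the embedding matrix X' learned from scratch.
    l_lex = lexical_loss(lexicon_batch)
    return l_dist + l_lex
```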
{
"text": "Training Dataset. The ad-hoc model is trained on the Wikitext-103 dataset . We preprocess the data by lowercasing all the tokens in the dataset across the splits, and limiting the vocabulary to top 100k words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "9"
},
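The preprocessing boils down to lowercasing and capping the vocabulary at the 100k most frequent training tokens. A minimal sketch follows; the file names and the <unk> replacement convention are assumptions rather than details taken from the paper.

```python
# Sketch of the preprocessing described above: lowercase everything and keep the
# 100k most frequent training tokens (paths and the <unk> token are assumptions).
from collections import Counter

def build_vocab(train_path, max_size=100_000):
    counts = Counter()
    with open(train_path, encoding="utf-8") as f:
        for line in f:
            counts.update(line.lower().split())
    return {w for w, _ in counts.most_common(max_size)}

def preprocess(path, vocab):
    with open(path, encoding="utf-8") as f:
        for line in f:
            yield [w if w in vocab else "<unk>" for w in line.lower().split()]

vocab = build_vocab("wikitext-103/wiki.train.tokens")
train_tokens = list(preprocess("wikitext-103/wiki.train.tokens", vocab))
```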
{
"text": "Ad-Hoc LEXSUB Model. The distributional component of our ad-hoc model is a two-layer QRNN-based language model with a 300-dimensional embedding layer and a 1,200-dimensional hidden layer. The batchsize, BPTT length, and dropout ratio values for : Intrinsic and extrinsic experiment results for baselines and LEXSUB trained with lexical resource from LEAR. We observe a similar trend in the intrinsic and the extrinsic evaluation as to when the models were trained on lexical resources from Section 4.2. This indicates that the LEXSUB stronger performance is due to our novel subspace-based formulation rather than its ability to better exploit a specific lexical resource.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "9"
},
{
"text": "our model are 30, 140, and 0.1 respectively. We train our model for 10 epochs using the Adam (Kingma and Ba, 2014) optimizer with an initial learning rate of 0.001, which is reduced during training by a factor of 10 in epochs 3, 6, and 7. We use the same set of hyperparameters that were used for the post-hoc experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "9"
},
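The optimization schedule in this paragraph (Adam at 0.001, divided by 10 at epochs 3, 6, and 7) corresponds to a standard milestone learning-rate scheduler. The sketch below shows one way to express it in PyTorch; the stand-in model and the loop body are ours, not the authors' training code.

```python
# One way to express the schedule above in PyTorch (model is a placeholder):
# Adam with lr 0.001, reduced by a factor of 10 at epochs 3, 6, and 7.
import torch

model = torch.nn.Linear(300, 300)  # stand-in for the QRNN language model
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[3, 6, 7], gamma=0.1)

for epoch in range(10):
    # ... one training epoch over WikiText-103 would go here ...
    optimizer.step()      # normally called once per batch
    scheduler.step()      # advance the schedule once per epoch
    print(epoch, scheduler.get_last_lr())
```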
{
"text": "Results Table 9c presents the extrinsic evaluations of the ad-hoc LEXSUB model. Vanilla, in this case, refers to embeddings from the language model trained on Wikitext-103 without any lexical constraints. We observe that ad-hoc LEXSUB outperforms Vanilla on all extrinsic tasks, demonstrating that learning lexical relations in subspaces is also helpful in the ad-hoc setting.",
"cite_spans": [],
"ref_spans": [
{
"start": 8,
"end": 16,
"text": "Table 9c",
"ref_id": "TABREF12"
}
],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "9"
},
{
"text": "We observe similar gains for ad-hoc LEXSUB on intrinsic evaluation in Table 9a and 9b.",
"cite_spans": [],
"ref_spans": [
{
"start": 70,
"end": 78,
"text": "Table 9a",
"ref_id": "TABREF12"
}
],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "9"
},
{
"text": "Appendix B: Experiments with Lexical Resource from In Section 7, we discussed the performance of LEXSUB and the baselines trained on the lexical resource presented in Section 4.2. In this section, we repeat the same set of experiments but with the LEXSUB and the baselines trained on lexical resource from LEAR, our strongest competitor. The objective of these experiments is to ascertain that the LEXSUB's competitive advantage is due to our novel subspace-based formulation rather than its ability to better exploit the lexical resource discussed in Section 4.2. The hyperparameters used to train the models is the same as Section 4.3. For baselines, we use the hyperparameters reported in the respective papers. We observe a similar trend in intrinsic and extrinsic evaluation. LEXSUB outperforms all the baselines on relatedness (Table 10a) , hypernymy intrinsic tasks (Table 10b) , and all the extrinsic tasks (Table 10c ). We again observe that LEAR and Counterfitting perform poorly in the relatedness tasks. We suspect the poor relatedness score of LEAR and Counterfitting is because these models distort the original distributional space. ",
"cite_spans": [],
"ref_spans": [
{
"start": 833,
"end": 844,
"text": "(Table 10a)",
"ref_id": "TABREF2"
},
{
"start": 873,
"end": 884,
"text": "(Table 10b)",
"ref_id": "TABREF2"
},
{
"start": 915,
"end": 925,
"text": "(Table 10c",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "9"
},
{
"text": "https://github.com/facebookresearch/ hypernymysuite.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://www.kaggle.com/c/quora-questionpairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://www.calculquebec.ca. 6 https://www.computecanada.ca.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank the reviewers for their valuable comments. This work is supported by funding from Samsung Electronics. The last author is supported by the Canada CIFAR AI Chair program. This research was enabled in part by support provided by Calcul Qu\u00e9bec, 5 and Compute Canada. 6 We would also like to thank Prof. Timothy O'Donnell, Ali Emami, and Jad Kabbara for their valuable input.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "Appendix A: Ad-hoc LEXSUB In this section, we show how LEXSUB can be extended to the ad-hoc setting. We achieve this by substituting the GloVe reconstruction loss from Section 3.3 with a language modeling objective that enables us to learn the embedding matrix X \u2032 from scratch.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "annex",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A study on similarity and relatedness using distributional and WordNet-based approaches",
"authors": [
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Enrique",
"middle": [],
"last": "Alfonseca",
"suffix": ""
},
{
"first": "Keith",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Jana",
"middle": [],
"last": "Kravalova",
"suffix": ""
},
{
"first": "Marius",
"middle": [],
"last": "Pa\u015fca",
"suffix": ""
},
{
"first": "Aitor",
"middle": [],
"last": "Soroa",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics on -NAACL '09",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eneko Agirre, Enrique Alfonseca, Keith Hall, Jana Kravalova, Marius Pa\u015fca, and Aitor Soroa. 2009. A study on similarity and relatedness using distributional and WordNet-based ap- proaches. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics on -NAACL '09, page 19, Boulder, Colorado. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Multimodal Word Distributions",
"authors": [
{
"first": "Ben",
"middle": [],
"last": "Athiwaratkun",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Wilson",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1645--1656",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ben Athiwaratkun and Andrew Wilson. 2017. Multimodal Word Distributions. In Proceed- ings of the 55th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers), pages 1645-1656.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Cloze-driven Pretraining of Self-attention Networks",
"authors": [
{
"first": "Alexei",
"middle": [],
"last": "Baevski",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "5360--5369",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexei Baevski, Sergey Edunov, Yinhan Liu, Luke Zettlemoyer, and Michael Auli. 2019. Cloze-driven Pretraining of Self-attention Net- works. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Process- ing (EMNLP-IJCNLP), pages 5360-5369, Hong Kong, China. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The Berkeley FrameNet Project",
"authors": [
{
"first": "Collin",
"middle": [
"F"
],
"last": "Baker",
"suffix": ""
},
{
"first": "Charles",
"middle": [
"J"
],
"last": "Fillmore",
"suffix": ""
},
{
"first": "John",
"middle": [
"B"
],
"last": "Lowe",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 17th International Conference on Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "86--90",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The Berkeley FrameNet Project. In Proceedings of the 17th International Con- ference on Computational Linguistics -Vol- ume 1, COLING '98, pages 86-90, Montreal, Quebec, Canada. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Entailment above the word level in distributional semantics",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Raffaella",
"middle": [],
"last": "Bernardi",
"suffix": ""
},
{
"first": "Ngoc-Quynh",
"middle": [],
"last": "Do",
"suffix": ""
},
{
"first": "Chung-Chieh",
"middle": [],
"last": "Shan",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "23--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Baroni, Raffaella Bernardi, Ngoc-Quynh Do, and Chung-chieh Shan. 2012. Entailment above the word level in distributional semantics. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 23-32.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "How We BLESSed Distributional Semantic Evaluation",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Lenci",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the GEMS 2011 Workshop on GEometrical Models of Natural Language Semantics, GEMS '11",
"volume": "",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Baroni and Alessandro Lenci. 2011. How We BLESSed Distributional Semantic Eval- uation. In Proceedings of the GEMS 2011 Workshop on GEometrical Models of Natural Language Semantics, GEMS '11, pages 1-10, Edinburgh, Scotland. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Knowledge-powered Deep Learning for Word Embedding",
"authors": [
{
"first": "Jiang",
"middle": [],
"last": "Bian",
"suffix": ""
},
{
"first": "Bin",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Tie-Yan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014th European Conference on Machine Learning and Knowledge Discovery in Databases -Volume Part I, ECMLPKDD'14",
"volume": "",
"issue": "",
"pages": "132--148",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiang Bian, Bin Gao, and Tie-Yan Liu. 2014. Knowledge-powered Deep Learning for Word Embedding. In Proceedings of the 2014th European Conference on Machine Learning and Knowledge Discovery in Databases -Vol- ume Part I, ECMLPKDD'14, pages 132-148, Nancy, France. Springer-Verlag.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Enriching Word Vectors with Subword Information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "135--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching Word Vectors with Subword Information. Transac- tions of the Association for Computational Linguistics, 5:135-146.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Joint Word Representation Learning Using a Corpus and a Semantic Lexicon",
"authors": [
{
"first": "Danushka",
"middle": [],
"last": "Bollegala",
"suffix": ""
},
{
"first": "Alsuhaibani",
"middle": [],
"last": "Mohammed",
"suffix": ""
},
{
"first": "Takanori",
"middle": [],
"last": "Maehara",
"suffix": ""
},
{
"first": "Ken-Ichi",
"middle": [],
"last": "Kawarabayashi",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, AAAI'16",
"volume": "",
"issue": "",
"pages": "2690--2696",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Danushka Bollegala, Alsuhaibani Mohammed, Takanori Maehara, and Ken-ichi Kawarabayashi. 2016. Joint Word Representation Learning Using a Corpus and a Semantic Lexicon. In Proceedings of the Thirtieth AAAI Confer- ence on Artificial Intelligence, AAAI'16, pages 2690-2696, Phoenix, Arizona. AAAI Press.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A large annotated corpus for learning natural language inference",
"authors": [
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
},
{
"first": "Gabor",
"middle": [],
"last": "Angeli",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Potts",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Conference Proceedings -EMNLP 2015: Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "632--642",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samuel Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural lan- guage inference. In Conference Proceedings - EMNLP 2015: Conference on Empirical Methods in Natural Language Processing, pages 632-642. Association for Computational Linguistics (ACL).",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Quasi-Recurrent Neural Networks",
"authors": [
{
"first": "James",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Merity",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1611.01576"
]
},
"num": null,
"urls": [],
"raw_text": "James Bradbury, Stephen Merity, Caiming Xiong, and Richard Socher. 2016. Quasi-Recurrent Neural Networks. arXiv:1611.01576 [cs].",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Distributional Inclusion Vector Embedding for Unsupervised Hypernymy Detection",
"authors": [
{
"first": "",
"middle": [],
"last": "Haw-Shiuan",
"suffix": ""
},
{
"first": "Ziyun",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Vilnis",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "485--495",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haw-Shiuan Chang, Ziyun Wang, Luke Vilnis, and Andrew McCallum. 2018. Distributional Inclusion Vector Embedding for Unsupervised Hypernymy Detection. In Proceedings of the 2018 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 485-495, New Orleans, Louisiana. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Adaptive Subgradient Methods for Online Learning and Stochastic Optimization",
"authors": [
{
"first": "John",
"middle": [],
"last": "Duchi",
"suffix": ""
},
{
"first": "Elad",
"middle": [],
"last": "Hazan",
"suffix": ""
},
{
"first": "Yoram",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2121--2159",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive Subgradient Methods for Online Learning and Stochastic Optimization. Jour- nal of Machine Learning Research, 12(Jul): 2121-2159.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Retrofitting Word Vectors to Semantic Lexicons",
"authors": [
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "Jesse",
"middle": [],
"last": "Dodge",
"suffix": ""
},
{
"first": "Sujay",
"middle": [],
"last": "Kumar Jauhar",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1606--1615",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manaal Faruqui, Jesse Dodge, Sujay Kumar Jauhar, Chris Dyer, Eduard Hovy, and Noah A. Smith. 2015. Retrofitting Word Vectors to Semantic Lexicons. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, pages 1606-1615.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A synopsis of linguistic theory, 1930-1955",
"authors": [
{
"first": "J",
"middle": [
"R"
],
"last": "Firth",
"suffix": ""
}
],
"year": 1957,
"venue": "Studies in Linguistic Analysis",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. R. Firth. 1957. A synopsis of linguistic theory, 1930-1955. Studies in Linguistic Analysis.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Incorporating Both Distributional and Relational Semantics in Word Representations",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Fried",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.4369"
]
},
"num": null,
"urls": [],
"raw_text": "Daniel Fried and Kevin Duh. 2014. Incorporating Both Distributional and Relational Semantics in Word Representations. arXiv:1412.4369 [cs].",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "PPDB: The Paraphrase Database",
"authors": [
{
"first": "Juri",
"middle": [],
"last": "Ganitkevitch",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "758--764",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. PPDB: The Para- phrase Database. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 758-764. Atlanta, Georgia. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "AllenNLP: A Deep Semantic Natural Language Processing Platform",
"authors": [
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Joel",
"middle": [],
"last": "Grus",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Oyvind",
"middle": [],
"last": "Tafjord",
"suffix": ""
},
{
"first": "Pradeep",
"middle": [],
"last": "Dasigi",
"suffix": ""
},
{
"first": "Nelson",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Schmitz",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1803.07640"
]
},
"num": null,
"urls": [],
"raw_text": "Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson Liu, Matthew Peters, Michael Schmitz, and Luke Zettlemoyer. 2018. AllenNLP: A Deep Semantic Natural Language Processing Platform. arXiv:1803. 07640 [cs].",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "SimVerb-3500: A Large-Scale Evaluation Set of Verb Similarity",
"authors": [
{
"first": "Daniela",
"middle": [],
"last": "Gerz",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2173--2182",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniela Gerz, Ivan Vuli\u0107, Felix Hill, Roi Reichart, and Anna Korhonen. 2016. SimVerb-3500: A Large-Scale Evaluation Set of Verb Similarity. In Proceedings of the 2016 Conference on Em- pirical Methods in Natural Language Processing, pages 2173-2182, Austin, Texas. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Explicit Retrofitting of Distributional Word Vectors",
"authors": [
{
"first": "Goran",
"middle": [],
"last": "Glava\u0161",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "34--45",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Goran Glava\u0161 and Ivan Vuli\u0107. 2018. Explicit Retrofitting of Distributional Word Vectors. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 34-45.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "SimLex-999: Evaluating Semantic Models With (Genuine) Similarity Estimation",
"authors": [
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2015,
"venue": "Computational Linguistics",
"volume": "41",
"issue": "4",
"pages": "665--695",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Felix Hill, Roi Reichart, and Anna Korhonen. 2015. SimLex-999: Evaluating Semantic Models With (Genuine) Similarity Estimation. Compu- tational Linguistics, 41(4):665-695.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Ontologically Grounded Multisense Representation Learning for Semantic Vector Space Models",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Sujay Kumar Jauhar",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "683--693",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sujay Kumar Jauhar, Chris Dyer, and Eduard Hovy. 2015. Ontologically Grounded Multi- sense Representation Learning for Semantic Vector Space Models. In Proceedings of the 2015 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, pages 683-693, Denver, Colorado. Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Expansional Retrofitting for Word Vector Enrichment",
"authors": [
{
"first": "Hwiyeol",
"middle": [],
"last": "Jo",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1808.07337"
]
},
"num": null,
"urls": [],
"raw_text": "Hwiyeol Jo. 2018. Expansional Retrofitting for Word Vector Enrichment. arXiv:1808. 07337 [cs].",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Extrofitting: Enriching Word Representation and its Vector Space with Semantic Lexicons",
"authors": [
{
"first": "Hwiyeol",
"middle": [],
"last": "Jo",
"suffix": ""
},
{
"first": "Stanley Jungkyu",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of The Third Workshop on Representation Learning for NLP",
"volume": "",
"issue": "",
"pages": "24--29",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hwiyeol Jo and Stanley Jungkyu Choi. 2018. Extrofitting: Enriching Word Representation and its Vector Space with Semantic Lexicons. In Proceedings of The Third Workshop on Rep- resentation Learning for NLP, pages 24-29, Melbourne, Australia. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Specializing Word Embeddings for Similarity or Relatedness",
"authors": [
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2044--2048",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Douwe Kiela, Felix Hill, and Stephen Clark. 2015a. Specializing Word Embeddings for Sim- ilarity or Relatedness. In Proceedings of the 2015 Conference on Empirical Methods in Nat- ural Language Processing, pages 2044-2048, Lisbon, Portugal. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "ciation for Computational Linguistics, Diederik P. Kingma and Jimmy Ba",
"authors": [
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Laura",
"middle": [],
"last": "Rimell",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "2",
"issue": "",
"pages": "119--124",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Douwe Kiela, Laura Rimell, Ivan Vuli\u0107, and Stephen Clark. 2015b. Exploiting Image Gen- erality for Lexical Entailment Detection. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Nat- ural Language Processing (Volume 2: Short Papers), pages 119-124, Beijing, China, Asso- ciation for Computational Linguistics, Diederik P. Kingma and Jimmy Ba. 2014. Adam: A Method for Stochastic Optimization. arXiv:1412.6980 [cs].",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Identifying Synonyms Among Distributionally Similar Words",
"authors": [
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Shaojun",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Lijuan",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 18th International Joint Conference on Artificial Intelligence, IJCAI'03",
"volume": "",
"issue": "",
"pages": "1492--1493",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dekang Lin, Shaojun Zhao, Lijuan Qin, and Ming Zhou. 2003. Identifying Synonyms Among Distributionally Similar Words. In Proceed- ings of the 18th International Joint Con- ference on Artificial Intelligence, IJCAI'03, pages 1492-1493. Acapulco, Mexico. Morgan Kaufmann Publishers Inc..",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Learning Semantic Word Embeddings based on Ordinal Knowledge Constraints",
"authors": [
{
"first": "Quan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Hui",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Si",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Zhen-Hua",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Hu",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "1501--1511",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Quan Liu, Hui Jiang, Si Wei, Zhen-Hua Ling, and Yu Hu. 2015. Learning Semantic Word Embeddings based on Ordinal Knowledge Constraints. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1501-1511.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Multi-Task Deep Neural Networks for Natural Language Understanding",
"authors": [
{
"first": "Xiaodong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Pengcheng",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Weizhu",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4487--4496",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. 2019. Multi-Task Deep Neural Networks for Natural Language Understanding. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4487-4496, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Learned in Translation: Contextualized Word Vectors",
"authors": [
{
"first": "Bryan",
"middle": [],
"last": "Mccann",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "6294--6305",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. 2017. Learned in Trans- lation: Contextualized Word Vectors. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 6294-6305. Curran Asso- ciates, Inc.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Pointer Sentinel Mixture Models",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Merity",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1609.07843"
]
},
"num": null,
"urls": [],
"raw_text": "Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2016. Pointer Sentinel Mixture Models. arXiv:1609.07843 [cs].",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Efficient Estimation of Word Representations in Vector Space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1301.3781"
]
},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient Estimation of Word Representations in Vector Space. arXiv:1301.3781 [cs].",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Recurrent neural network based language model",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Karafi\u00e1t",
"suffix": ""
},
{
"first": "Luk\u00e1s",
"middle": [],
"last": "Burget",
"suffix": ""
}
],
"year": 2010,
"venue": "Eleventh Annual Conference of the International Speech Communication Association",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Martin Karafi\u00e1t, Luk\u00e1s Burget, Jan Cernock\u00fd, and Sanjeev Khudanpur. 2010. Recurrent neural network based lan- guage model. In Eleventh Annual Conference of the International Speech Communication Association.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "WordNet: A Lexical Database for English",
"authors": [
{
"first": "A",
"middle": [],
"last": "George",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Miller",
"suffix": ""
}
],
"year": 1995,
"venue": "Communications of ACM",
"volume": "38",
"issue": "11",
"pages": "39--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George A. Miller. 1995. WordNet: A Lexical Database for English. Communications of ACM, 38(11):39-41.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Counter-fitting Word Vectors to Linguistic Constraints",
"authors": [
{
"first": "Nikola",
"middle": [],
"last": "Mrk\u0161i\u0107",
"suffix": ""
},
{
"first": "Diarmuid\u00f3",
"middle": [],
"last": "S\u00e9aghdha",
"suffix": ""
},
{
"first": "Blaise",
"middle": [],
"last": "Thomson",
"suffix": ""
},
{
"first": "Milica",
"middle": [],
"last": "Ga\u0161i\u0107",
"suffix": ""
},
{
"first": "Lina",
"middle": [
"M"
],
"last": "Rojas-Barahona",
"suffix": ""
},
{
"first": "Pei-Hao",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Vandyke",
"suffix": ""
},
{
"first": "Tsung-Hsien",
"middle": [],
"last": "Wen",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Young",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "142--148",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikola Mrk\u0161i\u0107, Diarmuid\u00d3 S\u00e9aghdha, Blaise Thomson, Milica Ga\u0161i\u0107, Lina M. Rojas-Barahona, Pei-Hao Su, David Vandyke, Tsung-Hsien Wen, and Steve Young. 2016. Counter-fitting Word Vectors to Linguistic Constraints. In Pro- ceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 142-148.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Semantic Specialization of Distributional Word Vector Spaces using Monolingual and Cross-Lingual Constraints",
"authors": [
{
"first": "Nikola",
"middle": [],
"last": "Mrk\u0161i\u0107",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Diarmuid\u00f3",
"middle": [],
"last": "S\u00e9aghdha",
"suffix": ""
},
{
"first": "Ira",
"middle": [],
"last": "Leviant",
"suffix": ""
},
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "Milica",
"middle": [],
"last": "Ga\u0161i\u0107",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Young",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "0",
"pages": "309--324",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikola Mrk\u0161i\u0107, Ivan Vuli\u0107, Diarmuid\u00d3 S\u00e9aghdha, Ira Leviant, Roi Reichart, Milica Ga\u0161i\u0107, Anna Korhonen, and Steve Young. 2017. Semantic Specialization of Distributional Word Vector Spaces using Monolingual and Cross-Lingual Constraints. Transactions of the Association for Computational Linguistics, 5(0):309-324.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "BabelNet: The automatic construction, evaluation and application of a wide-coverage multilingual semantic network",
"authors": [
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
},
{
"first": "Simone",
"middle": [
"Paolo"
],
"last": "Ponzetto",
"suffix": ""
}
],
"year": 2012,
"venue": "Artificial Intelligence",
"volume": "193",
"issue": "",
"pages": "217--250",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roberto Navigli and Simone Paolo Ponzetto. 2012. BabelNet: The automatic construction, evaluation and application of a wide-coverage multilingual semantic network. Artificial Intel- ligence, 193:217-250.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Hierarchical Embeddings for Hypernymy Detection and Directionality",
"authors": [
{
"first": "Maximilian",
"middle": [],
"last": "Kim Anh Nguyen",
"suffix": ""
},
{
"first": "Sabine",
"middle": [],
"last": "K\u00f6per",
"suffix": ""
},
{
"first": "Ngoc",
"middle": [
"Thang"
],
"last": "Schulte Im Walde",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Vu",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "233--243",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kim Anh Nguyen, Maximilian K\u00f6per, Sabine Schulte im Walde, and Ngoc Thang Vu. 2017. Hierarchical Embeddings for Hypernymy Detection and Directionality. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 233-243.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Integrating Distributional Lexical Contrast into Word Embeddings for Antonym-Synonym Distinction",
"authors": [
{
"first": "Sabine",
"middle": [],
"last": "Kim Anh Nguyen",
"suffix": ""
},
{
"first": "Ngoc",
"middle": [
"Thang"
],
"last": "Schulte Im Walde",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Vu",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "454--459",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kim Anh Nguyen, Sabine Schulte im Walde, and Ngoc Thang Vu. 2016. Integrating Distribu- tional Lexical Contrast into Word Embeddings for Antonym-Synonym Distinction. In Pro- ceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 454-459, Berlin, Germany. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Poincar\u00e9 Embeddings for Learning Hierarchical Representations",
"authors": [
{
"first": "Maximillian",
"middle": [],
"last": "Nickel And Douwe Kiela",
"suffix": ""
},
{
"first": ";",
"middle": [
"I"
],
"last": "Guyon",
"suffix": ""
},
{
"first": "U",
"middle": [
"V"
],
"last": "Luxburg",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Wallach",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Fergus",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Vishwanathan",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Garnett",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "6338--6347",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maximillian Nickel and Douwe Kiela. 2017. Poincar\u00e9 Embeddings for Learning Hierar- chical Representations. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 6338-6347. Curran Asso- ciates, Inc.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Word Embedding-based Antonym Detection using Thesauri and Distributional Information",
"authors": [
{
"first": "Masataka",
"middle": [],
"last": "Ono",
"suffix": ""
},
{
"first": "Makoto",
"middle": [],
"last": "Miwa",
"suffix": ""
},
{
"first": "Yutaka",
"middle": [],
"last": "Sasaki",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "984--989",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Masataka Ono, Makoto Miwa, and Yutaka Sasaki. 2015. Word Embedding-based Antonym Detection using Thesauri and Distributional Information. In Proceedings of the 2015 Con- ference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 984-989, Denver, Colorado. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Encoding Prior Knowledge with Eigenword Embeddings",
"authors": [
{
"first": "Dominique",
"middle": [],
"last": "Osborne",
"suffix": ""
},
{
"first": "Shashi",
"middle": [],
"last": "Narayan",
"suffix": ""
},
{
"first": "Shay",
"middle": [
"B"
],
"last": "Cohen",
"suffix": ""
}
],
"year": 2016,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "4",
"issue": "",
"pages": "417--430",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dominique Osborne, Shashi Narayan, and Shay B. Cohen. 2016. Encoding Prior Knowledge with Eigenword Embeddings. Transactions of the Association for Computational Linguistics, 4:417-430.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "A Decomposable Attention Model for Natural Language Inference",
"authors": [
{
"first": "Ankur",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Oscar",
"middle": [],
"last": "T\u00e4ckstr\u00f6m",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2249--2255",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ankur Parikh, Oscar T\u00e4ckstr\u00f6m, Dipanjan Das, and Jakob Uszkoreit. 2016. A Decomposable Attention Model for Natural Language Infer- ence. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2249-2255, Austin, Texas. Association for Computational Linguistics.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "PPDB 2.0: Better paraphrase ranking, fine-grained entailment relations, word embeddings, and style classification",
"authors": [
{
"first": "Ellie",
"middle": [],
"last": "Pavlick",
"suffix": ""
},
{
"first": "Pushpendre",
"middle": [],
"last": "Rastogi",
"suffix": ""
},
{
"first": "Juri",
"middle": [],
"last": "Ganitkevitch",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "2",
"issue": "",
"pages": "425--430",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ellie Pavlick, Pushpendre Rastogi, Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2015. PPDB 2.0: Better paraphrase ranking, fine-grained entailment relations, word embed- dings, and style classification. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th In- ternational Joint Conference on Natural Lan- guage Processing (Volume 2: Short Papers), pages 425-430, Beijing, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Glove: Global Vectors for Word Representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global Vectors for Word Representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Deep Contextualized Word Representations",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "2227--2237",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep Contex- tualized Word Representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237. New Orleans, Louisiana. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "SQuAD: 100,000+ Questions for Machine Comprehension of Text",
"authors": [
{
"first": "Pranav",
"middle": [],
"last": "Rajpurkar",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Konstantin",
"middle": [],
"last": "Lopyrev",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2383--2392",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ Questions for Machine Comprehen- sion of Text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383-2392.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Hearst Patterns Revisited: Automatic Hypernym Detection from Large Text Corpora",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Roller",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Maximilian",
"middle": [],
"last": "Nickel",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "358--363",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Roller, Douwe Kiela, and Maximilian Nickel. 2018. Hearst Patterns Revisited: Auto- matic Hypernym Detection from Large Text Corpora. In Proceedings of the 56th Annual Meeting of the Association for Computa- tional Linguistics (Volume 2: Short Papers), pages 358-363. Melbourne, Australia. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Ultradense Word Embeddings by Orthogonal Transformation",
"authors": [
{
"first": "Sascha",
"middle": [],
"last": "Rothe",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ebert",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1602.07572"
]
},
"num": null,
"urls": [],
"raw_text": "Sascha Rothe, Sebastian Ebert, and Hinrich Sch\u00fctze. 2016. Ultradense Word Embeddings by Orthogonal Transformation. arXiv:1602. 07572 [cs].",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "AutoExtend: Extending Word Embeddings to Embeddings for Synsets and Lexemes",
"authors": [
{
"first": "Sascha",
"middle": [],
"last": "Rothe",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "1793--1803",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sascha Rothe and Hinrich Sch\u00fctze. 2015. AutoExtend: Extending Word Embeddings to Embeddings for Synsets and Lexemes. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1793-1803, Beijing, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Chasing Hypernyms in Vector Spaces with Entropy",
"authors": [
{
"first": "Enrico",
"middle": [],
"last": "Santus",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Lenci",
"suffix": ""
},
{
"first": "Qin",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Sabine",
"middle": [],
"last": "Schulte Im Walde",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 14th Conference of the European Chapter",
"volume": "2",
"issue": "",
"pages": "38--42",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Enrico Santus, Alessandro Lenci, Qin Lu, and Sabine Schulte im Walde. 2014. Chasing Hypernyms in Vector Spaces with Entropy. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, Volume 2: Short Papers, pages 38-42, Gothenburg, Sweden. Association for Computational Linguistics.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Bidirectional Attention Flow for Machine Comprehension",
"authors": [
{
"first": "Minjoon",
"middle": [],
"last": "Seo",
"suffix": ""
},
{
"first": "Aniruddha",
"middle": [],
"last": "Kembhavi",
"suffix": ""
},
{
"first": "Ali",
"middle": [],
"last": "Farhadi",
"suffix": ""
},
{
"first": "Hannaneh",
"middle": [],
"last": "Hajishirzi",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1611.01603"
]
},
"num": null,
"urls": [],
"raw_text": "Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional Attention Flow for Machine Comprehension. arXiv:1611.01603 [cs].",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Improving Hypernymy Detection with an Integrated Path-based and Distributional Method",
"authors": [
{
"first": "Vered",
"middle": [],
"last": "Shwartz",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "2389--2398",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vered Shwartz, Yoav Goldberg, and Ido Dagan. 2016. Improving Hypernymy Detection with an Integrated Path-based and Distributional Method. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2389-2398.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "Hypernyms under Siege: Linguistically-motivated Artillery for Hypernymy Detection",
"authors": [
{
"first": "Vered",
"middle": [],
"last": "Shwartz",
"suffix": ""
},
{
"first": "Enrico",
"middle": [],
"last": "Santus",
"suffix": ""
},
{
"first": "Dominik",
"middle": [],
"last": "Schlechtweg",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "65--75",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vered Shwartz, Enrico Santus, and Dominik Schlechtweg. 2017. Hypernyms under Siege: Linguistically-motivated Artillery for Hyper- nymy Detection. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguis- tics: Volume 1, Long Papers, pages 65-75.",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Perelygin",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Chuang",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Potts",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1631--1642",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631-1642.",
"links": null
},
"BIBREF59": {
"ref_id": "b59",
"title": "ERNIE 2.0: A Continual Pre-training Framework for Language Understanding",
"authors": [
{
"first": "Yu",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Shuohuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yukun",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Shikun",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Haifeng",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.12412"
]
},
"num": null,
"urls": [],
"raw_text": "Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Hao Tian, Hua Wu, and Haifeng Wang. 2019. ERNIE 2.0: A Continual Pre-training Frame- work for Language Understanding. arXiv:1907. 12412 [cs].",
"links": null
},
"BIBREF60": {
"ref_id": "b60",
"title": "LSTM neural networks for language modeling",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Sundermeyer",
"suffix": ""
},
{
"first": "Ralf",
"middle": [],
"last": "Schl\u00fcter",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2012,
"venue": "Thirteenth Annual Conference of the International Speech Communication Association",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin Sundermeyer, Ralf Schl\u00fcter, and Hermann Ney. 2012. LSTM neural networks for language modeling. In Thirteenth Annual Conference of the International Speech Communication Association.",
"links": null
},
"BIBREF61": {
"ref_id": "b61",
"title": "Poincar\u00e9 GloVe: Hyperbolic Word Embeddings",
"authors": [
{
"first": "Alexandru",
"middle": [],
"last": "Tifrea",
"suffix": ""
},
{
"first": "Gary",
"middle": [],
"last": "B\u00e9cigneul",
"suffix": ""
},
{
"first": "Octavian-Eugen",
"middle": [],
"last": "Ganea",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.06546"
]
},
"num": null,
"urls": [],
"raw_text": "Alexandru Tifrea, Gary B\u00e9cigneul, and Octavian- Eugen Ganea. 2018. Poincar\u00e9 GloVe: Hyper- bolic Word Embeddings. arXiv:1810.06546 [cs].",
"links": null
},
"BIBREF62": {
"ref_id": "b62",
"title": "Introduction to the CoNLL-2003 Shared Task: Language-independent Named Entity Recognition",
"authors": [
{
"first": "Erik",
"middle": [
"F"
],
"last": "Tjong Kim Sang",
"suffix": ""
},
{
"first": "Fien",
"middle": [],
"last": "De Meulder",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003",
"volume": "4",
"issue": "",
"pages": "142--147",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 Shared Task: Language-independent Named Entity Rec- ognition. In Proceedings of the Seventh Con- ference on Natural Language Learning at HLT-NAACL 2003 -Volume 4, CONLL '03, pages 142-147, Edmonton, Canada. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF63": {
"ref_id": "b63",
"title": "Order-Embeddings of Images and Language",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Vendrov",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Kiros",
"suffix": ""
},
{
"first": "Sanja",
"middle": [],
"last": "Fidler",
"suffix": ""
},
{
"first": "Raquel",
"middle": [],
"last": "Urtasun",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1511.06361"
]
},
"num": null,
"urls": [],
"raw_text": "Ivan Vendrov, Ryan Kiros, Sanja Fidler, and Raquel Urtasun. 2015. Order-Embeddings of Images and Language. arXiv:1511.06361 [cs].",
"links": null
},
"BIBREF64": {
"ref_id": "b64",
"title": "Word Representations via Gaussian Embedding",
"authors": [
{
"first": "Luke",
"middle": [],
"last": "Vilnis",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6623"
]
},
"num": null,
"urls": [],
"raw_text": "Luke Vilnis and Andrew McCallum. 2014. Word Representations via Gaussian Embedding. arXiv:1412.6623 [cs].",
"links": null
},
"BIBREF65": {
"ref_id": "b65",
"title": "HyperLex: A Large-Scale Evaluation of Graded Lexical Entailment",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Daniela",
"middle": [],
"last": "Gerz",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2017,
"venue": "Computational Linguistics",
"volume": "43",
"issue": "4",
"pages": "781--835",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ivan Vuli\u0107, Daniela Gerz, Douwe Kiela, Felix Hill, and Anna Korhonen. 2017. HyperLex: A Large- Scale Evaluation of Graded Lexical Entailment. Computational Linguistics, 43(4):781-835.",
"links": null
},
"BIBREF66": {
"ref_id": "b66",
"title": "Specialising Word Vectors for Lexical Entailment",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Nikola",
"middle": [],
"last": "Mrk\u0161i\u0107",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1710.06371"
]
},
"num": null,
"urls": [],
"raw_text": "Ivan Vuli\u0107 and Nikola Mrk\u0161i\u0107. 2017. Specialising Word Vectors for Lexical Entailment. arXiv: 1710.06371 [cs].",
"links": null
},
"BIBREF67": {
"ref_id": "b67",
"title": "Bilateral Multi-perspective Matching for Natural Language Sentences",
"authors": [
{
"first": "Zhiguo",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Wael",
"middle": [],
"last": "Hamza",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Florian",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 26th International Joint Conference on Artificial Intelligence, IJCAI'17",
"volume": "",
"issue": "",
"pages": "4144--4150",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhiguo Wang, Wael Hamza, and Radu Florian. 2017. Bilateral Multi-perspective Matching for Natural Language Sentences. In Proceedings of the 26th International Joint Conference on Arti- ficial Intelligence, IJCAI'17, pages 4144-4150, Melbourne, Australia. AAAI Press.",
"links": null
},
"BIBREF68": {
"ref_id": "b68",
"title": "Learning to Distinguish Hypernyms and Co-Hyponyms",
"authors": [
{
"first": "Julie",
"middle": [],
"last": "Weeds",
"suffix": ""
},
{
"first": "Daoud",
"middle": [],
"last": "Clarke",
"suffix": ""
},
{
"first": "Jeremy",
"middle": [],
"last": "Reffin",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Weir",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Keller",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "2249--2259",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julie Weeds, Daoud Clarke, Jeremy Reffin, David Weir, and Bill Keller. 2014. Learn- ing to Distinguish Hypernyms and Co-Hyponyms. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 2249-2259.",
"links": null
},
"BIBREF69": {
"ref_id": "b69",
"title": "From Paraphrase Database to Compositional Paraphrase Model and Back",
"authors": [
{
"first": "John",
"middle": [],
"last": "Wieting",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Karen",
"middle": [],
"last": "Livescu",
"suffix": ""
}
],
"year": 2015,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "3",
"issue": "",
"pages": "345--358",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2015. From Paraphrase Database to Compositional Paraphrase Model and Back. Transactions of the Association for Computational Linguistics, 3:345-358.",
"links": null
},
"BIBREF70": {
"ref_id": "b70",
"title": "RC-NET: A General Framework for Incorporating Knowledge into Word Representations",
"authors": [
{
"first": "Chang",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Yalong",
"middle": [],
"last": "Bai",
"suffix": ""
},
{
"first": "Jiang",
"middle": [],
"last": "Bian",
"suffix": ""
},
{
"first": "Bin",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Gang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Xiaoguang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Tie-Yan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 23rd ACM International Conference on Conference on Information and Knowledge Management -CIKM '14",
"volume": "",
"issue": "",
"pages": "1219--1228",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chang Xu, Yalong Bai, Jiang Bian, Bin Gao, Gang Wang, Xiaoguang Liu, and Tie-Yan Liu. 2014. RC-NET: A General Framework for Incorporating Knowledge into Word Representations. In Proceedings of the 23rd ACM International Conference on Conference on Information and Knowledge Management - CIKM '14, pages 1219-1228, Shanghai, China. ACM Press.",
"links": null
},
"BIBREF72": {
"ref_id": "b72",
"title": "XLNet: Generalized Autoregressive Pretraining for Language Understanding",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zihang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Le. 2019. XLNet: Generalized Autoregressive Pretraining for Language Understanding. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d' Alch\u00e9-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "can exclusively model antonymy, Tifrea et al. (2018) and Nguyen et al. (2017) can only model hypernymy whereas Mrk\u0161i\u0107 et al. (2016); Mrk\u0161i\u0107 et al. (2017) can model synonymy and antonymy, and Vuli\u0107 and Mrk\u0161i\u0107 (2017) can handle synonymy, antonymy, and hypernymy relations."
},
"TABREF2": {
"content": "<table/>",
"type_str": "table",
"num": null,
"text": "Statistics for lexical relation pairs extracted from WordNet.",
"html": null
},
"TABREF3": {
"content": "<table><tr><td>is the</td></tr></table>",
"type_str": "table",
"num": null,
"text": "Similarity and relatedness results for baselines and LEXSUB. The results indicate that LEXSUB outperforms all the baselines on relatedness tasks and is competitive on the similarity tasks. This indicates that our model retains the distributional information better than the other models while also learning synonymy and antonymy relations.",
"html": null
},
"TABREF4": {
"content": "<table/>",
"type_str": "table",
"num": null,
"text": "Hypernymy evaluation results for baselines and LEXSUB. LEXSUB considerably outperforms all the other methods and the Vanilla on nearly all hypernymy tasks. We attribute this performance to our novel loss function formulation for asymmetric relations and the separation of concerns imposed by the LEXSUB.",
"html": null
},
"TABREF6": {
"content": "<table><tr><td>lists the top five neighbors for selected</td></tr><tr><td>query words for each of the lexical subspaces of</td></tr><tr><td>the LEXSUB, as well as the main vector space. The</td></tr><tr><td>distance metric used for computing the neighbors</td></tr><tr><td>for main vector space, synonymy, hypernymy,</td></tr><tr><td>and meronymy subspaces are d, d proj r d asym r , respectively. We see that most of the closest , d asym , and r</td></tr><tr><td>neighbors in the learned subspace are words that</td></tr><tr><td>are in the specified lexical relation with the query</td></tr><tr><td>words.</td></tr></table>",
"type_str": "table",
"num": null,
"text": "",
"html": null
},
"TABREF7": {
"content": "<table><tr><td>, d asym r</td><td>, and d asym r</td><td>, respectively.</td></tr></table>",
"type_str": "table",
"num": null,
"text": "Neighborhoods for the query words for the main vector space, as well as each of the lexical subspaces. Words in bold letters indicate that the given word is related to the query word by the said lexical relation. The distance metric used for computing the neighbors for main vector space, synonymy, hypernymy, and meronymy subspaces are d, d proj r",
"html": null
},
"TABREF10": {
"content": "<table/>",
"type_str": "table",
"num": null,
"text": "Mean shift comparison between baselines and LEXSUB models.",
"html": null
},
"TABREF11": {
"content": "<table><tr><td/><td/><td/><td colspan=\"2\">Relatedness Tasks</td><td colspan=\"3\">Similarity Tasks</td></tr><tr><td colspan=\"2\">Models</td><td colspan=\"6\">men3k(\u03c1) WS-353R(\u03c1) Simlex(\u03c1) Simverb(\u03c1)</td></tr><tr><td colspan=\"2\">Vanilla</td><td/><td>0.5488</td><td>0.3917</td><td>0.3252</td><td colspan=\"2\">0.2870</td></tr><tr><td colspan=\"3\">ad-hoc LEXSUB</td><td>0.5497</td><td>0.3943</td><td>0.3489</td><td colspan=\"2\">0.3215</td></tr><tr><td>(a) Models</td><td colspan=\"4\">Similarity (\u03c1) Directionality (Acc)</td><td colspan=\"3\">Classification (Acc)</td></tr><tr><td/><td/><td>Hyperlex</td><td>wbless</td><td>bibless</td><td>bless</td><td>leds</td><td>eval</td><td>weeds</td></tr><tr><td>Vanilla</td><td/><td>0.1354</td><td>0.5309</td><td>0.5129</td><td colspan=\"3\">0.1202 0.6987 0.2402 0.5473</td></tr><tr><td>adhoc LEXSUB</td><td/><td>0.1639</td><td>0.5362</td><td>0.5220</td><td colspan=\"3\">0.1237 0.7029 0.2456 0.5476</td></tr><tr><td colspan=\"8\">(b) Intrinsic evaluation results for ad-hoc models in hypernymy classification tasks.</td></tr><tr><td>Models</td><td/><td colspan=\"6\">NER(F1) SST(Acc) SNLI(Acc) SQuAD(EM) QQP(Acc)</td></tr><tr><td>Vanilla</td><td/><td>86.67</td><td>85.78</td><td>83.99</td><td colspan=\"2\">68.22</td><td>87.83</td></tr><tr><td colspan=\"2\">ad-hoc LEXSUB</td><td>86.73</td><td>86.00</td><td>84.00</td><td colspan=\"2\">68.50</td><td>88.33</td></tr><tr><td/><td/><td colspan=\"5\">(c) Extrinsic Evaluation results (Setup 1) for ad-hoc models.</td></tr></table>",
"type_str": "table",
"num": null,
"text": "Intrinsic evaluation results for ad-hoc models in word similarity and relatedness tasks.",
"html": null
},
"TABREF12": {
"content": "<table><tr><td>can better model hierarchical relations like</td></tr><tr><td>hypernymy.</td></tr></table>",
"type_str": "table",
"num": null,
"text": "Intrinsic and extrinsic experiment results for the ad-hoc LEXSUB. The Vanilla model here refers to language model embeddings trained on Wikitext-103 without the lexical constraints. Ad-hoc LEXSUB outperforms the Vanilla embeddings on both intrinsic and extrinsic tasks indicating the gains from post-hoc LEXSUB can be extended to the ad-hoc formulation.",
"html": null
},
"TABREF13": {
"content": "<table><tr><td/><td/><td/><td colspan=\"2\">Relatedness Tasks</td><td colspan=\"3\">Similarity Tasks</td></tr><tr><td colspan=\"2\">Models</td><td colspan=\"6\">men3k(\u03c1) WS-353R(\u03c1) Simlex(\u03c1) Simverb(\u03c1)</td></tr><tr><td colspan=\"2\">Vanilla</td><td/><td>0.7375</td><td>0.4770</td><td>0.3705</td><td colspan=\"2\">0.2275</td></tr><tr><td colspan=\"3\">Retrofitting</td><td>0.7451</td><td>0.4662</td><td>0.4561</td><td colspan=\"2\">0.2884</td></tr><tr><td colspan=\"3\">Counterfitting</td><td>0.6034</td><td>0.2820</td><td>0.5605</td><td colspan=\"2\">0.4260</td></tr><tr><td colspan=\"2\">LEAR</td><td/><td>0.5024</td><td>0.2300</td><td>0.7273</td><td colspan=\"2\">0.7050</td></tr><tr><td colspan=\"2\">LEXSUB</td><td/><td>0.7562</td><td>0.4787</td><td>0.4838</td><td colspan=\"2\">0.3371</td></tr><tr><td>(a) Models</td><td colspan=\"4\">Similarity (\u03c1) Directionality (Acc)</td><td colspan=\"3\">Classification (Acc)</td></tr><tr><td/><td colspan=\"2\">Hyperlex</td><td>wbless</td><td>bibless</td><td>bless</td><td>leds</td><td>eval</td><td>weeds</td></tr><tr><td>Vanilla</td><td/><td>0.1352</td><td>0.5101</td><td>0.4894</td><td colspan=\"3\">0.1115 0.7164 0.2404 0.5335</td></tr><tr><td>Retrofitting</td><td/><td>0.1718</td><td>0.5603</td><td>0.5469</td><td colspan=\"3\">0.1440 0.7337 0.2648 0.5846</td></tr><tr><td>Counterfitting</td><td/><td>0.3440</td><td>0.6196</td><td>0.6071</td><td colspan=\"3\">0.1851 0.7344 0.3296 0.6342</td></tr><tr><td>LEAR</td><td/><td>0.4346</td><td>0.6779</td><td>0.6683</td><td colspan=\"3\">0.2815 0.7413 0.3623 0.6926</td></tr><tr><td>LEXSUB</td><td/><td>0.5327</td><td>0.8228</td><td>0.7252</td><td colspan=\"3\">0.5884 0.9290 0.4359 0.9101</td></tr><tr><td>(b) Models</td><td/><td colspan=\"6\">NER(F1) SST-2(Acc) SNLI(Acc) SQuAD(EM) QQP(Acc)</td></tr><tr><td>Vanilla</td><td/><td>87.88</td><td>87.31</td><td>85.00</td><td/><td>64.23</td><td>87.08</td></tr><tr><td>retrofitting</td><td/><td>85.88</td><td>87.26</td><td>84.61</td><td/><td>64.91</td><td>86.98</td></tr><tr><td colspan=\"2\">Counterfitting</td><td>80.00</td><td>87.53</td><td>84.93</td><td/><td>63.70</td><td>86.82</td></tr><tr><td>LEAR</td><td/><td>80.23</td><td>88.08</td><td>83.70</td><td/><td>62.96</td><td>86.01</td></tr><tr><td>LEXSUB</td><td/><td>88.02</td><td>88.69</td><td>85.03</td><td/><td>64.95</td><td>87.65</td></tr><tr><td>(c)</td><td/><td/><td/><td/><td/><td/></tr></table>",
"type_str": "table",
"num": null,
"text": "Intrinsic evaluation results for for baselines and LEXSUB trained with lexical resource from LEAR. Hypernymy evaluation results for baselines and LEXSUB trained with lexical resource from LEAR. Extrinsic evaluation results (Setup 1) for baselines and LEXSUB trained with lexical resource from LEAR.",
"html": null
},
"TABREF14": {
"content": "<table/>",
"type_str": "table",
"num": null,
"text": "",
"html": null
},
"TABREF15": {
"content": "<table/>",
"type_str": "table",
"num": null,
"text": "Systems 32, pages 5753-5763. Curran Associates, Inc. Mo Yu and Mark Dredze. 2014. Improving Lexical Embeddings with Semantic Knowledge. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 545-550, Baltimore, Maryland. Association for Computational Linguistics.",
"html": null
}
}
}
}