{
"paper_id": "S17-1017",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:29:22.741327Z"
},
"title": "The (Too Many) Problems of Analogical Reasoning with Word Vectors",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Rogers",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Massachusetts Lowell Lowell",
"location": {
"region": "MA",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Aleksandr",
"middle": [],
"last": "Drozd",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Bofang",
"middle": [],
"last": "Li",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of China",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper explores the possibilities of analogical reasoning with vector space models. Given two pairs of words with the same relation (e.g. man:woman :: king:queen), it was proposed that the offset between one pair of the corresponding word vectors can be used to identify the unknown member of the other pair (\u2212\u2212\u2192 king \u2212 \u2212\u2212\u2192 man + \u2212 \u2212\u2212\u2212\u2212 \u2192 woman = ? \u2212\u2212\u2212\u2192 queen). We argue against such \"linguistic regularities\" as a model for linguistic relations in vector space models and as a benchmark, and we show that the vector offset (as well as two other, better-performing methods) suffers from dependence on vector similarity.",
"pdf_parse": {
"paper_id": "S17-1017",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper explores the possibilities of analogical reasoning with vector space models. Given two pairs of words with the same relation (e.g. man:woman :: king:queen), it was proposed that the offset between one pair of the corresponding word vectors can be used to identify the unknown member of the other pair (\u2212\u2212\u2192 king \u2212 \u2212\u2212\u2192 man + \u2212 \u2212\u2212\u2212\u2212 \u2192 woman = ? \u2212\u2212\u2212\u2192 queen). We argue against such \"linguistic regularities\" as a model for linguistic relations in vector space models and as a benchmark, and we show that the vector offset (as well as two other, better-performing methods) suffers from dependence on vector similarity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "This paper considers the phenomenon of \"vectororiented reasoning\" via linear vector offset in vector space models (VSMs) (Mikolov et al., 2013c,a) . Given two pairs of words with the same linguistic relation (woman:man :: king:queen), it has been proposed that the offset between one pair of word vectors can be used to identify the unknown member of a different pair of words via solving proportional analogy problems ( \u2212\u2212\u2192 king \u2212 \u2212\u2212\u2192 man + \u2212 \u2212\u2212\u2212\u2212 \u2192 woman = ? \u2212\u2212\u2212\u2192 queen), as shown in Fig. 1 . We will refer to this method as 3CosAdd.",
"cite_spans": [
{
"start": 121,
"end": 146,
"text": "(Mikolov et al., 2013c,a)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 486,
"end": 492,
"text": "Fig. 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
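{
"text": "A minimal sketch of the 3CosAdd procedure described above (illustrative only, not the authors' evaluation code), assuming a dict named vectors that maps words to L2-normalized numpy arrays:\n\nimport numpy as np\n\ndef three_cos_add(vectors, a, a_prime, b, exclude_sources=True):\n    # predicted point: vec(b) - vec(a) + vec(a'), renormalized\n    target = vectors[b] - vectors[a] + vectors[a_prime]\n    target /= np.linalg.norm(target)\n    # the standard formulation excludes the three source words from the answers\n    excluded = {a, a_prime, b} if exclude_sources else set()\n    best_word, best_sim = None, -np.inf\n    for word, vec in vectors.items():\n        if word in excluded:\n            continue\n        sim = float(np.dot(vec, target))  # cosine, since vectors are unit-length\n        if sim > best_sim:\n            best_word, best_sim = word, sim\n    return best_word",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},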
{
"text": "This approach attracted a lot of attention, both as the \"poster child\" of word embeddings, and for its potential practical utility. Given the vital role that analogical reasoning plays in human cognition for discovering new knowledge and understanding new concepts, automated analogical reasoning could become a game-changer in many fields, providing a universal mechanism for detecting linguistic relations (Turney, 2008) and word sense disambiguation (Federici et al., 1997) . It is already used in many downstream NLP tasks, such as splitting compounds (Daiber et al., 2015) , semantic search (Cohen et al., 2015) , cross-language relational search (Duc et al., 2012) , to name a few. Figure 2 : Left panel shows vector offsets for three word pairs illustrating the gender relation. Right panel shows a different projection, and the singular/plural relation for two words. In high-dimensional space, multiple relations can be embedded for a single word.",
"cite_spans": [
{
"start": 408,
"end": 422,
"text": "(Turney, 2008)",
"ref_id": "BIBREF31"
},
{
"start": 453,
"end": 476,
"text": "(Federici et al., 1997)",
"ref_id": "BIBREF9"
},
{
"start": 556,
"end": 577,
"text": "(Daiber et al., 2015)",
"ref_id": "BIBREF2"
},
{
"start": 596,
"end": 616,
"text": "(Cohen et al., 2015)",
"ref_id": "BIBREF1"
},
{
"start": 652,
"end": 670,
"text": "(Duc et al., 2012)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To evaluate the vector offset method, we used vectors generated by the RNN toolkit of Mikolov (2012) . Vectors of dimensionality 80, 320, and 640 were generated, along with a composite of several systems, with total dimensionality 1600. The systems were trained with 320M words of Broadcast News data as described in (Mikolov et al., 2011a) , and had an 82k vocabulary. Table 2 shows results for both RNNLM and LSA vectors on the syntactic task. LSA was trained on the same data as the RNN. We see that the RNN vectors capture significantly more syntactic regularity than the LSA vectors, and do remarkably well in an absolute sense, answering more than one in three questions correctly. 2 In Table 3 we compare the RNN vectors with those based on the methods of Collobert and Weston (2008) and Mnih and Hinton (2009) , as implemented by (Turian et al., 2010) and available online 3 Since different words are present in these datasets, We conduc mantic test se tion category, ilarity to each then uses the are evaluated in the task, S \u03c1 and MaxDi ues are better. report the ave Figure 1 : Linguistic relations modeled by linear vector offset (Mikolov et al., 2013c) The idea that linguistic relations are mirrored in neat geometrical relations (as shown in Fig. 1 ) is also intuitively appealing, and 3CosAdd has become a popular benchmark. Roughly, the current VSMs score between 40% (Lai et al., 2016) and 75% (Pennington et al., 2014) on the Google test set (Mikolov et al., 2013a) . However, in fact performance varies widely for different types of relations (Levy and Goldberg, 2014; K\u00f6per et al., 2015; .",
"cite_spans": [
{
"start": 1142,
"end": 1165,
"text": "(Mikolov et al., 2013c)",
"ref_id": "BIBREF24"
},
{
"start": 1385,
"end": 1403,
"text": "(Lai et al., 2016)",
"ref_id": "BIBREF16"
},
{
"start": 1412,
"end": 1437,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF26"
},
{
"start": 1461,
"end": 1484,
"text": "(Mikolov et al., 2013a)",
"ref_id": "BIBREF22"
},
{
"start": 1563,
"end": 1588,
"text": "(Levy and Goldberg, 2014;",
"ref_id": "BIBREF19"
},
{
"start": 1589,
"end": 1608,
"text": "K\u00f6per et al., 2015;",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 1078,
"end": 1086,
"text": "Figure 1",
"ref_id": null
},
{
"start": 1257,
"end": 1263,
"text": "Fig. 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "6"
},
{
"text": "One way to explain the current limitations is to attribute them to the imperfections of the current models and/or corpora with which they are built: with this view, in a perfect VSM, any linguistic relation should be recoverable via vector offset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "6"
},
{
"text": "The alternative to be explored in this paper is that perhaps natural language semantics is more complex than suggested by Fig. 1 , and there may be both theoretical and mathematical issues with analogical reasoning with word vectors and its 3CosAdd implementation.",
"cite_spans": [],
"ref_spans": [
{
"start": 122,
"end": 128,
"text": "Fig. 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "6"
},
{
"text": "We present a series of experiments with two popular VSMs (GloVe and Word2Vec) to show that the accuracy of 3CosAdd depends on the proximity of the target vector to its source (i.e. \u2212\u2212\u2212\u2192 queen should be quite similar to \u2212\u2212\u2192 king). Since not all linguistic relations can be expected to result in high word vector proximity, the method is limited to those that happen to be so in a given VSM. Furthermore, its accuracy also varies because the \"linguistic regularities\" are actually not so regular, and should not be expected to be so. We also compare 3CosAdd to two alternative methods to investigate whether better algorithms can improve on these and other accounts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "6"
},
{
"text": "2 Background: \"Relational Similarity\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "6"
},
{
"text": "The most fundamental term for what 3CosAdd is supposed to capture is actually not analogy, but rather relational similarity, i.e. the idea that pairs of words may hold similar relations to those between other pairs of words. For example, the relation between cat and feline is similar to the relation between dog and canine. Notably, this is similarity rather than identity: \"instances of a single relation may still have significant variability in how characteristic they are of that class\" (Jurgens et al., 2012) . Analogy as it is known in philosophy and logic is something quite different. The \"classical\" analogical reasoning follows roughly this template: objects X and Y share properties a, b, and c; therefore, they may also share the property d. For example, both Earth and Mars orbit the Sun, have at least one moon, revolve on axis, and are subject to gravity; therefore, if Earth supports life, so could Mars (Bartha, 2016) .",
"cite_spans": [
{
"start": 492,
"end": 514,
"text": "(Jurgens et al., 2012)",
"ref_id": "BIBREF14"
},
{
"start": 921,
"end": 935,
"text": "(Bartha, 2016)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "6"
},
{
"text": "The NLP move from relational similarity to analogy follows the use of the term by P. Turney, who distinguishes between attributional similarity between two words and relational similarity between two pairs of words. On this interpretation, two word pairs that have a high degree of relational similarity are analogous (Turney, 2006) .",
"cite_spans": [
{
"start": 318,
"end": 332,
"text": "(Turney, 2006)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "6"
},
{
"text": "In terms of practical NLP tasks, Turney et al. (2003) introduced the task of solving SAT 1 analogy problems by choosing from several provided options. These problems were formulated as proportional analogies, written in the form a : a :: b : b (a is to a as b is to b )",
"cite_spans": [
{
"start": 33,
"end": 53,
"text": "Turney et al. (2003)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "6"
},
{
"text": "It is this use of the term \"analogy\" that Mikolov et al. (2013c) followed in proposing the 3CosAdd method. They formulated the task as selecting a single best fitting vector out of the whole vocabu-lary of the VSM. It became known as word analogy task, but in its core it is still basically estimation of relational similarity, and could be formulated as such: given a pair of words a and a , find how they are related and then find word b , such that it has a similar relation with the word b. A crucial difference is that the graded, non-binary nature of relational similarity is now not in focus: the goal is to find a single correct answer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "6"
},
{
"text": "The dataset that came to be known as the Google analogy test set (Mikolov et al., 2013a) , included 14 linguistic relations with 19544 questions in total. It has become one of the most popular benchmarks for VSMs. This evaluation paradigm assumes that:",
"cite_spans": [
{
"start": 65,
"end": 88,
"text": "(Mikolov et al., 2013a)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "6"
},
{
"text": "(1) Words in similar linguistic relations should in principle be recoverable via relational similarity to known word pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "6"
},
{
"text": "(2) 3CosAdd score reflects the extent to which a given VSM encodes linguistic relations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "6"
},
{
"text": "(1) became dubious when it was shown that accuracy of 3CosAdd varies widely between categories (Levy and Goldberg, 2014) , and even the best-performing GloVe model scores under 30% on the more challenging Bigger Analogy Test Set (BATS) ). It appears that not all relations can be identified in this way, with lexical semantic relations such as synonymy and antonymy being particularly difficult (K\u00f6per et al., 2015; Vylomova et al., 2016) . The assumption of a single best-fitting candidate answer is also being targeted (Newman-Griffis et al., 2017).",
"cite_spans": [
{
"start": 95,
"end": 120,
"text": "(Levy and Goldberg, 2014)",
"ref_id": "BIBREF19"
},
{
"start": 395,
"end": 415,
"text": "(K\u00f6per et al., 2015;",
"ref_id": "BIBREF15"
},
{
"start": 416,
"end": 438,
"text": "Vylomova et al., 2016)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "6"
},
{
"text": "(2) was refuted when demonstrated that some relations missed by 3CosAdd could be recovered with a supervised method, and therefore the information was present in the VSM -just not recoverable with 3CosAdd.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "6"
},
{
"text": "Let us consider why both (1) and (2) failed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "6"
},
{
"text": "3 What Does 3CosAdd Really Do?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "6"
},
{
"text": "We present a series of experiments performed with BATS dataset. Although there are more results on analogy task published with Google test than with BATS, Google test only contains 15 types of linguistic relations, and these happen to be the easier ones . relations (98,000 questions in total). BATS covers most relations in the Google set, but it adds many new and more difficult relations, balanced across derivational and inflectional morphology, lexicographic and encyclopedic semantics (10 relations of each type). Thus BATS provides a less flattering, but more accurate estimate of the capacity for analogical reasoning in the current VSMs. We use pre-trained GloVe vectors by Pennington et al. 2014, released by the authors 2 and trained on Gigaword 5 + Wikipedia 2014 (300 dimensions, window size 10). We also experiment with Word2Vec vectors (Mikolov et al., 2013b) released by the authors 3 , trained on a subcorpus of Google news (also with 300 dimensions).",
"cite_spans": [
{
"start": 851,
"end": 874,
"text": "(Mikolov et al., 2013b)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3.1"
},
{
"text": "The evaluation with 3CosAdd and LRCos methods was conducted with the Python script that accompanies BATS. We also added an implementation of 3CosMul, a multiplicative objective proposed by Levy and Goldberg (2014) , now available in the same script 4 . Since 3CosMul requires normalization, we used normalized GloVe and Word2Vec vectors in all experiments.",
"cite_spans": [
{
"start": 189,
"end": 213,
"text": "Levy and Goldberg (2014)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3.1"
},
{
"text": "Questions with words not in the model vocabulary were excluded (0.01% BATS questions for GloVe and 0.016% for Word2Vec).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3.1"
},
{
"text": "Let us remember that 3CosAdd as initially formulated by Mikolov et al. (2013c) excludes the three source vectors a, a and b from the pool of possible answers. Linzen 2016showed that if that is not done, the accuracy drops dramatically, hitting zero for 9 out of 15 Google test categories.",
"cite_spans": [
{
"start": 56,
"end": 78,
"text": "Mikolov et al. (2013c)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The \"Honest\" 3CosAdd",
"sec_num": "3.2"
},
{
"text": "Let us investigate what happens on BATS data, split by 4 relation types. The rows of Fig. 2 represent all questions of a given category, with darker color indicating higher percentage of predicted vectors being the closest to a, a , b, b , or any other vector. shows that if we do not exclude the source vectors, b is the most likely to be predicted; in derivational and encyclopedic categories a is also possible in under 30% of cases. b is as unlikely to be predicted as a, or any other vector.",
"cite_spans": [],
"ref_spans": [
{
"start": 85,
"end": 91,
"text": "Fig. 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "The \"Honest\" 3CosAdd",
"sec_num": "3.2"
},
{
"text": "This experiment suggests that the addition of the offset between a and a typically has a very small effect on the b vector -not sufficient to induce a shift to a different vector on its own. This would in effect limit the search space of 3CosAdd to the close neighborhood of the b vector.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The \"Honest\" 3CosAdd",
"sec_num": "3.2"
},
{
"text": "It explains another phenomenon pointed out by Linzen (2016): for the plural noun category in the The numerical values for all data can be found in the Appendix. Google test set 70% accuracy was achieved by simply taking the closest neighbor of the vector b, while 3CosAdd improved the accuracy by only 10%. That would indeed be expected if most singular (a) and plural (a ) forms of the same noun were so similar, that subtracting them would result in a nearly-null vector which would not change much when added to b.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The \"Honest\" 3CosAdd",
"sec_num": "3.2"
},
{
"text": "Levy and Goldberg (2014, p.173) suggested that 3CosAdd method is \"mathematically equivalent to seeking a word (b ) which is similar to b and a but is different from a.\" We examined the similarity between all source vector pairs, looking not only at the actual, top-1 accuracy of the 3CosAdd (i.e. the vector the closest to the hypothetical vector), but also at whether the correct answer was found in the top-3 and top-5 neighbors of the predicted vector. For each similarity bin we also estimated how many questions of the whole BATS dataset there were. The results are presented in Fig. 3 . Our data indicates that, indeed, for all combinations of source vectors, the accuracy of 3CosAdd decreases as their distance in vector space increases. It is the most successful when all three source vectors are relatively close to each other and the target vector. This is in line with the above evidence from the \"honest\" 3CosAdd: if the offset is typically small, for it to lead to the target vector, that target vector should be close.",
"cite_spans": [],
"ref_spans": [
{
"start": 584,
"end": 590,
"text": "Fig. 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Distance to the Target Vector",
"sec_num": "3.3"
},
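{
"text": "The per-bin analysis above can be reproduced with a small helper along the following lines (an illustrative sketch, not the BATS evaluation script): for each question it takes the cosine similarity between a chosen pair of source vectors and a flag for whether the method answered correctly, and reports the share of questions and the top-1 accuracy per similarity bin.\n\nimport numpy as np\n\ndef accuracy_by_similarity_bin(similarities, correct, n_bins=10):\n    # similarities: cosine between two source vectors for each question (assumed non-negative)\n    # correct: per-question flag for whether the method found the right answer\n    similarities = np.asarray(similarities, dtype=float)\n    correct = np.asarray(correct, dtype=float)\n    edges = np.linspace(0.0, 1.0, n_bins + 1)\n    report = {}\n    for lo, hi in zip(edges[:-1], edges[1:]):\n        mask = (similarities >= lo) & (similarities < hi)\n        share = 100.0 * mask.mean()\n        accuracy = 100.0 * correct[mask].mean() if mask.any() else np.nan\n        report[(round(lo, 1), round(hi, 1))] = (share, accuracy)\n    return report",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distance to the Target Vector",
"sec_num": "3.3"
},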
{
"text": "Consider also the ranks of the b vectors in the neighborhood of b , shown in Fig. 3f . For nearly 40% of the successful questions b was within 10 neighbors of b -and over 40% of low-accuracy questions were over 90 neighbors away.",
"cite_spans": [],
"ref_spans": [
{
"start": 77,
"end": 84,
"text": "Fig. 3f",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Distance to the Target Vector",
"sec_num": "3.3"
},
{
"text": "As predicted by Levy et al., b and a vectors do not exhibit the same clear trend for higher accuracy with higher similarity that is observed in all other cases ( Fig. 3f ). However, in experiments with only 20 morphological categories we did observe the same trend for b and a as for the other vector pairs (see Fig. 4 ). This is counter-intuitive, and requires further examination. The observed correlation between the accuracy of 3CosAdd and the distance to the target vector could explain in particular the overall lower performance on BATS derivational morphology questions (only 0.08% top-1 accuracy) as opposed to inflectional (0.59%) or encyclopedic semantics (0.26%). \u2212\u2212\u2192 man and \u2212 \u2212\u2212\u2212\u2212 \u2192 woman could be expected to be reasonably similar distributionally, as they combine with many of the same verbs: both men and women sit, sleep, drink etc. However, the same could not be said of words derived with prefixes that change part of speech. Going from",
"cite_spans": [],
"ref_spans": [
{
"start": 162,
"end": 169,
"text": "Fig. 3f",
"ref_id": "FIGREF3"
},
{
"start": 312,
"end": 318,
"text": "Fig. 4",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Distance to the Target Vector",
"sec_num": "3.3"
},
{
"text": "To make sure that the above trend is not specific to GloVe, we repeated these experiments with Word2Vec, which exhibited the same trends. All data is presented in Appendix A.1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distance to the Target Vector",
"sec_num": "3.3"
},
{
"text": "Note that the dependence of 3CosAdd on similarity is not entirely straightforward: Fig. 3b shows that for the highest similarity (0.9 and more) there is actually a drop in accuracy. The same trend was observed with Word2Vec (Fig 10 in Appendix 1) . Theoretically, it could be attributed to there not being much data in the highest similarity range; but BATS has 98,000 questions, and even 0.1% of that is considerable. The culprit is the \"dishonesty\" of 3CosAdd: as discussed above, it excludes the source vectors a, a , and b from the pool of possible answers. Not only does this mask the real extent of the difference between a and a , but it also creates a fundamental difficulty with categories where the source vectors may be the correct answers. This is what explains the unexpected drops in accuracy at the highest similarity between vectors b and a . \u2212 \u2212\u2212\u2212 \u2192 ?white, the correct answer would a priori be excluded. In BATS data, this factor affects several semantic categories, including country:language, thing:color, animal:young, and animal:shelter.",
"cite_spans": [],
"ref_spans": [
{
"start": 83,
"end": 90,
"text": "Fig. 3b",
"ref_id": "FIGREF3"
},
{
"start": 224,
"end": 246,
"text": "(Fig 10 in Appendix 1)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Uniqueness of a Relation",
"sec_num": "3.4"
},
{
"text": "If solving proportional analogies with word vectors is like shooting, the farther away the target vector is, the more difficult it should be to hit. Also, we can hypothesize that the more crowded a particular region is, the more difficult it should be to hit a particular target.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Density of Vector Neighborhoods",
"sec_num": "3.5"
},
{
"text": "However, density of vector neighborhoods is not as straightforward to measure as vector similarity. We could look at average similarity between, e.g., top-10 ranking neighbors, but that could misrepresent the situation if some neighbors were very close and some were very far.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Density of Vector Neighborhoods",
"sec_num": "3.5"
},
{
"text": "In this experiment we estimate density as the similarity to the 5th neighbor. The higher it is, the more highly similar neighbors a word vector has. This approach is shown in Fig. 5 . The results seem counter-intuitive: denser neighborhoods actually yield higher accuracy (although there are virtually no cases of very tight neighborhoods). One explanation could be its reverse correlation with distance: if the neighborhood of b is sparse, the closest word is likely to be relatively far away. But that runs contrary to the above findings that closer source vectors improve the accuracy of 3CosAdd. Then we could expect lower accuracy in sparser neighborhoods.",
"cite_spans": [],
"ref_spans": [
{
"start": 175,
"end": 181,
"text": "Fig. 5",
"ref_id": "FIGREF7"
}
],
"eq_spans": [],
"section": "Density of Vector Neighborhoods",
"sec_num": "3.5"
},
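{
"text": "A minimal sketch of this density estimate (illustrative only), assuming a matrix of L2-normalized row vectors:\n\nimport numpy as np\n\ndef kth_neighbor_similarity(matrix, idx, k=5):\n    # matrix: (vocab_size, dim) array of unit-length word vectors\n    # returns the cosine similarity of word idx to its k-th nearest neighbor\n    sims = matrix @ matrix[idx]          # cosines to all words\n    sims[idx] = -np.inf                  # exclude the word itself\n    top_k = np.partition(sims, -k)[-k:]  # k largest similarities (unsorted)\n    return float(np.min(top_k))          # similarity to the k-th closest neighbor",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Density of Vector Neighborhoods",
"sec_num": "3.5"
},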
{
"text": "In this respect, too, GloVe and Word2Vec behave similarly (Fig. 15 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 58,
"end": 66,
"text": "(Fig. 15",
"ref_id": "FIGREF7"
}
],
"eq_spans": [],
"section": "Density of Vector Neighborhoods",
"sec_num": "3.5"
},
{
"text": "We repeat the above experiments on GloVe with 3CosMul, a multiplication-based alternative to 3CosAdd proposed by Levy and Goldberg (2014) : As 3CosMul does not explicitly calculate the predicted vector, we did not plot the similarity of b to the predicted vector. But for other vector pairs shown in Fig. 6 , we can see that 3CosMul, The numerical values for all data can be found in the Appendix. like 3CosAdd, has much higher chances of success where target vectors are close to the source.",
"cite_spans": [
{
"start": 113,
"end": 137,
"text": "Levy and Goldberg (2014)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 300,
"end": 306,
"text": "Fig. 6",
"ref_id": "FIGREF10"
}
],
"eq_spans": [],
"section": "Comparison with Other Methods",
"sec_num": "4"
},
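{
"text": "A minimal sketch of the 3CosMul objective above (illustrative, not the script used in our experiments); cosine similarities are shifted to [0, 1] as in Levy and Goldberg (2014), and the vectors are assumed to be L2-normalized:\n\nimport numpy as np\n\ndef three_cos_mul(vectors, a, a_prime, b, eps=0.001):\n    # score every candidate d by cos(d, a') * cos(d, b) / (cos(d, a) + eps)\n    best_word, best_score = None, -np.inf\n    for word, vec in vectors.items():\n        if word in (a, a_prime, b):  # exclude the source words, as in 3CosAdd\n            continue\n        # shift cosines from [-1, 1] to [0, 1] so that the product stays well-behaved\n        sim_a = (float(np.dot(vec, vectors[a])) + 1) / 2\n        sim_a_prime = (float(np.dot(vec, vectors[a_prime])) + 1) / 2\n        sim_b = (float(np.dot(vec, vectors[b])) + 1) / 2\n        score = sim_a_prime * sim_b / (sim_a + eps)\n        if score > best_score:\n            best_word, best_score = word, score\n    return best_word",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with Other Methods",
"sec_num": "4"
},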
{
"text": "We also consider LRCos, a method based on supervised learning from a set of word pairs . LRCos reinterprets the analogy task as follows: given a set of word pairs (e.g. brother:sister, husband:wife, man:woman, etc.), the available examples of the class of the target b vector (sister, wife, woman, etc.) and randomly selected negative examples are used to learn a representation of the target class with a supervised classifier. The question is this: what word is the closest to \u2212\u2212\u2192 king, but belongs to the \"women\" class?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with Other Methods",
"sec_num": "4"
},
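{
"text": "A minimal sketch of the LRCos idea described above (a simplification, not the script that accompanies BATS): a logistic regression classifier is trained on known members of the target class against randomly sampled negatives, and each candidate word is scored by the product of its predicted class probability and its cosine similarity to b. The argument name class_examples is an illustrative assumption.\n\nimport numpy as np\nfrom sklearn.linear_model import LogisticRegression\n\ndef lrcos(vectors, class_examples, b, n_negative=500, seed=0):\n    # class_examples: known members of the target class (e.g. sister, wife, woman)\n    rng = np.random.default_rng(seed)\n    positives = [w for w in class_examples if w in vectors]\n    # random negatives; for simplicity, possible overlap with positives is ignored\n    negatives = list(rng.choice(list(vectors), size=n_negative, replace=False))\n    X = np.array([vectors[w] for w in positives + negatives])\n    y = np.array([1] * len(positives) + [0] * len(negatives))\n    clf = LogisticRegression(max_iter=1000).fit(X, y)\n    best_word, best_score = None, -np.inf\n    for word, vec in vectors.items():\n        if word == b:\n            continue\n        p_class = clf.predict_proba(vec.reshape(1, -1))[0, 1]\n        score = p_class * float(np.dot(vec, vectors[b]))  # cosine, unit vectors\n        if score > best_score:\n            best_word, best_score = word, score\n    return best_word",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with Other Methods",
"sec_num": "4"
},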
{
"text": "With LRCos it is only meaningful to look at the similarity of b to b (Fig. 7) . Once again, we see the same trend: closer targets are easier to hit. However, if we look at overall accuracy, there is a big difference between the three methods. Fig. 8b shows that the accuracy of LRCos is much higher than the top-1 3CosAdd or 3Cos-Mul. Moreover, its \"honest\" version ( Fig. 8a) performs just as well as the \"dishonest\" one. These results are consistent with the results reported by . As for 3CosMul, Levy et al. (2015) show that 3CosMul outperforms 3CosAdd in PPMI, SGNS, GloVe and SVD models with the Google dataset, sometimes yielding 10-25% improvement. Our BATS experiment confirms the overall superiority of 3CosMul to 3CosAdd, although the difference is less dramatic.",
"cite_spans": [
{
"start": 499,
"end": 517,
"text": "Levy et al. (2015)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [
{
"start": 69,
"end": 77,
"text": "(Fig. 7)",
"ref_id": "FIGREF11"
},
{
"start": 243,
"end": 250,
"text": "Fig. 8b",
"ref_id": null
},
{
"start": 368,
"end": 376,
"text": "Fig. 8a)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparison with Other Methods",
"sec_num": "4"
},
{
"text": "Thus LRCos considerably outdoes its competitors, although it does not manage to avoid the similarity problem. We attribute this to the set-based, supervised nature of LRCos that gives it an edge on a different problem that affects both 3CosAdd and 3CosMul: the assumption of \"linguistic regularities\" from which we started. Figure 8 : LRCos performance on BATS and a provides access to certain features combinable with vector b to detect b , and that such offset should be more or less constant for all words in a given linguistic relations. Table 2 shows that this does not happen in a reliable way (data: BATS category D06 \"re+verb\"). Both correct and incorrect answers lie in about the same similarity range, so we cannot attribute the failures to the reliance of 3CosAdd on close neighborhoods. The distance from \u2212 \u2212\u2212\u2212 \u2192 marry to \u2212 \u2212\u2212\u2212\u2212\u2212 \u2192 remarry is the same; thus it must be the case that the offset between different a and a is not the same, and leads to different answers -with a frustratingly small margin of error.",
"cite_spans": [],
"ref_spans": [
{
"start": 324,
"end": 332,
"text": "Figure 8",
"ref_id": null
},
{
"start": 542,
"end": 549,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Comparison with Other Methods",
"sec_num": "4"
},
{
"text": "Source corpora are noisy, and it is tempting to blame almost anything on that. It could be literal text-processing noise (e.g. not quite cleaned HTML data and ad texts) or, more broadly, any kind of information in the VSM that is irrelevant to the question at hand. This includes polysemy: for a word-level VSM the difference between \u2212\u2212\u2192 king and \u2212\u2212\u2212\u2192 queen is not exactly the same as the difference between \u2212\u2212\u2192 man and \u2212 \u2212\u2212\u2212\u2212 \u2192 woman just for the existence of the Queen band (although that factor should not affect the \"re-\" prefix verbs in Table 2 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 542,
"end": 549,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Can We Just Blame the Corpus?",
"sec_num": "5.2"
},
{
"text": "In addition to irrelevant information, there is also missing information. Corpora of written texts are a priori not the same source of input as what children get when they learn their language. Natural language semantics relies on much data that the current VSMs do not have, including multimodal data and frequencies of events too commonplace to be mentioned in writing (Erk, 2016, p.18) .",
"cite_spans": [
{
"start": 371,
"end": 388,
"text": "(Erk, 2016, p.18)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Can We Just Blame the Corpus?",
"sec_num": "5.2"
},
{
"text": "This means that the distributional difference between \u2212 \u2192 tell and \u2212 \u2212\u2212 \u2192 retell (or \u2212 \u2212\u2212\u2212 \u2192 marry and \u2212 \u2212\u2212\u2212\u2212\u2212 \u2192 remarry, or both pairs) does not necessarily reflect the full range of the relevant difference, which could perhaps have helped to bring the vector offset calculation closer to the desired outcome. On this view, in the ideal world all word vectors with the \"re-\" feature would be nearly aligned. Some blame could also be passed to the condensed vectors such as SVD or neural word embeddings, which blend distributional features in a non-transparent way, potentially obscuring the relevant ones.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Can We Just Blame the Corpus?",
"sec_num": "5.2"
},
{
"text": "The current source corpora and VSMs could certainly be improved. But both linguistics and philosophy suggest that there are also issues with the idea of linguistic relations being so regular.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Can We Just Blame the Corpus?",
"sec_num": "5.2"
},
{
"text": "In theory, according to the distributional hypothesis, we would expect the relatively straightforward \"repeated action\" paradigm of verbs with and without the prefix \"re-\" in Table 2 to surface distributionally in the use of adverbs like \"again\". However, we have no reason to expect this to happen in quantitatively exactly the same way for all the verbs, even in an \"ideal\" corpus. And variation would lead to irregularities that we observe.",
"cite_spans": [],
"ref_spans": [
{
"start": 175,
"end": 182,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Semantics is Messy",
"sec_num": "5.3"
},
{
"text": "In fact, such variation would make VSMs more like human mental lexicon, not less. A wellknown problem in psychology is the asymmetry of similarity judgments, upon which relational similarity and analogical reasoning are based. Logically a is like b is equivalent to b is like a, but humans do not necessarily agree with both statements to the same degree (Tversky, 1977) .",
"cite_spans": [
{
"start": 355,
"end": 370,
"text": "(Tversky, 1977)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantics is Messy",
"sec_num": "5.3"
},
{
"text": "Consider the \"re-\" prefix examples above. We could expect 100% success by native English speakers on a \"complete the verb paradigm\" task, because they would be inevitably made aware of the \"add re-\" rule during its completion. Even so, processing time would vary due to such factors as frequencies and prototypicality. The psychological evidence is piling for certain gradedness in mental representation of morphological rules: people can rate the same structure differently on complexity (\"settlement\" is reported more affixed that \"government\"), similarity judgments for semantically transparent and non-transparent bases are continuous, and there are graded priming effects for both orthographic, semantic and phonological similarity between derived words and their roots (Hay and Baayen, 2005) .",
"cite_spans": [
{
"start": 775,
"end": 797,
"text": "(Hay and Baayen, 2005)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantics is Messy",
"sec_num": "5.3"
},
{
"text": "There are several connectionist proposals to simulate asymmetry through biases, saliency features, or structural alignment (Thomas and Mareschal, 1997, p.758) . The irregularities we observe in the VSMs could perhaps even be welcomed as another way to model this phenomenon -although it remains to be seen to what extent the parallel we draw here is appropriate.",
"cite_spans": [
{
"start": 123,
"end": 158,
"text": "(Thomas and Mareschal, 1997, p.758)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantics is Messy",
"sec_num": "5.3"
},
{
"text": "As a side note, let us remember that equations such as \u2212\u2212\u2192 king \u2212 \u2212\u2212\u2192 man + \u2212 \u2212\u2212\u2212\u2212 \u2192 woman = \u2212\u2212\u2212\u2192 queen should only be interpreted distributionally, although it is tempting to suppose that they reflect something like semantic features. That would be misleading on several accounts. First of all, the 3CosAdd math is commutative, which would be dubious for semantic features 5 . Secondly, it would bring us to the wall that componential analysis in linguistic semantics has hit a long time ago: semantic features defy definitions 6 , they only apply to a portion of vocabulary, and they impose binary oppositions that are psycholinguistically unrealistic (Leech, 1981, pp.117-119) . woman result certainly \"femaleness\" -or perhaps \"maleness\", or some mysterious \"malefemale gender change\" semantic feature?",
"cite_spans": [
{
"start": 654,
"end": 679,
"text": "(Leech, 1981, pp.117-119)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantics is Messy",
"sec_num": "5.3"
},
{
"text": "Let us now come back to the fact that the \"linguistic regularities\" are in fact relying on relational similarity (Section 2), and relational similarity is not something binary. That takes us straight to the most fundamental difficulty with analogy as it is known in philosophy and logic. Analogy is undeniably fundamental to human reasoning as an instrument for discovery and understanding the unknown from the known -but it is not, and has never been an inference rule.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analogy Is Not an Inference Rule",
"sec_num": "5.4"
},
{
"text": "Consider the example where Mars is similar to Earth in several ways, and therefore could be supporting life. This analogy does not guarantee the existence of Martians, and it could even be similarly applied to even less suitable planets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analogy Is Not an Inference Rule",
"sec_num": "5.4"
},
{
"text": "Basically, the problem with analogy is that not all similarities warrant all conclusions, and establishing valid analogies requires much case-by-case consideration. For this and some other reasons, analogy has long been rejected in generative linguistics as a mechanism for language acquisition through discovery, although now it is making a comeback (Itkonen, 2005, p.67-75) .",
"cite_spans": [
{
"start": 351,
"end": 375,
"text": "(Itkonen, 2005, p.67-75)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Analogy Is Not an Inference Rule",
"sec_num": "5.4"
},
{
"text": "This general difficulty with analogical reasoning -it does work in humans, but selectively, so to say, -is inherited by the so-called proportional analogies of the a : a :: b : b kind. A case in point is their use in schools as verbal reasoning tests. In 2005 analogies were removed from SAT, its criticisms including ambiguity, guesswork and puzzle-like nature (Pringle, 2003) . It is also telling that SAT analogy problems came with a set of potential answers to choose from, because otherwise students would supply a range of answers with varying degrees of incorrectness.",
"cite_spans": [
{
"start": 362,
"end": 377,
"text": "(Pringle, 2003)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Analogy Is Not an Inference Rule",
"sec_num": "5.4"
},
{
"text": "In case of the \"re-\" prefix above, once again, we could expect 100% success rate by humans who could see the \"add re-\" pattern; but semantic BATS questions would yield more variation. Consider the question \"trout is to river as lion is to \". Some would say den, thinking of the river as the trout's \"home\", but some could say savanna in the broader habitat terms; cage or zoo or safari park or even circus would all be valid to various degrees. BATS accepts several answer options, but it is hardly feasible to list them all for all cases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analogy Is Not an Inference Rule",
"sec_num": "5.4"
},
{
"text": "Given the above, the question is: if analogical reasoning requires much case-by-case consideration in humans, what should we expect from VSMs with a single linear algebra operation?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analogy Is Not an Inference Rule",
"sec_num": "5.4"
},
{
"text": "6 Implications for Evaluation of VSMs",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analogy Is Not an Inference Rule",
"sec_num": "5.4"
},
{
"text": "The analogy task continues to enjoy immense popularity in the NLP community as the standard evaluation task for VSMs. We have already mentioned two problems with the task: the problem of the Google test scores being flattering to the VSMs , and also 3CosAdd disadvantaging them, because the required semantic information may be encoded in more complex ways .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analogy Is Not an Inference Rule",
"sec_num": "5.4"
},
{
"text": "What the present work adds to the discussion is the demonstration of how strongly the accuracy on the analogy task depends on the target vector being relatively close to the source in the vector space model -not only for 3CosAdd, but also 3CosMul and LRCos. This is in fact a fundamental problem that is encountered in many other NLP tasks 7 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analogy Is Not an Inference Rule",
"sec_num": "5.4"
},
{
"text": "That problem brings about the following question: what have we been evaluating with 3CosAdd all this time?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analogy Is Not an Inference Rule",
"sec_num": "5.4"
},
{
"text": "The answer seems to be this: analogy task scores indicate to what extent the semantic space of a given VSM was structured in a way that, for each word category, favored the linguistic relation that happened to be picked by the creators of the particular test dataset. BATS makes this clearer, because it is well balanced across different types of relations. Most models score well on morphological inflections -because morphological forms of the same word are highly distributionally similar and are likely to be close. But we do not see equal success for synonyms, suffixes, colors and other categories -because it is hard to expect of any one model to \"guess\" which words should have synonyms as closest neighbors and which words should be close to their antonyms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analogy Is Not an Inference Rule",
"sec_num": "5.4"
},
{
"text": "As a matter of fact, for a general-purpose VSM we would not want that: every word can participate in hundreds of linguistic relations that we may be interested in, but we cannot expect them all to be close neighbors. We would want a VSM whose vector neighborhoods simply reflect whatever distributional properties were observed in a corpus. The challenge is to find reasoning methods that could reliably identify linguistic relations from vectors at any distance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analogy Is Not an Inference Rule",
"sec_num": "5.4"
},
{
"text": "Given the irregularities discussed in section 5, these methods would also have to rely on a more linguistically and cognitively realistic model of how meanings are reflected in distributional properties of words. LRCos made a step in the right direction, as it does not rely on unique and neatly aligned word pairs, but it can only work for relations between coherent word classes. That excludes many lexicographic relations like synonyms (car is to automobile as snake is to serpent), frame-semantic or encyclopedic relations (white is to snow as red is to rose).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analogy Is Not an Inference Rule",
"sec_num": "5.4"
},
{
"text": "While it would be highly desirable to have automated reasoning about linguistic relations with VSMs as a powerful, all-purpose tool, it is so far a remote goal. We investigated the potential of the vector offset method in solving the so-called proportional analogies, which rely on one pair of words with a known linguistic relation to identify the missing member of another pair of words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "We have presented a series of experiments showing that the success of the linear vector offset (as well as two better-performing methods) depends on the structure of the VSM: the targets that are further away in the vector space have worse chances of being recovered. This is a crucial limitation: no model could possibly hold all related words close in the vector space, as there are many thousands of linguistic relations, and many are context-dependent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Furthermore, the offsets of different word vector pairs appear to not be so regular, even for relatively straightforward linguistic relations. We argue that the observed irregularities should not just be blamed on the corpus. There is a number of theoretical issues with the very approach to linguistic relations as something neat and binary. We hope to drive attention to the graded nature of relational similarity that underlies analogical reasoning, and the need for automated reasoning algorithms to become more psychologically plausible in order to become more successful. Accuracy (%) Bin top 1 top 3 top 5 0 -0.1 8.6 1.8 4.1 5.3 0.1 -0.2 11.0 7.4 15.2 17.5 0.2 -0.3 13.1 20.8 35.1 44.4 0.3 -0.4 14.1 36.7 56.5 65.8 0.4 -0.5 15.9 47.9 66.0 71.7 0.5 -0.6 14.0 63.5 81.2 85.6 0.6 -0.7 10.1 76.4 87.9 92.0 0.7 -0.8 10.2 85.6 93.6 95.5 0.8 -0.9 3.1 88.5 96.7 96.7 0.9 -1 0.0 --- Similarity Share Accuracy (%) Bin top 1 top 3 top 5 0.0 -0.1 2.6 2.1 2.1 4.2 0.1 -0.2 6.9 5.4 7.7 7.7 0.2 -0.3 11.2 9.0 15.7 22.9 0.3 -0.4 15.1 19.1 41.7 48.8 0.4 -0.5 16.5 43.2 62.0 70.1 0.5 -0.6 18.2 67.4 77.4 82.4 0.6 -0.7 18.5 87.3 92.5 95.4 0.7 -0.8 9.6 91.1 93.9 95.0 0.8 -0.9 1.2 69.6 95.7 100 0.9 -1 0.2 33.3 100 100 ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Scholastic Aptitude Test.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://nlp.stanford.edu/projects/ glove/ 3 https://code.google.com/archive/p/ word2vec/ 4 http://vsm.blackbird.pw/tools/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "E.g. in taxonomy construction it was found helpful to narrow the semantic space with domains or clusters, essentially \"zooming in\" on certain relations(Fu et al., 2014;Espinosa Anke et al., 2016).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was partially supported by JST CREST Grant number JPMJCR1303, JSPS KAKENHI Grant number JP17K12739, and performed under the auspices of Real-world Big-Data Computation Open Innovation Laboratory, Japan.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
},
{
"text": "Accuracy (%) Bin top 1 top 3 top 5 0 -0.1 8.5 7.2 13.1 16.0 0.1 -0.2 10.9 12.0 21.0 25.7 0.2 -0.3 13.1 12.4 22.7 28.0 0.3 -0.4 14.0 16.9 29.2 35.4 0.4 -0.5 15.9 21.8 34.6 41.3 0.5 -0.6 14.0 31.8 46.7 53.3 0.6 -0.7 10.1 51.4 65.7 70.4 0.7 -0.8 10.3 54.1 73.6 78.2 0.8 -0.9 3.1 56.2 76.7 81.9 0.9 -1 0.1 61.4 77. Similarity Share Accuracy (%) Bin top 1 top 3 top 5 0 -0.1 2.3 3.9 6.9 9.2 0.1 -0.2 7.0 6.7 12.7 15.9 0.2 -0.3 11.5 5.9 12.2 16.0 0.3 -0.4 15.0 11.4 20.0 25.1 0.4 -0.5 16.5 17.3 29.4 35.9 0.5 -0.6 18.0 31.5 45.6 52.1 0.6 -0.7 18.4 48.4 62.8 68.3 0.7 -0.8 9.7 52.6 69.7 75.3 0.8 -0.9 1.3 37.5 53.1 59.7 0.9 -1 0.3 32.6 48.5 56.8 Accuracy (%) Bin top 1 top 3 top 5 0 -0.1 27.6 10.4 19.3 24.5 0.1 -0.2 25.8 20.9 33.4 38.8 0.2 -0.3 21.2 33.5 47.9 53.4 0.3 -0.4 13.3 42.1 58.2 64.0 0.4 -0.5 6.2 49.6 65.7 71.6 0.5 -0.6 3.9 45.9 67.5 73.8 0.6 -0.7 0.9 61.6 77.2 82.3 0.7 -0.8 0.5 60.4 77.1 80.9 0.8 -0.9 0.0 91.2 94.1 97.1 0.9 -1 0.6 10.0 24.1 31.6 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 0.0 Similarity Share Accuracy (%) Bin top 1 top 3 top 5 0 -0.1 2.3 1.3 2.3 3.0 0.1 -0.2 7.0 2.7 4.3 5.3 0.2 -0.3 11.6 1.7 4.0 5.5 0.3 -0.4 15.0 1.9 5.5 8.4 0.4 -0.5 16.5 6.3 15.6 23.2 0.5 -0.6 18.0 27.2 47.7 58.1 0.6 -0.7 18.4 57.8 79.2 85.7 0.7 -0.8 9.7 78.1 91.8 95.4 0.8 -0.9 1.3 83.5 93.6 96.2 0.9 -1 0.2 90.5 100 100 Similarity Share Accuracy (%) Bin top 1 top 3 top 5 0 -0.1 3.0 0.1 0.7 0.9 0.1 -0.2 8.3 0.4 1.1 1.7 0.2 -0.3 15.7 0.4 1.3 2.2 0.3 -0.4 20.7 1.9 5.6 9.6 0.4 -0.5 18.7 12.7 35.9 52.5 0.5 -0.6 14.8 50.4 83.2 91.1 0.6 -0.7 12.6 81.4 92.1 93.0 0.7 -0.8 5.5 86.3 87.6 87.7 0.8 -0.9 0.7 90.2 90.2 90.2 0.9 -1 0.0 100 100 100 Similarity Share Accuracy (%) Bin top 1 top 3 top 5 0 -0.1 8.5 6.0 10.1 12.0 0.1 -0.2 10.9 11.7 18.8 22.1 0.2 -0.3 13.1 11.5 20.0 23.8 0.3 -0.4 14.0 16.8 26.9 31.9 0.4 -0.5 15.9 21.9 32.8 38.2 0.5 -0.6 14.0 33.0 45.5 51.1 0.6 -0.7 10.1 53.7 65.2 69.4 0.7 -0.8 10.3 57.7 73.4 77.4 0.8 -0.9 3.1 60.5 77.4 81.8 0.9 -1 0.1 61.4 77.3 77.3 Similarity Share Accuracy (%) Bin top 1 top 3 top 5 0 -0.1 27.6 9.1 16.0 19.9 0.1 -0.2 25.8 21.8 31.9 36.3 0.2 -0.3 21.2 35.8 47.6 51.9 0.3 -0.4 13.3 44.7 58.4 63.1 0.4 -0.5 6.2 50.7 64.6 69.6 0.5 -0.6 3.9 46.0 62.7 68.9 0.6 -0.7 0.9 59.1 71.8 77.3 0.7 -0.8 0.5 48.7 66.1 69.9 0.8 -0.9 0.0 88.2 91.2 94.1 0.9 -1 0.6 10.6 21.3 27.8 Similarity Share Accuracy (%) Bin top 1 top 3 top 5 0 -0.1 8.5 1.4 2.6 3.5 0.1 -0.2 11.0 4.3 8.3 10.3 0.2 -0.3 13.1 7.6 13.9 17.4 0.3 -0.4 14.0 13.6 23.0 27.7 0.4 -0.5 15.9 19.6 30.8 36.4 0.5 -0.6 14.0 31.9 49.9 57.6 0.6 -0.7 10.1 56.9 73.9 79.3 0.7 -0.8 10.3 74.8 88.2 91.4 0.8 -0.9 3.1 81.3 93.7 95.7 0.9 -1 0.0 --- Similarity Share Accuracy (%) Bin top 1 top 3 top 5 0 -0.1 52.0 23.1 33.2 37.6 0.1 -0.2 25.8 29.2 40.3 44.5 0.2 -0.3 14.3 32.9 43.7 48.0 0.3 -0.4 5.8 34.9 46.2 50.4 0.4 -0.5 1.6 36.2 46.3 49.7 0.5 -0.6 0.4 34.8 42.4 46.9 0.6 -0.7 0.1 30.9 41.8 45.5 0.7 -0.8 0.0 47.6 52.4 52.4 0.8 -0.9 0.0 ---0.9 -1 0.0 0.0 6.2 12.5 Similarity Share Accuracy (%) Bin top 1 top 3 top 5 0 -10 36.5 53.5 68.9 74.3 10 -20 6.8 25.0 38.5 45.1 20 -30 4.5 17.7 28.1 33.5 30 -40 3.0 22.2 34.8 40.9 40 -50 2.1 24.0 34.7 40.7 50 -60 1.8 16.5 27.6 31.8 60 -70 1.2 10.2 22.7 27.8 70 -80 1.2 16.9 25.9 31.2 80 -90 1.2 23.9 34.6 38.9 90 -100 41.7 7.2 12.6 15.3 ---0.6 -0.7 5.3 1.7 3.9 5.7 0.7 -0.8 71.9 23.3 33.1 37.4 0.8 -0.9 22.7 44.8 59.4 64.5 0.9 -1 0.1 0.0 0.0 2.1 Figure 21 : Similarity between b and its 5th neighbor Figure 24: 3CosAdd vs 3CosMul vs LRCos (\"honest\" version)",
"cite_spans": [],
"ref_spans": [
{
"start": 3356,
"end": 3365,
"text": "Figure 21",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Supplementary Material",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Analogy and analogical reasoning",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Bartha",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Bartha. 2016. Analogy and analogical reason- ing. In Edward N. Zalta, editor, The Stanford Encyclopedia of Philosophy, Metaphysics Research Lab, Stanford University. Winter 2016 edition. https://plato.stanford.edu/archives/win2016/entries/ reasoning-analogy/.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Expansion-by-analogy: a vector symbolic approach to semantic search",
"authors": [
{
"first": "Trevor",
"middle": [],
"last": "Cohen",
"suffix": ""
},
{
"first": "Dominic",
"middle": [],
"last": "Widdows",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Rindflesch",
"suffix": ""
}
],
"year": 2015,
"venue": "Quantum Interaction",
"volume": "",
"issue": "",
"pages": "54--66",
"other_ids": {
"DOI": [
"10.1007/978-3-319-15931-75"
]
},
"num": null,
"urls": [],
"raw_text": "Trevor Cohen, Dominic Widdows, and Thomas Rind- flesch. 2015. Expansion-by-analogy: a vec- tor symbolic approach to semantic search. In Quantum Interaction, Springer, pages 54-66. https://doi.org/10.1007/978-3-319-15931-7 5.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Splitting compounds by semantic analogy",
"authors": [
{
"first": "Joachim",
"middle": [],
"last": "Daiber",
"suffix": ""
},
{
"first": "Lautaro",
"middle": [],
"last": "Quiroz",
"suffix": ""
},
{
"first": "Roger",
"middle": [],
"last": "Wechsler",
"suffix": ""
},
{
"first": "Stella",
"middle": [],
"last": "Frank",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 1st Deep Machine Translation Workshop",
"volume": "",
"issue": "",
"pages": "20--28",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joachim Daiber, Lautaro Quiroz, Roger Wechsler, and Stella Frank. 2015. Splitting compounds by seman- tic analogy. In Proceedings of the 1st Deep Machine Translation Workshop. Charles University in Prague, Praha, Czech Republic, 3-4 September 2015, pages 20-28. http://www.aclweb.org/anthology/W15-",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Word embeddings, analogies, and machine learning: beyond king -man + woman = queen",
"authors": [
{
"first": "Aleksandr",
"middle": [],
"last": "Drozd",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Gladkova",
"suffix": ""
},
{
"first": "Satoshi",
"middle": [],
"last": "Matsuoka",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "3519--3530",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aleksandr Drozd, Anna Gladkova, and Satoshi Mat- suoka. 2016. Word embeddings, analogies, and machine learning: beyond king -man + woman = queen. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. pages 3519-3530. https://www.aclweb.org/anthology/C/C16/C16- 1332.pdf.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Cross-language latent relational search between Japanese and English languages using a Web corpus",
"authors": [
{
"first": "Danushka",
"middle": [],
"last": "Nguyen Tuan Duc",
"suffix": ""
},
{
"first": "Mitsuru",
"middle": [],
"last": "Bollegala",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ishizuka",
"suffix": ""
}
],
"year": 2012,
"venue": "ACM Transactions on Asian Language Information Processing (TALIP)",
"volume": "11",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nguyen Tuan Duc, Danushka Bollegala, and Mitsuru Ishizuka. 2012. Cross-language latent relational search between Japanese and English languages us- ing a Web corpus. ACM Transactions on Asian Lan- guage Information Processing (TALIP) 11(3):11. http://dl.acm.org/citation.cfm?id=2334805.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "What do you know about an alligator when you know the company it keeps",
"authors": [
{
"first": "Katrin",
"middle": [],
"last": "Erk",
"suffix": ""
}
],
"year": 2016,
"venue": "Semantics and Pragmatics",
"volume": "9",
"issue": "17",
"pages": "1--63",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katrin Erk. 2016. What do you know about an alligator when you know the company it keeps. Semantics and Pragmatics 9(17):1-63.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Supervised distributional hypernym discovery via domain adaptation",
"authors": [
{
"first": "Luis",
"middle": [],
"last": "Espinosa Anke",
"suffix": ""
},
{
"first": "Jose",
"middle": [],
"last": "Camacho-Collados",
"suffix": ""
},
{
"first": "Claudio",
"middle": [
"Delli"
],
"last": "Bovi",
"suffix": ""
},
{
"first": "Horacio",
"middle": [],
"last": "Saggion",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luis Espinosa Anke, Jose Camacho-Collados, Clau- dio Delli Bovi, and Horacio Saggion. 2016. Su- pervised distributional hypernym discovery via do- main adaptation. In Proceedings of the 2016",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "424--435",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Conference on Empirical Methods in Natural Language Processing. Association for Computa- tional Linguistics, Austin, Texas, pages 424-435. https://aclweb.org/anthology/D16-1041.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Inferring semantic similarity from distributional evidence: An analogy-based approach to word sense disambiguation",
"authors": [
{
"first": "Stefano",
"middle": [],
"last": "Federici",
"suffix": ""
},
{
"first": "Simonetta",
"middle": [],
"last": "Montemagni",
"suffix": ""
},
{
"first": "Vito",
"middle": [],
"last": "Pirrelli",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the ACL/EACL Workshop on Automatic Information Extraction and Building of Lexical Semantic Resources for NLP Applications",
"volume": "",
"issue": "",
"pages": "90--97",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stefano Federici, Simonetta Montemagni, and Vito Pir- relli. 1997. Inferring semantic similarity from dis- tributional evidence: An analogy-based approach to word sense disambiguation. In Proceedings of the ACL/EACL Workshop on Automatic Infor- mation Extraction and Building of Lexical Se- mantic Resources for NLP Applications. pages 90-97. http://aclweb.org/anthology/W/W97/W97- 0813.pdf.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Learning semantic hierarchies via word embeddings",
"authors": [
{
"first": "Ruiji",
"middle": [],
"last": "Fu",
"suffix": ""
},
{
"first": "Jiang",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Wanxiang",
"middle": [],
"last": "Che",
"suffix": ""
},
{
"first": "Haifeng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1199--1209",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ruiji Fu, Jiang Guo, Bing Qin, Wanxiang Che, Haifeng Wang, and Ting Liu. 2014. Learn- ing semantic hierarchies via word embeddings. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguis- tics. Association for Computational Linguistics, Baltimore, Maryland, USA, pages 1199-1209.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Analogy-based detection of morphological and semantic relations with word embeddings: What works and what doesn't",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Gladkova",
"suffix": ""
},
{
"first": "Aleksandr",
"middle": [],
"last": "Drozd",
"suffix": ""
},
{
"first": "Satoshi",
"middle": [],
"last": "Matsuoka",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the NAACL-HLT SRW. ACL",
"volume": "",
"issue": "",
"pages": "47--54",
"other_ids": {
"DOI": [
"10.18653/v1/N16-2002"
]
},
"num": null,
"urls": [],
"raw_text": "Anna Gladkova, Aleksandr Drozd, and Satoshi Mat- suoka. 2016. Analogy-based detection of mor- phological and semantic relations with word em- beddings: What works and what doesn't. In Proceedings of the NAACL-HLT SRW. ACL, San Diego, California, June 12-17, 2016, pages 47-54. https://doi.org/10.18653/v1/N16-2002.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Shifting paradigms: Gradient structure in morphology",
"authors": [
{
"first": "Jennifer",
"middle": [
"B"
],
"last": "Hay",
"suffix": ""
},
{
"first": "R",
"middle": [
"Harald"
],
"last": "Baayen",
"suffix": ""
}
],
"year": 2005,
"venue": "Trends in cognitive sciences",
"volume": "9",
"issue": "7",
"pages": "342--348",
"other_ids": {
"DOI": [
"10.1016/j.tics.2005.04.002"
]
},
"num": null,
"urls": [],
"raw_text": "Jennifer B. Hay and R. Harald Baayen. 2005. Shift- ing paradigms: Gradient structure in morphol- ogy. Trends in cognitive sciences 9(7):342-348. https://doi.org/10.1016/j.tics.2005.04.002.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Esa Itkonen. 2005. Analogy as Structure and Process: Approaches in Linguistic, Cognitive Psychology, and Philosophy of Science. Number 14 in Human cognitive processing",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1075/hcp.14"
]
},
"num": null,
"urls": [],
"raw_text": "Esa Itkonen. 2005. Analogy as Structure and Pro- cess: Approaches in Linguistic, Cognitive Psy- chology, and Philosophy of Science. Num- ber 14 in Human cognitive processing. John Benjamins Pub. Co, Amsterdam ; Philadelphia. https://doi.org/10.1075/hcp.14.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Semeval-2012 task 2: measuring degrees of relational similarity",
"authors": [
{
"first": "David",
"middle": [
"A"
],
"last": "Jurgens",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"D"
],
"last": "Turney",
"suffix": ""
},
{
"first": "Saif",
"middle": [
"M"
],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Keith",
"middle": [
"J"
],
"last": "Holyoak",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the First Joint Conference on Lexical and Computational Semantics (*SEM)",
"volume": "",
"issue": "",
"pages": "356--364",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David A. Jurgens, Peter D. Turney, Saif M. Mo- hammad, and Keith J. Holyoak. 2012. Semeval- 2012 task 2: measuring degrees of relational sim- ilarity. In Proceedings of the First Joint Con- ference on Lexical and Computational Semantics (*SEM). Association for Computational Linguistics, Montr\u00e9al, Canada, June 7-8, 2012, pages 356-364. http://dl.acm.org/citation.cfm?id=2387693.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Multilingual reliability and \"semantic\" structure of continuous word spaces",
"authors": [
{
"first": "Maximilian",
"middle": [],
"last": "K\u00f6per",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Scheible",
"suffix": ""
},
{
"first": "Sabine",
"middle": [],
"last": "Schulte Im Walde",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 11th International Conference on Computational Semantics",
"volume": "",
"issue": "",
"pages": "40--45",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maximilian K\u00f6per, Christian Scheible, and Sabine Schulte im Walde. 2015. Multilingual reliability and \"semantic\" structure of continuous word spaces. In Proceedings of the 11th Interna- tional Conference on Computational Semantics. Association for Computational Linguistics, pages 40-45. http://www.aclweb.org/anthology/W15- 01#page=56.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "How to generate a good word embedding?",
"authors": [
{
"first": "Siwei",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Liheng",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2016,
"venue": "IEEE Intelligent Systems",
"volume": "31",
"issue": "6",
"pages": "5--14",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Siwei Lai, Kang Liu, Liheng Xu, and Jun Zhao. 2016. How to generate a good word embed- ding? IEEE Intelligent Systems 31(6):5-14.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Semantics: The Study of Meaning",
"authors": [
{
"first": "Geoffrey",
"middle": [],
"last": "Leech",
"suffix": ""
}
],
"year": 1981,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Geoffrey Leech. 1981. Semantics: The Study of Mean- ing. Harmondsworth: Penguin Books.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Linguistic regularities in sparse and explicit word representations",
"authors": [
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Eighteenth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "171--180",
"other_ids": {
"DOI": [
"10.3115/v1/W14-1618"
]
},
"num": null,
"urls": [],
"raw_text": "Omer Levy and Yoav Goldberg. 2014. Linguistic regu- larities in sparse and explicit word representations. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning. pages 171-180. https://doi.org/10.3115/v1/W14-1618.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Improving distributional similarity with lessons learned from word embeddings",
"authors": [
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2015,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "3",
"issue": "",
"pages": "211--225",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omer Levy, Yoav Goldberg, and Ido Dagan. 2015. Im- proving distributional similarity with lessons learned from word embeddings. Transactions of the As- sociation for Computational Linguistics 3:211-225. http://www.aclweb.org/anthology/Q15-1016.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Issues in evaluating semantic spaces using word analogies",
"authors": [
{
"first": "",
"middle": [],
"last": "Tal Linzen",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the First Workshop on Evaluating Vector Space Representations for NLP. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/W16-2503"
]
},
"num": null,
"urls": [],
"raw_text": "Tal Linzen. 2016. Issues in evaluating semantic spaces using word analogies. In Proceedings of the First Workshop on Evaluating Vector Space Representa- tions for NLP. Association for Computational Lin- guistics. https://doi.org/10.18653/v1/W16-2503.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of International Conference on Learning Representations (ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word repre- sentations in vector space. Proceedings of Inter- national Conference on Learning Representations (ICLR) http://arxiv.org/abs/1301.3781.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in Neural Information Processing Systems 26 (NIPS 2013)",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. 2013b. Distributed repre- sentations of words and phrases and their composi- tionality. In Advances in Neural Information Pro- cessing Systems 26 (NIPS 2013). pages 3111-3119. http://papers.nips.cc/paper/5021-di.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Linguistic regularities in continuous space word representations",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Yih",
"middle": [],
"last": "Wen-Tau",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Zweig",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "746--751",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013c. Linguistic regularities in continuous space word representations. In Proceedings of the 2013 Conference of the North American Chap- ter of the Association for Computational Linguis- tics: Human Language Technologies. Associa- tion for Computational Linguistics, pages 746-751. http://aclweb.org/anthology/N13-1090.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Insights into analogy completion from the biomedical domain",
"authors": [
{
"first": "Denis",
"middle": [],
"last": "Newman-Griffis",
"suffix": ""
},
{
"first": "Albert",
"middle": [
"M"
],
"last": "Lai",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Fosler-Lussier",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1706.02241"
]
},
"num": null,
"urls": [],
"raw_text": "Denis Newman-Griffis, Albert M. Lai, and Eric Fosler- Lussier. 2017. Insights into analogy completion from the biomedical domain. arXiv:1706.02241 [cs] http://arxiv.org/abs/1706.02241.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "GloVe: global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing",
"volume": "12",
"issue": "",
"pages": "1532--1543",
"other_ids": {
"DOI": [
"10.3115/v1/D14-1162"
]
},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christo- pher D. Manning. 2014. GloVe: global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). volume 12, pages 1532-1543. https://doi.org/10.3115/v1/D14-1162.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "College board scores with critics of SAT analogies",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Pringle",
"suffix": ""
}
],
"year": 2003,
"venue": "Los Angeles Times",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Pringle. 2003. College board scores with critics of SAT analogies. Los Angeles Times http://articles.latimes.com/2003",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Connectionism and psychological notions of similarity",
"authors": [
{
"first": "Michael",
"middle": [
"S",
"C"
],
"last": "Thomas",
"suffix": ""
},
{
"first": "Denis",
"middle": [],
"last": "Mareschal",
"suffix": ""
}
],
"year": 1997,
"venue": "The Proceedings of the 19th Annual Conference of the Cognitive Science Society. Mahwah",
"volume": "",
"issue": "",
"pages": "757--762",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael SC Thomas and Denis Mareschal. 1997. Con- nectionism and psychological notions of similar- ity. In The Proceedings of the 19th Annual Con- ference of the Cognitive Science Society. Mah- wah, NJ: Erlbaum, Stanford, USA, pages 757-762. http://eprints.bbk.ac.uk/4611/.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Combining independent modules to solve multiple-choice synonym and analogy problems",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Turney",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"L"
],
"last": "Littman",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Bigham",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Shnayder",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the International Conference on Recent Advances in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "482--489",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Turney, Michael L. Littman, Jeffrey Bigham, and Victor Shnayder. 2003. Combining inde- pendent modules to solve multiple-choice syn- onym and analogy problems. In Proceed- ings of the International Conference on Re- cent Advances in Natural Language Process- ing. pages 482-489. http://nparc.cisti-icist.nrc- cnrc.gc.ca/npsi/ctrl?action=rtdoc&an=8913366.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Similarity of semantic relations",
"authors": [
{
"first": "Peter",
"middle": [
"D"
],
"last": "Turney",
"suffix": ""
}
],
"year": 2006,
"venue": "Computational Linguistics",
"volume": "32",
"issue": "3",
"pages": "379--416",
"other_ids": {
"DOI": [
"10.1162/coli.2006.32.3.379"
]
},
"num": null,
"urls": [],
"raw_text": "Peter D. Turney. 2006. Similarity of semantic rela- tions. Computational Linguistics 32(3):379-416. https://doi.org/10.1162/coli.2006.32.3.379.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "A uniform approach to analogies, synonyms, antonyms, and associations",
"authors": [
{
"first": "Peter",
"middle": [
"D"
],
"last": "Turney",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 22nd International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "905--912",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter D. Turney. 2008. A uniform approach to analogies, synonyms, antonyms, and associa- tions. In Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008). pages 905-912. http://nparc.cisti-icist.nrc- cnrc.gc.ca/npsi/ctrl?action=rtdoc&an=5764174.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Features of similarity",
"authors": [
{
"first": "Amos",
"middle": [],
"last": "Tversky",
"suffix": ""
}
],
"year": 1977,
"venue": "Psychological Review",
"volume": "84",
"issue": "4",
"pages": "327--352",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amos Tversky. 1977. Features of similar- ity. Psychological Review 84(4):327-352.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Take and took, gaggle and goose, book and read: evaluating the utility of vector differences for lexical relation learning",
"authors": [
{
"first": "Ekaterina",
"middle": [],
"last": "Vylomova",
"suffix": ""
},
{
"first": "Laura",
"middle": [],
"last": "Rimmel",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1671--1682",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1158"
]
},
"num": null,
"urls": [],
"raw_text": "Ekaterina Vylomova, Laura Rimmel, Trevor Cohn, and Timothy Baldwin. 2016. Take and took, gag- gle and goose, book and read: evaluating the util- ity of vector differences for lexical relation learn- ing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers). Association for Computa- tional Linguistics, Berlin, Germany, pages 1671- 1682. https://doi.org/10.18653/v1/P16-1158.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"text": "The result of a \u2212 a + b calculation on BATS: source vectors a, a , and b are not excluded.",
"uris": null
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"text": "Fig. 2 shows that if we do not exclude the source vectors, b is the most likely to be predicted; in derivational and encyclopedic categories a is also possible in under 30% of cases. b is as unlikely to be predicted as a, or any other vector. This experiment suggests that the addition of the offset between a and a typically has a very small effect on the b vector -not sufficient to induce a shift to a different vector on its own. This would in effect limit the search space of 3CosAdd to the close neighborhood of the b vector. It explains another phenomenon pointed out by Linzen (2016): for the plural noun category in the",
"uris": null
},
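The FIGREF1 caption above describes how 3CosAdd behaves when the source vectors are not excluded from the candidate set: the a' - a offset is usually too small to move the prediction away from b itself. Below is a minimal numpy sketch of that setup. It is not the authors' code; the toy 4-dimensional vectors are hypothetical, chosen only so that the a' - a offset is small relative to the distance between b and b', which is the regime the caption reports for real embeddings.

# Minimal sketch: 3CosAdd over a toy embedding matrix,
# with and without excluding the source words.
import numpy as np

def three_cos_add(emb, vocab, a, a_prime, b, exclude_sources=True):
    """Return the word whose unit vector is closest (by cosine) to a' - a + b."""
    norm = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    w2i = {w: i for i, w in enumerate(vocab)}
    target = norm[w2i[a_prime]] - norm[w2i[a]] + norm[w2i[b]]
    target = target / np.linalg.norm(target)
    scores = norm @ target                      # cosine similarity to every word
    if exclude_sources:
        for w in (a, a_prime, b):
            scores[w2i[w]] = -np.inf            # the usual trick: drop a, a' and b
    return vocab[int(np.argmax(scores))]

vocab = ["man", "woman", "king", "queen"]       # hypothetical toy vocabulary
emb = np.array([[1.0,   0.0,    0.0,   0.0],    # man
                [0.995, 0.0998, 0.0,   0.0],    # woman: small "gender" offset from man
                [0.0,   0.0,    1.0,   0.0],    # king
                [0.0,   0.0,    0.866, 0.5]])   # queen: further from king than the offset is long
print(three_cos_add(emb, vocab, "man", "woman", "king", exclude_sources=False))  # -> king
print(three_cos_add(emb, vocab, "man", "woman", "king", exclude_sources=True))   # -> queen

Without the exclusion the answer collapses onto b (king), as in the figure; the standard evaluation only reaches queen because it removes the source words by hand.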
"FIGREF2": {
"num": null,
"type_str": "figure",
"text": "rank of b in the neighborhood of b *X-axis labels indicate lower boundary of the corresponding similarity/rank bins.",
"uris": null
},
"FIGREF3": {
"num": null,
"type_str": "figure",
"text": "Accuracy of 3CosAdd method on GloVe vs characteristics of the vector space.",
"uris": null
},
"FIGREF4": {
"num": null,
"type_str": "figure",
"text": "0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 The similarity between b and a on GloVe: morphological BATS categories only.",
"uris": null
},
"FIGREF5": {
"num": null,
"type_str": "figure",
"text": "The vector offset could theoretically solve it, but if the question is \u2212 \u2212\u2212 \u2192 snow: \u2212 \u2212\u2212 \u2192 white :: \u2212\u2212\u2212\u2192 sugar:",
"uris": null
},
"FIGREF7": {
"num": null,
"type_str": "figure",
"text": "The similarity between b and its 5th neighbor",
"uris": null
},
"FIGREF8": {
"num": null,
"type_str": "figure",
"text": "argmax b \u2208V cos(b , b)cos(b , a ) cos(b , a) + \u03b5 (\u03b5 = 0.001 is used to prevent division by zero)",
"uris": null
},
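The FIGREF8 caption is the 3CosMul objective of Levy and Goldberg (2014). A minimal numpy sketch follows, reusing the same embedding-matrix-plus-vocabulary interface as the 3CosAdd sketch above; it is not the authors' code, and the shift of cosines from [-1, 1] into [0, 1] follows Levy and Goldberg's formulation rather than the caption itself.

# Minimal sketch of 3CosMul:
#   argmax_{b' in V}  cos(b', b) * cos(b', a') / (cos(b', a) + eps)
import numpy as np

def three_cos_mul(emb, vocab, a, a_prime, b, eps=0.001):
    norm = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    w2i = {w: i for i, w in enumerate(vocab)}
    def sims(word):
        # cosine of every vocabulary word with `word`, shifted from [-1, 1] to [0, 1]
        return (norm @ norm[w2i[word]] + 1.0) / 2.0
    scores = sims(b) * sims(a_prime) / (sims(a) + eps)   # eps prevents division by zero
    for w in (a, a_prime, b):                            # exclude the source words, as usual
        scores[w2i[w]] = -np.inf
    return vocab[int(np.argmax(scores))]

With the toy vectors from the 3CosAdd sketch, three_cos_mul(emb, vocab, "man", "woman", "king") also returns queen.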
"FIGREF9": {
"num": null,
"type_str": "figure",
"text": "rank of b in the neighborhood of b *X-axis labels indicate lower boundary of the corresponding similarity/rank bins.",
"uris": null
},
"FIGREF10": {
"num": null,
"type_str": "figure",
"text": "Accuracy of 3CosMul method on GloVe model vs characteristics of the vector space.",
"uris": null
},
"FIGREF11": {
"num": null,
"type_str": "figure",
"text": "Accuracy of LRCos method vs similarity between vectors b and b",
"uris": null
},
"FIGREF12": {
"num": null,
"type_str": "figure",
"text": "5 (( \u2212 \u2212\u2212\u2212\u2212\u2212 \u2192 remarry \u2212 \u2212 \u2212\u2212\u2212 \u2192 marry) + \u2212 \u2212\u2212 \u2192 write) makes some sense, but (( \u2212 \u2212\u2212 \u2192 write \u2212 \u2212 \u2212\u2212\u2212 \u2192 marry) + \u2212 \u2212\u2212\u2212\u2212\u2212 \u2192 remarry) does not. 6 Is the \u2212\u2212\u2192 man \u2212 \u2212 \u2212\u2212\u2212\u2212 \u2192",
"uris": null
},
"FIGREF15": {
"num": null,
"type_str": "figure",
"text": "Similarity between vectors b and bA.4 Comparison between 3CosAdd, 3CosMul and LRCos on GloVe",
"uris": null
},
"FIGREF16": {
"num": null,
"type_str": "figure",
"text": "",
"uris": null
},
"TABREF1": {
"html": null,
"text": "",
"type_str": "table",
"content": "<table><tr><td>: Resul</td></tr><tr><td>different word</td></tr><tr><td>Method</td></tr><tr><td>RNN-80</td></tr><tr><td>CW-50</td></tr><tr><td>CW-100</td></tr><tr><td>HLBL-50</td></tr><tr><td>HLBL-100</td></tr></table>",
"num": null
},
"TABREF2": {
"html": null,
"text": "",
"type_str": "table",
"content": "<table><tr><td>: Comp</td></tr><tr><td>lobert and We</td></tr><tr><td>Log-Bilinear m</td></tr><tr><td>questions. Tur</td></tr><tr><td>tors do poorly</td></tr><tr><td>Log-Bilinear</td></tr><tr><td>2009) do ess</td></tr><tr><td>These represe</td></tr><tr><td>of data and th</td></tr><tr><td>the HLBL me</td></tr></table>",
"num": null
},
"TABREF3": {
"html": null,
"text": "",
"type_str": "table",
"content": "<table><tr><td>lists examples of each BATS category:</td></tr><tr><td>there are 50 word pairs for each of 40 linguistic</td></tr></table>",
"num": null
},
"TABREF4": {
"html": null,
"text": "",
"type_str": "table",
"content": "<table/>",
"num": null
},
"TABREF6": {
"html": null,
"text": "3CosAdd: effect of various a : a vector pairs with the same b : b pair ( \u2212 \u2212\u2212\u2212 \u2192",
"type_str": "table",
"content": "<table><tr><td>marry: \u2212 \u2212\u2212\u2212\u2212\u2212 \u2192 remarry)</td></tr><tr><td>No a 1 acquire reacquire marry fianc\u00e9e a b predicted vector 2 tell retell marry betrothed 0.51 0.49 Sim. score correct b score 0.54 &lt;0.51 3 engage reengage marry eloped 0.52 0.51 4 appear reappear marry marries 0.65 0.55 5 establish reestablish marry marries 0.58 0.52 6 invest reinvest marry marries 0.59 0.57 7 adjust readjust marry marrying 0.59 0.55 8 arrange rearrange marry marrying 0.52 0.43 9 discover rediscover marry marrying 0.54 0.49 10 apply reapply marry remarry 0.53 0.53</td></tr></table>",
"num": null
}
}
}
}