{
"paper_id": "P17-1007",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:16:54.616555Z"
},
"title": "Skip-Gram -Zipf + Uniform = Vector Additivity",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Gittens",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Rensselaer Polytechnic Institute",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Dimitris",
"middle": [],
"last": "Achlioptas",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Michael",
"middle": [
"W"
],
"last": "Mahoney",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In recent years word-embedding models have gained great popularity due to their remarkable performance on several tasks, including word analogy questions and caption generation. An unexpected \"sideeffect\" of such models is that their vectors often exhibit compositionality, i.e., adding two word-vectors results in a vector that is only a small angle away from the vector of a word representing the semantic composite of the original words, e.g., \"man\" + \"royal\" = \"king\".",
"pdf_parse": {
"paper_id": "P17-1007",
"_pdf_hash": "",
"abstract": [
{
"text": "In recent years word-embedding models have gained great popularity due to their remarkable performance on several tasks, including word analogy questions and caption generation. An unexpected \"sideeffect\" of such models is that their vectors often exhibit compositionality, i.e., adding two word-vectors results in a vector that is only a small angle away from the vector of a word representing the semantic composite of the original words, e.g., \"man\" + \"royal\" = \"king\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "This work provides a theoretical justification for the presence of additive compositionality in word vectors learned using the Skip-Gram model. In particular, it shows that additive compositionality holds in an even stricter sense (small distance rather than small angle) under certain assumptions on the process generating the corpus. As a corollary, it explains the success of vector calculus in solving word analogies. When these assumptions do not hold, this work describes the correct non-linear composition operator.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Finally, this work establishes a connection between the Skip-Gram model and the Sufficient Dimensionality Reduction (SDR) framework of Globerson and Tishby: the parameters of SDR models can be obtained from those of Skip-Gram models simply by adding information on symbol frequencies. This shows that Skip-Gram embeddings are optimal in the sense of Globerson and Tishby and, further, im- plies that the heuristics commonly used to approximately fit Skip-Gram models can be used to fit SDR models.",
"cite_spans": [
{
"start": 350,
"end": 388,
"text": "Globerson and Tishby and, further, im-",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The strategy of representing words as vectors has a long history in computational linguistics and machine learning. The general idea is to find a map from words to vectors such that wordsimilarity and vector-similarity are in correspondence. Whilst vector-similarity can be readily quantified in terms of distances and angles, quantifying word-similarity is a more ambiguous task. A key insight in that regard is to posit that the meaning of a word is captured by \"the company it keeps\" (Firth, 1957) and, therefore, that two words that keep company with similar words are likely to be similar themselves.",
"cite_spans": [
{
"start": 487,
"end": 500,
"text": "(Firth, 1957)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the simplest case, one seeks vectors whose inner products approximate the co-occurrence frequencies. In more sophisticated methods cooccurrences are reweighed to suppress the effect of more frequent words (Rohde et al., 2006) and/or to emphasize pairs of words whose co-occurrence frequency maximally deviates from the independence assumption (Church and Hanks, 1990 ).",
"cite_spans": [
{
"start": 208,
"end": 228,
"text": "(Rohde et al., 2006)",
"ref_id": "BIBREF15"
},
{
"start": 346,
"end": 369,
"text": "(Church and Hanks, 1990",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "An alternative to seeking word-embeddings that reflect co-occurrence statistics is to extract the vectorial representation of words from non-linear statistical language models, specifically neural networks. (Bengio et al., 2003) already proposed (i) associating with each vocabulary word a feature vector, (ii) expressing the probability function of word sequences in terms of the feature vectors of the words in the sequence, and (iii) learning simultaneously the vectors and the parameters of the probability function. This approach came into prominence recently through works of Mikolov et al. (see below) whose main departure from (Bengio et al., 2003) was to follow the suggestion of (Mnih and Hinton, 2007) and tradeaway the expressive capacity of general neuralnetwork models for the scalability (to very large corpora) afforded by (the more restricted class of) log-linear models.",
"cite_spans": [
{
"start": 207,
"end": 228,
"text": "(Bengio et al., 2003)",
"ref_id": "BIBREF2"
},
{
"start": 582,
"end": 608,
"text": "Mikolov et al. (see below)",
"ref_id": null
},
{
"start": 635,
"end": 656,
"text": "(Bengio et al., 2003)",
"ref_id": "BIBREF2"
},
{
"start": 689,
"end": 712,
"text": "(Mnih and Hinton, 2007)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "An unexpected side effect of deriving wordembeddings via neural networks is that the wordvectors produced appear to enjoy (approximate) additive compositionality: adding two wordvectors often results in a vector whose nearest word-vector belongs to the word capturing the composition of the added words, e.g., \"man\" + \"royal\" = \"king\" (Mikolov et al., 2013c) . This unexpected property allows one to use these vectors to answer word-analogy questions algebraically, e.g., answering the question \"Man is to king as woman is to \" by returning the word whose word-vector is nearest to the vector",
"cite_spans": [
{
"start": 335,
"end": 358,
"text": "(Mikolov et al., 2013c)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "v(king) -v(man) + v(woman).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work we focus on explaining the source of this phenomenon for the most prominent such model, namely the Skip-Gram model introduced in (Mikolov et al., 2013a) . The Skip-Gram model learns vector representations of words based on their patterns of co-occurrence in the training corpus as follows: it assigns to each word c in the vocabulary V , a \"context\" and a \"target\" vector, respectively u c and v c , which are to be used in order to predict the words that appear around each occurrence of c within a window of \u2206 tokens. Specifically, the log probability of any target word w to occur at any position within distance \u2206 of a context word c is taken to be proportional to the inner product between u c and v w , i.e., letting",
"cite_spans": [
{
"start": 142,
"end": 165,
"text": "(Mikolov et al., 2013a)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "n = |V |, p(w|c) = e u T c vw n i=1 e u T c v i .",
"eq_num": "(1)"
}
],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Further, Skip-Gram assumes that the conditional probability of each possible set of words in a window around a context word c factorizes as the product of the respective conditional probabilities:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "p(w \u2212\u2206 , . . . , w \u2206 |c) = \u2206 \u03b4=\u2212\u2206 \u03b4 =0 p(w \u03b4 |c).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(2) (Mikolov et al., 2013a) proposed learning the Skip-Gram parameters on a training corpus by using maximum likelihood estimation under (1) and (2). Thus, if w i denotes the i-th word in the training corpus and T the length of the corpus, we seek the word vectors that maximize",
"cite_spans": [
{
"start": 4,
"end": 27,
"text": "(Mikolov et al., 2013a)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "1 T T i=1 \u2206 \u03b4=\u2212\u2206 \u03b4 =0 log p(w i+\u03b4 |w i ) .",
"eq_num": "(3)"
}
],
"section": "Introduction",
"sec_num": "1"
},
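{
"text": "As an illustration that is not part of the original paper, the following minimal numpy sketch makes equations (1) and (3) concrete; the vocabulary, the toy corpus, and the randomly initialized matrices U (context vectors) and V (target vectors) are all assumptions of the example.\n\nimport numpy as np\n\n# Illustrative sketch (not from the paper): toy Skip-Gram quantities.\nrng = np.random.default_rng(0)\nvocab = ['man', 'royal', 'king', 'woman', 'capital', 'hanoi']\nn, d = len(vocab), 4\nU = rng.normal(size=(n, d))   # context vectors u_c (rows)\nV = rng.normal(size=(n, d))   # target vectors v_w (rows)\n\ndef p_w_given_c(c):\n    # Equation (1): p(w|c) is a softmax over the inner products u_c^T v_w.\n    scores = V @ U[c]\n    e = np.exp(scores - scores.max())\n    return e / e.sum()\n\ndef average_log_likelihood(corpus, window=2):\n    # Equation (3): average log p(w_{i+delta} | w_i) over the corpus, with window Delta = 2.\n    total = 0.0\n    for i, c in enumerate(corpus):\n        p = p_w_given_c(c)\n        for delta in range(-window, window + 1):\n            j = i + delta\n            if delta != 0 and 0 <= j < len(corpus):\n                total += np.log(p[corpus[j]])\n    return total / len(corpus)\n\ncorpus = [vocab.index(w) for w in ['man', 'royal', 'king', 'woman', 'capital', 'hanoi']]\nprint(average_log_likelihood(corpus))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},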
{
"text": "As mentioned, the normalized context vectors obtained from maximizing (3) under (1) and (2) exhibit additive compositionality. For example, the cosine distance between the sum of the context vectors of the words \"Vietnam\" and \"capital\" and the context vector of the word \"Hanoi\" is small.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "While there has been much interest in using algebraic operations on word vectors to carry out semantic operations like composition, and mathematically-flavored explanations have been offered (e.g., in the recent work (Paperno and Baroni, 2016) ), the only published work which attempts a rigorous theoretical understanding of this phenomenon is (Arora et al., 2016) . This work guarantees that word vectors can be recovered by factorizing the so-called PMI matrix, and that algebraic operations on these word vectors can be used to solve analogies, under certain conditions on the process that generated the training corpus. Specifically, the word vectors must be known a priori, before their recovery, and to have been generated by randomly scaling uniformly sampled vectors from the unit sphere 1 . Further, the ith word in the corpus must have been selected with probability proportional to e u T w c i , where the \"discourse\" vector c i governs the topic of the corpus at the ith word. Finally, the discourse vector is assumed to evolve according to a random walk on the unit sphere that has a uniform stationary distribution.",
"cite_spans": [
{
"start": 217,
"end": 243,
"text": "(Paperno and Baroni, 2016)",
"ref_id": "BIBREF12"
},
{
"start": 345,
"end": 365,
"text": "(Arora et al., 2016)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "By way of contrast, our results assume nothing a priori about the properties of the word vectors. In fact, the connection we establish between the Skip-Gram and the Sufficient Dimensionality Reduction model of (Globerson and Tishby, 2003) shows that the word vectors learned by Skip-Gram are information-theoretically optimal. Further, the context word c in the Skip-Gram model essentially serves the role that the discourse vector does in the PMI model of (Arora et al., 2016) : the words neighboring c are selected with probability proportional to e u T c vw . We find the exact non-linear composition operator when no assumptions are made on the context word. When an analogous assumption to that of (Arora et al., 2016) is made, that the context words are uniformly distributed, we prove that the composition operator reduces to vector addition.",
"cite_spans": [
{
"start": 210,
"end": 238,
"text": "(Globerson and Tishby, 2003)",
"ref_id": "BIBREF5"
},
{
"start": 457,
"end": 477,
"text": "(Arora et al., 2016)",
"ref_id": "BIBREF0"
},
{
"start": 703,
"end": 723,
"text": "(Arora et al., 2016)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "While our primary motivation has been to provide a better theoretical understanding of word compositionality in the popular Skip-Gram model, our connection with the SDR method illuminates a much more general point about the practical applicability of the Skip-Gram model. In particular, it addresses the question of whether, for a given corpus, fitting a Skip-Gram model will give good embeddings. Even if we are making reasonable linguistic assumptions about how to model words and the interdependencies of words in a corpus, it's not clear that these have to hold universally on all corpuses to which we apply Skip-Gram. However, the fact that when we fit a Skip-Gram model we are fitting an SDR model (up to frequency information), and the fact that SDR models are information-theoretically optimal in a certain sense, argues that regardless of whether the Skip-Gram assumptions hold, Skip-Gram always gives us optimal features in the following sense: the learned context embeddings and target embeddings preserve the maximal amount of mutual information between any pair of random variables X and Y consistent with the observed co-occurence matrix, where Y is the target word and X is the predictor word (in a min-max sense, since there are many ways of coupling X and Y , each of which may have different amounts of mutual information). Importantly, this statement requires no assumptions on the distribution P (X, Y ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this section, we first give a mathematical formulation of the intuitive notion of compositionality of words. We then prove that the composition operator for the Skip-Gram model in full generality is a non-linear function of the vectors of the words being composed. Under a single simplifying assumption, the operator linearizes and reduces to the addition of the word vectors. Finally, we explain how linear compositionality allows for solving word analogies with vector algebra.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compositionality of Skip-Gram",
"sec_num": "2"
},
{
"text": "A natural way of capturing the compositionality of words is to say that the set of context words c 1 , . . . , c m has the same meaning as the single word c if for every other word w,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compositionality of Skip-Gram",
"sec_num": "2"
},
{
"text": "p(w|c 1 , . . . , c m ) = p(w|c) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compositionality of Skip-Gram",
"sec_num": "2"
},
{
"text": "Although this is an intuitively satisfying definition, we never expect it to hold exactly; instead, we replace exact equality with the minimization of KL-divergence. That is, we state that the best candidate for having the same meaning as the set of context words C is the word",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compositionality of Skip-Gram",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "arg min c\u2208V D KL (p(\u2022|C) | p(\u2022|c)) .",
"eq_num": "(4)"
}
],
"section": "Compositionality of Skip-Gram",
"sec_num": "2"
},
{
"text": "We refer to any vector that minimizes (4) as a paraphrase of the set of words C. There are two natural concerns with (4). The first is that, in general, it is not clear how to define p(\u2022|C). The second is that KL-divergence minimization is a hard problem, as it involves optimization over many high dimensional probability distributions. Our main result shows that both of these problems go away for any language model that satisfies the following two assumptions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compositionality of Skip-Gram",
"sec_num": "2"
},
{
"text": "A1. For every word c, there exists Z c such that for every word w,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compositionality of Skip-Gram",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(w|c) = 1 Z c exp(u T c v w ) .",
"eq_num": "(5)"
}
],
"section": "Compositionality of Skip-Gram",
"sec_num": "2"
},
{
"text": "A2. For every set of words C = {c 1 , c 2 , . . . , c m }, there exists Z C such that for every word w,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compositionality of Skip-Gram",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(w|C) = p(w) 1\u2212m Z C m i=1 p(w|c i ) .",
"eq_num": "(6)"
}
],
"section": "Compositionality of Skip-Gram",
"sec_num": "2"
},
{
"text": "Clearly, the Skip-Gram model satisfies A1 by definition. We prove that it also satisfies A2 when m \u2264 \u2206 (Lemma 1). Next, we state a theorem that holds for any model satisfying assumptions A1 and A2, including the Skip-Gram model when m \u2264 \u2206. Theorem 1. In every word model that satisfies A1 and A2, for every set of words",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compositionality of Skip-Gram",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "C = {c 1 , . . . , c m }, any paraphase c of C satisfies w\u2208V p(w|c)v w = w\u2208V p(w|C)v w .",
"eq_num": "(7)"
}
],
"section": "Compositionality of Skip-Gram",
"sec_num": "2"
},
{
"text": "Theorem 1 characterizes the composition operator for any language model which satisfies our two assumptions; in general, this operator is not addition. Instead, a paraphrase c is a vector such that the average word vector under p(\u2022|c) matches that under p(\u2022|C). When the expectations in (7) can be computed, the composition operator can be implemented by solving a non-linear system of equations to find a vector u for which the left-hand side of (7) equals the right-hand side.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compositionality of Skip-Gram",
"sec_num": "2"
},
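{
"text": "The following sketch (ours, not the paper's) shows one way to implement the non-linear composition operator just described: since the objective whose gradient is the difference of the two expectations in (7) is concave, a paraphrase vector u can be found by simple gradient ascent. The arrays U and V, the prior p over words, the step size, and the iteration count are all assumptions of the example.\n\nimport numpy as np\n\ndef softmax(s):\n    e = np.exp(s - s.max())\n    return e / e.sum()\n\ndef compose(context_ids, U, V, p, steps=2000, lr=0.1):\n    # Illustrative sketch (not from the paper).\n    m = len(context_ids)\n    u_C = U[context_ids].sum(axis=0)\n    # p(.|C) under A1 and A2: proportional to p(w)^(1-m) exp(u_C^T v_w).\n    p_wC = softmax(V @ u_C + (1 - m) * np.log(p))\n    target_mean = p_wC @ V                    # right-hand side of (7)\n    u = u_C.copy()                            # start from the additive composition\n    for _ in range(steps):\n        model_mean = softmax(V @ u) @ V       # left-hand side of (7)\n        u += lr * (target_mean - model_mean)  # ascend the concave objective\n    return u",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compositionality of Skip-Gram",
"sec_num": "2"
},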
{
"text": "Our next result proves that although the composition operator is nontrivial in the general case, to recover vector addition as the composition operator, it suffices to assume that the word frequency is uniform.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compositionality of Skip-Gram",
"sec_num": "2"
},
{
"text": "Theorem 2. In every word model that satisfies A1, A2, and where p(w) = 1/|V | for every w \u2208 V , the paraphrase of",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compositionality of Skip-Gram",
"sec_num": "2"
},
{
"text": "C = {c 1 , . . . , c m } is u 1 + . . . + u m .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compositionality of Skip-Gram",
"sec_num": "2"
},
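{
"text": "A quick numerical sanity check of Theorem 2 (our own sketch, not from the paper; the random toy matrices U and V and the chosen index set C are assumptions) verifies that, under A2 with a uniform prior, p(·|C) coincides exactly with the Skip-Gram distribution induced by the summed context vector, so the KL-divergence in (4) is driven to zero by u_1 + . . . + u_m.\n\nimport numpy as np\n\ndef softmax(s):\n    e = np.exp(s - s.max())\n    return e / e.sum()\n\nrng = np.random.default_rng(1)\nU = rng.normal(size=(8, 4))\nV = rng.normal(size=(8, 4))\nC = [0, 3, 5]                                    # indices of the composed context words\n\np_each = [softmax(V @ U[c]) for c in C]          # p(.|c_i) for each context word\np_wC = np.prod(p_each, axis=0)                   # A2 with p(w) = 1/|V|: product of conditionals\np_wC = p_wC / p_wC.sum()\np_sum = softmax(V @ U[C].sum(axis=0))            # Skip-Gram distribution of u_1 + ... + u_m\nprint(np.allclose(p_wC, p_sum))                  # True",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compositionality of Skip-Gram",
"sec_num": "2"
},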
{
"text": "As word frequencies are typically much closer to a Zipf distribution (Piantadosi, 2014), the uniformity assumption of Theorem 2 is not realistic. That said, we feel it is important to point out that, as reported in (Mikolov et al., 2013b) , additivity captures compositionality more accurately when the training set is manipulated so that the prior distribution of the words is made closer to uniform.",
"cite_spans": [
{
"start": 215,
"end": 238,
"text": "(Mikolov et al., 2013b)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Compositionality of Skip-Gram",
"sec_num": "2"
},
{
"text": "Using composition to solve analogies. It has been observed that word vectors trained using nonlinear models like Skip-Gram tend to encode semantic relationships between words as linear relationships between the word vectors (Mikolov et al., 2013b; Pennington et al., 2014; Levy and Goldberg, 2014) . In particular, analogies of the form \"man:woman::king:?\" can often be solved by taking ? to be the word in the vocabulary whose context vector has the smallest angle with u woman + (u king \u2212 u man ). Theorems 1 and 2 offer insight into the solution such analogy questions.",
"cite_spans": [
{
"start": 224,
"end": 247,
"text": "(Mikolov et al., 2013b;",
"ref_id": "BIBREF9"
},
{
"start": 248,
"end": 272,
"text": "Pennington et al., 2014;",
"ref_id": "BIBREF13"
},
{
"start": 273,
"end": 297,
"text": "Levy and Goldberg, 2014)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Compositionality of Skip-Gram",
"sec_num": "2"
},
{
"text": "We first consider solving an analogy of the form \"m:w::k:?\"\" in the case where the composition operator is nonlinear. The fact that m and w share a relationship means m is a paraphrase of the set of words {w, R}, where R is a set of words encoding the relationship between m and w. Similarly, the fact that k and ? share the same relationship means k is a paraphrase of the set of words {?, R}. By Theorem 1, we have that R and ? must satisfy",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compositionality of Skip-Gram",
"sec_num": "2"
},
{
"text": "\u2208V p( |m)v = \u2208V p( |w, R)v and \u2208V p( |k)v = \u2208V p( |?, R)v .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compositionality of Skip-Gram",
"sec_num": "2"
},
{
"text": "We see that solving analogies when the composition operator is nonlinear requires the solution of two highly nonlinear systems of equations. In sharp contrast, when the composition operator is linear, the solution of analogies delightfully reduces to elementary vector algebra. To see this, we again begin with the assertion that the fact that m and w share a relationship means m is a paraphrase of the set of words {w, R}; Similarly, k is a paraphrase of {?, R}. By Theorem 2, u m = u w + u r and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compositionality of Skip-Gram",
"sec_num": "2"
},
{
"text": "u k = u ? + u r ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compositionality of Skip-Gram",
"sec_num": "2"
},
{
"text": "which gives the expected relationship",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compositionality of Skip-Gram",
"sec_num": "2"
},
{
"text": "u ? = u k + (u w \u2212 u m ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compositionality of Skip-Gram",
"sec_num": "2"
},
{
"text": "Note that because this expression for u ? is in terms of k, w, and m, there is actually no need to assume that R is a set of actual words in V .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compositionality of Skip-Gram",
"sec_num": "2"
},
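{
"text": "For concreteness, the linear analogy recipe above can be written in a few lines; this sketch is our own illustration (the matrix U of context vectors and the list vocab are assumed inputs), and it projects u_k + (u_w - u_m) back onto the vocabulary by cosine similarity, the angle-based projection discussed later in the paper.\n\nimport numpy as np\n\ndef solve_analogy(m, w, k, U, vocab):\n    # Illustrative sketch (not from the paper): 'm is to w as k is to ?'.\n    idx = {word: i for i, word in enumerate(vocab)}\n    query = U[idx[k]] + (U[idx[w]] - U[idx[m]])\n    sims = (U @ query) / (np.linalg.norm(U, axis=1) * np.linalg.norm(query) + 1e-12)\n    for i in np.argsort(-sims):\n        if vocab[i] not in (m, w, k):            # exclude the three query words\n            return vocab[i]\n\n# e.g. solve_analogy('man', 'woman', 'king', U, vocab) should ideally return 'queen'.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compositionality of Skip-Gram",
"sec_num": "2"
},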
{
"text": "Proof of Theorem 1. Note that p(w|C) equals",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proofs",
"sec_num": "2.1"
},
{
"text": "p(w) 1\u2212m Z C m i=1 p(w|c i ) = p(w) 1\u2212m Z C exp m i=1 u T c i v w \u2212 m i=1 log Z c i = 1 Z p(w) 1\u2212m exp(u T C v w ) ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proofs",
"sec_num": "2.1"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proofs",
"sec_num": "2.1"
},
{
"text": "Z = Z C m i=1 Z i , and u C = m i=1 u i . Minimizing the KL-divergence D KL (p(\u2022|c 1 , . . . , c m ) p(\u2022|c))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proofs",
"sec_num": "2.1"
},
{
"text": "as a function of c is equivalent to maximizing the negative cross-entropy as a function of u c , i.e., as maximizing",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proofs",
"sec_num": "2.1"
},
{
"text": "Q(u c ) = Z w exp(u T C v w ) p(w) m\u22121 (u T c v w \u2212 log Z c ) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proofs",
"sec_num": "2.1"
},
{
"text": "Since Q is concave, the maximizers occur where its gradient vanishes. As \u2207 uc Q equals",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proofs",
"sec_num": "2.1"
},
{
"text": "Z w exp(u T C v w ) p(w) m\u22121 v w \u2212 n =1 exp(u T c v )v n k=1 exp(u T c v k ) = n =1 exp(u T c v )v n k=1 exp(u T c v k ) \u2212 Z w exp(u T C v w )v w p(w) m\u22121 = w\u2208V p(w|c)v w \u2212 w\u2208V p(w|c 1 , . . . , c m )v w ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proofs",
"sec_num": "2.1"
},
{
"text": "we see that (7) follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proofs",
"sec_num": "2.1"
},
{
"text": "C = m i=1 u i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proof of Theorem 2. Recall that u",
"sec_num": null
},
{
"text": "When p(w) = 1/|V | for all w \u2208 V , the negative cross-entropy simplifies to",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proof of Theorem 2. Recall that u",
"sec_num": null
},
{
"text": "Q(u c ) = Z w exp u T C v w (u T c v w \u2212 log Z c ) ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proof of Theorem 2. Recall that u",
"sec_num": null
},
{
"text": "and its gradient \u2207 uc Q to",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proof of Theorem 2. Recall that u",
"sec_num": null
},
{
"text": "Z w exp(u C T v w ) v w \u2212 n =1 exp(u T c v )v n k=1 exp(u T c v k ) = Z w exp(u C T v w )v w \u2212 w exp(u T c v w )v w .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proof of Theorem 2. Recall that u",
"sec_num": null
},
{
"text": "Thus, \u2207Q(u C ) = 0 and since Q is concave, u C is its unique maximizer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proof of Theorem 2. Recall that u",
"sec_num": null
},
{
"text": "Lemma 1. The Skip-Gram model satisfies assumption A2 when m \u2264 \u2206.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proof of Theorem 2. Recall that u",
"sec_num": null
},
{
"text": "Proof of Lemma 1. First, assume that m = \u2206. In the Skip-Gram model target words are conditionally independent given a context word, i.e.,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proof of Theorem 2. Recall that u",
"sec_num": null
},
{
"text": "p(c 1 , . . . , c m |w) = m i=1 p(c i |w).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proof of Theorem 2. Recall that u",
"sec_num": null
},
{
"text": "Applying Baye's rule,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proof of Theorem 2. Recall that u",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(w|c 1 , . . . , c m ) = p(c 1 , . . . , c m |w)p(w) p(c 1 , . . . , c m ) = p(w) p(c 1 , . . . , c m ) m i=1 p(c i |w) = p(w) p(c 1 , . . . , c m ) m i=1 p(w|c i )p(c i ) p(w) = p(w) 1\u2212m Z C m i=1 p(w|c i ) ,",
"eq_num": "(8)"
}
],
"section": "Proof of Theorem 2. Recall that u",
"sec_num": null
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proof of Theorem 2. Recall that u",
"sec_num": null
},
{
"text": "Z C = 1/ ( m i=1 p(c i ))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proof of Theorem 2. Recall that u",
"sec_num": null
},
{
"text": ". This establishes the result when m = \u2206. The cases m < \u2206 follow by marginalizing out \u2206 \u2212 m context words in the equality (8).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proof of Theorem 2. Recall that u",
"sec_num": null
},
{
"text": "Theorem 2 states that if there is a word c in the vocabulary V whose context vector equals the sum of the context vectors of the words c 1 , . . . , c m , then c has the same \"meaning\", in the sense of (4), as the composition of the words c 1 , . . . , c m . For any given set of words C = {c 1 , . . . , c m }, it is unlikely that there exists a word c \u2208 V whose context vector is exactly equal to the sum of the context vectors of the words c 1 , . . . , c m . Similarly, in Theorem 1, the solution(s) to (7) will most likely not equal the context vector of any word in V . In both cases, we thus need to project the vector(s) onto words in our vocabulary in some manner.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Projection of paraphrases onto the vocabulary",
"sec_num": null
},
{
"text": "Since Theorem 1 holds for any prior over V , in theory, we could enumerate all words in V and find the word(s) that minimize the difference of the left hand side of (7) from the right hand side. In practice, it turns out that the angle between the context vector of a word w \u2208 V and solutionvector(s) is a good proxy and one gets very good experimental results by selecting as the paraphrase of a collection of words, the word that minimizes the angle to the paraphrase vector.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Projection of paraphrases onto the vocabulary",
"sec_num": null
},
{
"text": "Minimizing the angle has been empirically successful at capturing composition in multiple loglinear word models. One way to understand the success of this approach is to recall that each word c is characterized by a categorical distribution over all other words w, as stated in (1). The peaks of this categorical distribution are precisely the words with which c co-occurs most often. These words characterize c more than all the other words in the vocabulary, so it is reasonable to expect that a word c whose categorical distribution has similar peaks as the categorical distribution of c is similar in meaning to c. Note that the location of the peaks of p(\u2022|c) are immune to the scaling of u c (athough the values of p(\u2022|c) may change); thus, the words w which best characterize c are those for which v w has a high inner product with u c / u c 2 . Since",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Projection of paraphrases onto the vocabulary",
"sec_num": null
},
{
"text": "u T c v w u c 2 \u2212 u T c v w u c 2 \u2264 2 1 \u2212 u T c u c u c 2 u c 2 v w 2 ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Projection of paraphrases onto the vocabulary",
"sec_num": null
},
{
"text": "it is clear that if the angle between the context representations of c and c is small, the distributions p(w|c) and p(w|c ) will tend to have similar peaks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Projection of paraphrases onto the vocabulary",
"sec_num": null
},
{
"text": "The Skip-Gram model assumes that the distribution of the neighbors of a word follows a specific exponential parametrization of a categorical distribution. There is empirical evidence that this model generates features that are useful for NLP tasks, but there is no a priori guarantee that the training corpus was generated in this manner. In this section, we provide theoretical support for the usefulness of the features learned even when the Skip-Gram model is misspecified.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Skip-Gram learns a Sufficient Dimensionality Reduction Model",
"sec_num": "3"
},
{
"text": "To do so, we draw a connection between Skip-Gram and the Sufficient Dimensionality Reduction (SDR) factorization of Globerson and Tishby (Globerson and Tishby, 2003) . The SDR model learns optimal 2 embeddings for discrete random variables X and Y without assuming any parametric form on the distributions of X and Y , and it is useful in a variety of applications, including information retrieval, document classification, and association analysis (Globerson and Tishby, 2003) . As it turns out, these embeddings, like Skip-Gram, are obtained by learning the parameters of an exponentially parameterized distribution. In Theorem 3 below, we show that if a Skip-Gram model is fit to the cooccurence statistics of X and Y , then the output can be trivially modified (by adding readily-available information on word frequencies) to obtain the parameters of an SDR model. This connection is significant for two reasons: first, the original algorithm of (Globerson and Tishby, 2003) for learning SDR embeddings is expensive, as it involves information projections. Theorem 3 shows that if one can efficiently fit a Skip-Gram model, then one can efficiently fit an SDR model. This implies that Skip-Gram specific approximation heuristics like negativesampling, hierarchical softmax, and Glove, which are believed to return high-quality approximations to Skip-Gram parameters (Mikolov et al., 2013b; Pennington et al., 2014) , can be used to efficiently approximate SDR model parameters. Second, (Globerson and Tishby, 2003) argues for the optimality of the SDR embedding in any domain where the training information on X and Y consists of their coocurrence statistics; this optimality and the Skip-Gram/SDR connection argues for the use of Skip-Gram approximations in such domains, and supports the positive experimental results that have been observed in applications in network science (Grover and Leskovec, 2016) , proteinomics (Asgari and Mofrad, 2015) , and other fields.",
"cite_spans": [
{
"start": 116,
"end": 165,
"text": "Globerson and Tishby (Globerson and Tishby, 2003)",
"ref_id": "BIBREF5"
},
{
"start": 449,
"end": 477,
"text": "(Globerson and Tishby, 2003)",
"ref_id": "BIBREF5"
},
{
"start": 950,
"end": 978,
"text": "(Globerson and Tishby, 2003)",
"ref_id": "BIBREF5"
},
{
"start": 1370,
"end": 1393,
"text": "(Mikolov et al., 2013b;",
"ref_id": "BIBREF9"
},
{
"start": 1394,
"end": 1418,
"text": "Pennington et al., 2014)",
"ref_id": "BIBREF13"
},
{
"start": 1490,
"end": 1518,
"text": "(Globerson and Tishby, 2003)",
"ref_id": "BIBREF5"
},
{
"start": 1883,
"end": 1910,
"text": "(Grover and Leskovec, 2016)",
"ref_id": "BIBREF6"
},
{
"start": 1926,
"end": 1951,
"text": "(Asgari and Mofrad, 2015)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Skip-Gram learns a Sufficient Dimensionality Reduction Model",
"sec_num": "3"
},
{
"text": "As stated above, the SDR factorization solves the problem of finding information-theoretically optimal features, given co-occurrence statistics for a pair of discrete random variables X and Y . Associate a vector w i to the ith state of X, a vector h j to the jth state of Y , and let",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Skip-Gram learns a Sufficient Dimensionality Reduction Model",
"sec_num": "3"
},
{
"text": "W = [w T 1 \u2022 \u2022 \u2022 w T |X| ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Skip-Gram learns a Sufficient Dimensionality Reduction Model",
"sec_num": "3"
},
{
"text": "T and H be defined similarly. Globerson and Tishby show that such optimal features can be obtained from a low-rank factoriza-tion of the matrix G of co-occurence measurements: G ij counts the number of times state i of X has been observed to co-occur with state j of Y. The loss of this factorization is measured using the KL-divergence, and so the optimal features are obtained from solving the problem",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Skip-Gram learns a Sufficient Dimensionality Reduction Model",
"sec_num": "3"
},
{
"text": "arg min W,H D KL G Z G 1 Z W,H e WH T .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Skip-Gram learns a Sufficient Dimensionality Reduction Model",
"sec_num": "3"
},
{
"text": "Here, Z G = ij G ij normalizes G into an estimate of the joint pmf of X and Y , and similarly Z W,H is the constant that normalizes e WH T into a joint pmf. The expression e WH T denotes entrywise exponentiation of WH T . Now we revisit the Skip-Gram training objective, and show that it differs from the SDR objective only slightly. Whereas the SDR objective measures the distance between the pmfs given by (normalized versions of) G and e WH T , the Skip-Gram objective measures the distance between the pmfs given by (normalized versions of) the rows of G and e WH T . That is, SDR emphasizes fitting the entire pmfs, while Skip-Gram emphasizes fitting conditional distributions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Skip-Gram learns a Sufficient Dimensionality Reduction Model",
"sec_num": "3"
},
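{
"text": "To make the contrast concrete, the sketch below (ours, not the paper's; the co-occurrence matrix G and the factors W, H are assumed inputs) evaluates both losses: the SDR objective compares normalized joint matrices, while the Skip-Gram objective of Lemma 2 below is a frequency-weighted sum of row-wise KL-divergences.\n\nimport numpy as np\n\ndef kl(p, q):\n    return float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))))\n\ndef sdr_loss(G, W, H):\n    # D_KL( G / Z_G || e^{W H^T} / Z_{W,H} ): fit of the entire joint pmf.\n    P = G / G.sum()\n    Q = np.exp(W @ H.T)\n    return kl(P.ravel(), (Q / Q.sum()).ravel())\n\ndef skipgram_loss(G, W, H):\n    # sum_c g_c D_KL( g_hat_c || softmax(W[c] H^T) ): fit of the conditional rows.\n    g = G.sum(axis=1) / G.sum()\n    loss = 0.0\n    for c in range(G.shape[0]):\n        g_hat_c = G[c] / G[c].sum()\n        q_c = np.exp(W[c] @ H.T)\n        loss += g[c] * kl(g_hat_c, q_c / q_c.sum())\n    return loss",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Skip-Gram learns a Sufficient Dimensionality Reduction Model",
"sec_num": "3"
},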
{
"text": "Before presenting our main result, we state and prove the following lemma, which is of independent interest and is used in the proof of our main theorem. Recall that Skip-Gram represents each word c as a multinomial distribution over all other words w, and it learns the parameters for these distributions by a maximum likelihood estimation. It is known that learning model parameters by maximum likelihood estimation is equivalent to minimizing the KL-divergence of the learned model from the empirical distribution; the following lemma establishes the KL-divergence that Skip-Gram minimizes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Skip-Gram learns a Sufficient Dimensionality Reduction Model",
"sec_num": "3"
},
{
"text": "Lemma 2. Let G be the word co-occurrence matrix constructed from the corpus on which a Skip-Gram model is trained, in which case G cw is the number of times word w occurs as a neighboring word of c in the corpus. For each word c, let g c denote the empirical frequency of the word in the corpus, so that",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Skip-Gram learns a Sufficient Dimensionality Reduction Model",
"sec_num": "3"
},
{
"text": "g c = w G cw / t,w G t,w .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Skip-Gram learns a Sufficient Dimensionality Reduction Model",
"sec_num": "3"
},
{
"text": "Given a positive vector x, letx = x/ x 1 . Then, the Skip-Gram model parameters U =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Skip-Gram learns a Sufficient Dimensionality Reduction Model",
"sec_num": "3"
},
{
"text": "u 1 \u2022 \u2022 \u2022 u |V | T and V = v 1 \u2022 \u2022 \u2022 u |V | T minimize the objective c g c D KL (\u011d c e u T c V T ),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Skip-Gram learns a Sufficient Dimensionality Reduction Model",
"sec_num": "3"
},
{
"text": "where g c is the cth row of G.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Skip-Gram learns a Sufficient Dimensionality Reduction Model",
"sec_num": "3"
},
{
"text": "Proof. Recall that Skip-Gram chooses U and V to maximize",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Skip-Gram learns a Sufficient Dimensionality Reduction Model",
"sec_num": "3"
},
{
"text": "Q = 1 T T i=1 C \u03b4=\u2212C \u03b4 =0 log p(w i+\u03b4 |w i ) ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Skip-Gram learns a Sufficient Dimensionality Reduction Model",
"sec_num": "3"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Skip-Gram learns a Sufficient Dimensionality Reduction Model",
"sec_num": "3"
},
{
"text": "p(w|c) = e u T c vw n i=1 e u T c v i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Skip-Gram learns a Sufficient Dimensionality Reduction Model",
"sec_num": "3"
},
{
"text": "This objective can be rewritten using the pairwise cooccurence statistics as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Skip-Gram learns a Sufficient Dimensionality Reduction Model",
"sec_num": "3"
},
{
"text": "Q= 1 T c,w G cw log p(w|c) = 1 T c t G ct w G cw t G ct log p(w|c) \u221d 1 T c ( t G ct ) ( tw G tw ) w G cw t G ct log p(w|c) = c g c w \u011d c w log p(w|c) = c g c \u2212D KL (\u011d c p(\u2022|c)) \u2212 H(\u011d c ) ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Skip-Gram learns a Sufficient Dimensionality Reduction Model",
"sec_num": "3"
},
{
"text": "where H(\u2022) denotes the entropy of a distribution. It follows that since Skip-Gram maximizes Q, it minimizes",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Skip-Gram learns a Sufficient Dimensionality Reduction Model",
"sec_num": "3"
},
{
"text": "c g c D KL (\u011d c p(\u2022|c))= c g c D KL (\u011d c e u T c V T ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Skip-Gram learns a Sufficient Dimensionality Reduction Model",
"sec_num": "3"
},
{
"text": "We now prove our main theorem of this section, which states that SDR parameters can be obtained by augmenting the Skip-Gram embeddings to account for word frequencies. Theorem 3. Let U, V be the results of fitting a Skip-Gram model to G, and consider the augmented matrices",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Skip-Gram learns a Sufficient Dimensionality Reduction Model",
"sec_num": "3"
},
{
"text": "U = [U | \u03b1] and\u1e7c = [V | 1], where \u03b1 c = log g c w e u T c vw and g c = w G c,w t,w G t,w .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Skip-Gram learns a Sufficient Dimensionality Reduction Model",
"sec_num": "3"
},
{
"text": "Then, the features (\u0168,\u1e7c) constitute a sufficient dimensionality reduction of G.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Skip-Gram learns a Sufficient Dimensionality Reduction Model",
"sec_num": "3"
},
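{
"text": "The augmentation in Theorem 3 is easy to carry out in practice; the sketch below is our own illustration (the fitted matrices U, V and the count matrix G are assumed inputs) of appending the extra column α to U and a column of ones to V.\n\nimport numpy as np\n\ndef skipgram_to_sdr(U, V, G):\n    # Illustrative sketch (not from the paper): build the augmented SDR factors.\n    g = G.sum(axis=1) / G.sum()                       # empirical word frequencies g_c (assumed nonzero)\n    row_partition = np.exp(U @ V.T).sum(axis=1)       # sum_w exp(u_c^T v_w) for each c\n    alpha = np.log(g) - np.log(row_partition)         # alpha_c = log(g_c / sum_w exp(u_c^T v_w))\n    U_tilde = np.hstack([U, alpha[:, None]])          # [U | alpha]\n    V_tilde = np.hstack([V, np.ones((V.shape[0], 1))])  # [V | 1]\n    return U_tilde, V_tilde\n\n# After normalization, exp(U_tilde @ V_tilde.T) has marginals g over context words and the\n# same conditionals p(w|c) as the fitted Skip-Gram model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Skip-Gram learns a Sufficient Dimensionality Reduction Model",
"sec_num": "3"
},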
{
"text": "Proof. For convenience, let G denote the joint pdf matrix G/Z G , and let G denote the matrix obtained by normalizing each row of G to be a probability distribution. Then, it suffices to show that D KL (G q W,H ) is minimized over the set of probability distributions q W,H q W,H (w, c) = 1 Z e WH T cw , when W =\u0168 and H =\u1e7c.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Skip-Gram learns a Sufficient Dimensionality Reduction Model",
"sec_num": "3"
},
{
"text": "To establish this result, we use a chain rule for the KL-divergence. Recall that if we denote the expected KL-divergence between two marginal pmfs by Using this chain rule, we get",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Skip-Gram learns a Sufficient Dimensionality Reduction Model",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "D KL (G q W,H (w, c))",
"eq_num": "(9)"
}
],
"section": "Skip-Gram learns a Sufficient Dimensionality Reduction Model",
"sec_num": "3"
},
{
"text": "=D KL (g q W,H (c))+D KL ( G q W,H (w|c)).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Skip-Gram learns a Sufficient Dimensionality Reduction Model",
"sec_num": "3"
},
{
"text": "Note that the second term in this sum is, in the notation of Lemma 2,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Skip-Gram learns a Sufficient Dimensionality Reduction Model",
"sec_num": "3"
},
{
"text": "D KL ( G q W,H (w|c)) = c g c D KL (\u011d c e w T c H T ),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Skip-Gram learns a Sufficient Dimensionality Reduction Model",
"sec_num": "3"
},
{
"text": "so the matrices U and V that are returned by fitting the Skip-Gram model minimize the second term in this sum. We now show that the augmented matrices W =\u0168 and H =\u1e7c also minimize this second term, and in addition they make the first term vanish. To see that the first of these claims holds, i.e., that the augmented matrices make the second term in (9) vanish, note that q\u0168 ,\u1e7c (w|c) \u221d e\u0169 T c\u1e7dw = e u T c vw+\u03b1c \u221d q U,V (w|c), and the constant of proportionality is independent of w. It follows that q\u0168 ,\u1e7c (w|c) = q U,V (w|c) and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Skip-Gram learns a Sufficient Dimensionality Reduction Model",
"sec_num": "3"
},
{
"text": "D KL ( G q\u0168 ,\u1e7c (w|c)) = D KL ( G q U,V (w|c)).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Skip-Gram learns a Sufficient Dimensionality Reduction Model",
"sec_num": "3"
},
{
"text": "Thus, the choice W =\u0168 and H =\u1e7c minimizes the second term in (9).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Skip-Gram learns a Sufficient Dimensionality Reduction Model",
"sec_num": "3"
},
{
"text": "To see that the augmented matrices make the first term in (9) vanish, observe that when W =\u0168 and H =\u1e7c, we have that q\u0168 ,\u1e7c (c) = g by construction. This can be verified by calculation: The choice W =\u0168 and H =\u1e7c makes the first term in (9) vanish, and it also minimizes the second term in (9). Thus, it follows that the features (\u0168,\u1e7c) constitute a sufficient dimensionality reduction of G.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Skip-Gram learns a Sufficient Dimensionality Reduction Model",
"sec_num": "3"
},
{
"text": "More generally, it suffices that the word vectors have certain properties consistent with this sampling process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Optimal in an information-theoretic sense: they preserve the maximal mutual information between any pair of random variables with the observed coocurrence statistics, without regard to the underlying joint distribution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A latent variable model approach to PMI-based word embeddings. Transactions of the Association for",
"authors": [
{
"first": "Sanjeev",
"middle": [],
"last": "Arora",
"suffix": ""
},
{
"first": "Yuanzhi",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yingyu",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Tengyu",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Andrej",
"middle": [],
"last": "Risteski",
"suffix": ""
}
],
"year": 2016,
"venue": "Computational Linguistics",
"volume": "4",
"issue": "",
"pages": "385--399",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sanjeev Arora, Yuanzhi Li, Yingyu Liang, Tengyu Ma, and Andrej Risteski. 2016. A latent variable model approach to PMI-based word embeddings. Transac- tions of the Association for Computational Linguis- tics 4:385-399.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Continuous distributed representation of biological sequences for deep proteomics and genomics",
"authors": [
{
"first": "Ehsaneddin",
"middle": [],
"last": "Asgari",
"suffix": ""
},
{
"first": "R",
"middle": [
"K"
],
"last": "Mohammad",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mofrad",
"suffix": ""
}
],
"year": 2015,
"venue": "PloS One",
"volume": "10",
"issue": "11",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ehsaneddin Asgari and Mohammad R.K. Mofrad. 2015. Continuous distributed representation of bi- ological sequences for deep proteomics and ge- nomics. PloS One 10(11).",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A Neural Probabilistic Language Model",
"authors": [
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Rejean",
"middle": [],
"last": "Ducharme",
"suffix": ""
},
{
"first": "Pascal",
"middle": [],
"last": "Vincent",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Jauvin",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal Of Machine Learning Research",
"volume": "3",
"issue": "",
"pages": "1137--1155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoshua Bengio, Rejean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A Neural Probabilistic Lan- guage Model. Journal Of Machine Learning Re- search 3:1137-1155.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Word Association Norms, Mutual Information, and Lexicography",
"authors": [
{
"first": "Kenneth",
"middle": [
"Ward"
],
"last": "Church",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Hanks",
"suffix": ""
}
],
"year": 1990,
"venue": "Computational Linguistics",
"volume": "16",
"issue": "1",
"pages": "22--29",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenneth Ward Church and Patrick Hanks. 1990. Word Association Norms, Mutual Information, and Lexi- cography. Computational Linguistics 16(1):22-29.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A synopsis of linguistic theory 1930-1955",
"authors": [
{
"first": "J",
"middle": [
"R"
],
"last": "Firth",
"suffix": ""
}
],
"year": 1957,
"venue": "Studies in Linguistic Analysis",
"volume": "",
"issue": "",
"pages": "1--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J.R. Firth. 1957. A synopsis of linguistic theory 1930- 1955. Studies in Linguistic Analysis pages 1-32.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Sufficient Dimensionality Reduction",
"authors": [
{
"first": "Amir",
"middle": [],
"last": "Globerson",
"suffix": ""
},
{
"first": "Naftali",
"middle": [],
"last": "Tishby",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of Machine Learning Research",
"volume": "3",
"issue": "",
"pages": "1307--1331",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amir Globerson and Naftali Tishby. 2003. Sufficient Dimensionality Reduction. Journal of Machine Learning Research 3:1307-1331.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "node2vec: Scalable Feature Learning for Networks",
"authors": [
{
"first": "Aditya",
"middle": [],
"last": "Grover",
"suffix": ""
},
{
"first": "Jure",
"middle": [],
"last": "Leskovec",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining",
"volume": "",
"issue": "",
"pages": "855--864",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aditya Grover and Jure Leskovec. 2016. node2vec: Scalable Feature Learning for Networks. In Pro- ceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Min- ing. pages 855-864.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Linguistic Regularities in Sparse and Explicit Word Representations",
"authors": [
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Eighteenth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "171--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omer Levy and Yoav Goldberg. 2014. Linguistic Reg- ularities in Sparse and Explicit Word Representa- tions. In Proceedings of the Eighteenth Confer- ence on Computational Natural Language Learning. pages 171-180.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Efficient Estimation of Word Representations in Vector Space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient Estimation of Word Repre- sentations in Vector Space. In International Confer- ence on Learning Representations.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Distributed Representations of Words and Phrases and their Compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Gregory",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013b. Distributed Rep- resentations of Words and Phrases and their Com- positionality. In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems. pages 3111-3119.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Linguistic regularities in continuous space word representations",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Yih",
"middle": [],
"last": "Wen-Tau",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Zweig",
"suffix": ""
}
],
"year": 2013,
"venue": "Human Language Technologies: Conference of the North American Chapter of the Association of Computational Linguistics, Proceedings",
"volume": "",
"issue": "",
"pages": "746--751",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013c. Linguistic regularities in continuous space word representations. In Human Language Tech- nologies: Conference of the North American Chap- ter of the Association of Computational Linguistics, Proceedings. pages 746-751.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Three New Graphical Models for Statistical Language Modelling",
"authors": [
{
"first": "Andriy",
"middle": [],
"last": "Mnih",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 24th International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "641--648",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andriy Mnih and Geoffrey Hinton. 2007. Three New Graphical Models for Statistical Language Mod- elling. In Proceedings of the 24th International Conference on Machine Learning. ACM, pages 641-648.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "When the Whole is Less than the Sum of Its Parts: How Composition Affects PMI Values in Distributional Semantic Vectors",
"authors": [
{
"first": "Denis",
"middle": [],
"last": "Paperno",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2016,
"venue": "Computational Linguistics",
"volume": "42",
"issue": "",
"pages": "345--350",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Denis Paperno and Marco Baroni. 2016. When the Whole is Less than the Sum of Its Parts: How Com- position Affects PMI Values in Distributional Se- mantic Vectors. Computational Linguistics 42:345- 350.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "GloVe: Global Vectors for Word Representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christo- pher D. Manning. 2014. GloVe: Global Vectors for Word Representation. In Empirical Methods in Nat- ural Language Processing (EMNLP). pages 1532- 1543.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Zipf's word frequency law in natural language: A critical review and future directions",
"authors": [
{
"first": "T",
"middle": [],
"last": "Steven",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Piantadosi",
"suffix": ""
}
],
"year": 2014,
"venue": "Psychonomic Bulletin & Review",
"volume": "21",
"issue": "5",
"pages": "1112--1130",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven T. Piantadosi. 2014. Zipf's word frequency law in natural language: A critical review and fu- ture directions. Psychonomic Bulletin & Review 21(5):1112-1130.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "An improved model of semantic similarity based on lexical co-occurence",
"authors": [
{
"first": "L",
"middle": [
"T"
],
"last": "Douglas",
"suffix": ""
},
{
"first": "Laura",
"middle": [
"M"
],
"last": "Rohde",
"suffix": ""
},
{
"first": "David",
"middle": [
"C"
],
"last": "Gonnerman",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Plaut",
"suffix": ""
}
],
"year": 2006,
"venue": "Communications of the ACM",
"volume": "8",
"issue": "",
"pages": "627--633",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Douglas L. T. Rohde, Laura M. Gonnerman, and David C. Plaut. 2006. An improved model of semantic similarity based on lexical co-occurence. Communications of the ACM 8:627-633.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "KL (p(\u2022|c) q(\u2022|c)) the KL-divergence satisfies the chain rule:D KL (p(w, c) q(w, c)) = D KL (p(c) q(c)) + D KL (p(w|c) q(w|c)).",
"type_str": "figure",
"num": null
},
"FIGREF1": {
"uris": null,
"text": ",\u1e7c (c) = w q\u0168 ,\u1e7c (w, c) w,t q\u0168 ,\u1e7c (w, t) e UV T 1) e \u03b1 .Here, the notation x y denotes entry-wise multiplication of vectors.Since\u03b1 c = log(g c ) \u2212 log e UV T 1 c , we have q\u0168 ,\u1e7c (c) = (e UV T 1) e \u03b1 c 1 T (e UV T 1) e \u03b1 = g c t g t = g c .",
"type_str": "figure",
"num": null
}
}
}
}