n_chapter | chapter | n_section | section | n_subsection | subsection | text |
---|---|---|---|---|---|---|
6 | Vector Semantics and Embeddings | 6.6 | Pointwise Mutual Information (PMI) | nan | nan | Thus for example we could compute PPMI(w=information, c=data), assuming we pretended that Fig. 6.6 encompassed all the relevant word contexts/dimensions. [Figure 6.11: Replacing the counts in Fig. 6.6 with joint probabilities, showing the marginals around the outside.] |
6 | Vector Semantics and Embeddings | 6.6 | Pointwise Mutual Information (PMI) | nan | nan | PMI has the problem of being biased toward infrequent events; very rare words tend to have very high PMI values. [Figure 6.12: The PPMI matrix showing the association between words and context words, computed from the counts in Fig. 6.11. Note that most of the 0 PPMI values are ones that had a negative PMI; for example PMI(cherry, computer) = -6.7, meaning that cherry and computer co-occur on Wikipedia less often than we would expect by chance, and with PPMI we replace negative values by zero.] One way to reduce this bias toward low-frequency events is to slightly change the computation for P(c), using a different function $P_\alpha(c)$ that raises the probability of the context word to the power of α: |
6 | Vector Semantics and Embeddings | 6.6 | Pointwise Mutual Information (PMI) | nan | nan | $\text{PPMI}_\alpha(w,c) = \max\left(\log_2 \frac{P(w,c)}{P(w)\,P_\alpha(c)},\ 0\right)$ (6.21)   $P_\alpha(c) = \frac{\text{count}(c)^\alpha}{\sum_{c'} \text{count}(c')^\alpha}$ (6.22) |
6 | Vector Semantics and Embeddings | 6.6 | Pointwise Mutual Information (PMI) | nan | nan | Levy et al. (2015) found that a setting of α = 0.75 improved performance of embeddings on a wide range of tasks (drawing on a similar weighting used for skip-grams described below in Eq. 6.32). This works because raising the count to the power α = 0.75 increases the probability assigned to rare contexts, and hence lowers their PMI ($P_\alpha(c) > P(c)$ when c is rare). |
6 | Vector Semantics and Embeddings | 6.6 | Pointwise Mutual Information (PMI) | nan | nan | Another possible solution is Laplace smoothing: Before computing PMI, a small constant k (values of 0.1-3 are common) is added to each of the counts, shrinking (discounting) all the non-zero values. The larger the k, the more the non-zero counts are discounted. |
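The following is a minimal numpy sketch of Eqs. 6.21–6.22, with the add-k (Laplace) smoothing just described as an optional parameter. The function name, variable names, and the toy count matrix are illustrative, not from the text.

```python
import numpy as np

def ppmi_matrix(counts, alpha=0.75, k=0.0):
    """PPMI from a word-by-context count matrix.

    counts : 2-D array, counts[i, j] = count of word i with context word j
    alpha  : exponent for the smoothed context probability P_alpha(c) (Eq. 6.22)
    k      : optional add-k (Laplace) constant added to every count
    """
    C = counts.astype(float) + k
    p_wc = C / C.sum()                       # joint probabilities P(w, c)
    p_w = p_wc.sum(axis=1, keepdims=True)    # marginal P(w)

    # smoothed context probability P_alpha(c) (Eq. 6.22)
    col = C.sum(axis=0) ** alpha
    p_c_alpha = col / col.sum()

    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log2(p_wc / (p_w * p_c_alpha))
    pmi[~np.isfinite(pmi)] = 0.0             # zero-count cells get PMI 0
    return np.maximum(pmi, 0.0)              # PPMI: clip negatives to 0 (Eq. 6.21)

# toy example: 3 words x 4 context words (illustrative counts)
counts = np.array([[2, 8, 0, 1],
                   [1, 1, 6, 4],
                   [0, 2, 3, 9]])
print(ppmi_matrix(counts).round(2))
```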
6 | Vector Semantics and Embeddings | 6.7 | Applications of the TF-IDF or PPMI Vector Models | nan | nan | In summary, the vector semantics model we've described so far represents a target word as a vector with dimensions corresponding either to the documents in a large collection (the term-document matrix) or to the counts of words in some neighboring window (the term-term matrix). The values in each dimension are counts, weighted by tf-idf (for term-document matrices) or PPMI (for term-term matrices), and the vectors are sparse (since most values are zero). |
6 | Vector Semantics and Embeddings | 6.7 | Applications of the TF-IDF or PPMI Vector Models | nan | nan | The model computes the similarity between two words x and y by taking the cosine of their tf-idf or PPMI vectors; high cosine, high similarity. This entire model is sometimes referred to as the tf-idf model or the PPMI model, after the weighting function. |
6 | Vector Semantics and Embeddings | 6.7 | Applications of the TF-IDF or PPMI Vector Models | nan | nan | The tf-idf model of meaning is often used for document functions like deciding if two documents are similar. We represent a document by taking the vectors of all the words in the document, and computing the centroid of all those vectors. |
6 | Vector Semantics and Embeddings | 6.7 | Applications of the TF-IDF or PPMI Vector Models | nan | nan | The centroid is the multidimensional version of the mean; the centroid of a set of vectors is a single vector that has the minimum sum of squared distances to each of the vectors in the set. Given k word vectors w 1 , w 2 , ..., w k , the centroid document vector d is: |
6 | Vector Semantics and Embeddings | 6.7 | Applications of the TF-IDF or PPMI Vector Models | nan | nan | $d = \frac{w_1 + w_2 + \cdots + w_k}{k}$ (6.23) |
6 | Vector Semantics and Embeddings | 6.7 | Applications of the TF-IDF or PPMI Vector Models | nan | nan | Given two documents, we can then compute their document vectors d_1 and d_2, and estimate the similarity between the two documents by cos(d_1, d_2). Document similarity is useful for all sorts of applications: information retrieval, plagiarism detection, news recommender systems, and even digital humanities tasks like comparing different versions of a text to see which are similar to each other. Either the PPMI model or the tf-idf model can be used to compute word similarity, for tasks like finding word paraphrases, tracking changes in word meaning, or automatically discovering meanings of words in different corpora. For example, we can find the 10 most similar words to any target word w by computing the cosines between w and each of the V − 1 other words, sorting, and looking at the top 10. |
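A small sketch of Eq. 6.23 and document cosine follows, assuming each word already has a tf-idf or PPMI vector stored in a dictionary; the toy vectors and names are illustrative.

```python
import numpy as np

def document_vector(words, word_vectors):
    """Centroid of the vectors of the words in a document (Eq. 6.23)."""
    vecs = [word_vectors[w] for w in words if w in word_vectors]
    return np.mean(vecs, axis=0)

def cosine(a, b):
    """Cosine similarity: normalized dot product."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# toy 3-dimensional tf-idf-style vectors (illustrative values)
word_vectors = {
    "cherry":  np.array([1.0, 0.2, 0.0]),
    "digital": np.array([0.0, 1.5, 2.0]),
    "pie":     np.array([0.8, 0.1, 0.0]),
}
d1 = document_vector(["cherry", "pie"], word_vectors)
d2 = document_vector(["digital", "pie"], word_vectors)
print(cosine(d1, d2))
```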
6 | Vector Semantics and Embeddings | 6.8 | Word2Vec | nan | nan | In the previous sections we saw how to represent a word as a sparse, long vector with dimensions corresponding to words in the vocabulary or documents in a collection. We now introduce a more powerful word representation: embeddings, short dense vectors. Unlike the vectors we've seen so far, embeddings are short, with number of dimensions d ranging from 50-1000, rather than the much larger vocabulary size |V | or number of documents D we've seen. These d dimensions don't have a clear interpretation. And the vectors are dense: instead of vector entries being sparse, mostly-zero counts or functions of counts, the values will be real-valued numbers that can be negative. |
6 | Vector Semantics and Embeddings | 6.8 | Word2Vec | nan | nan | It turns out that dense vectors work better in every NLP task than sparse vectors. While we don't completely understand all the reasons for this, we have some intuitions. Representing words as 300-dimensional dense vectors requires our classifiers to learn far fewer weights than if we represented words as 50,000-dimensional vectors, and the smaller parameter space possibly helps with generalization and avoiding overfitting. Dense vectors may also do a better job of capturing synonymy. For example, in a sparse vector representation, the dimensions for synonyms like car and automobile are distinct and unrelated; sparse vectors may thus fail to capture the similarity between a word with car as a neighbor and a word with automobile as a neighbor. |
6 | Vector Semantics and Embeddings | 6.8 | Word2Vec | nan | nan | In this section we introduce one method for computing embeddings: skip-gram with negative sampling, one of the two algorithms in the word2vec package. In Chapter 11 we'll introduce methods for learning dynamic contextual embeddings like the popular family of BERT representations, in which the vector for each word is different in different contexts. |
6 | Vector Semantics and Embeddings | 6.8 | Word2Vec | nan | nan | The intuition of word2vec is that instead of counting how often each word w occurs near, say, apricot, we'll instead train a classifier on a binary prediction task: "Is word w likely to show up near apricot?" We don't actually care about this prediction task; instead we'll take the learned classifier weights as the word embeddings. |
6 | Vector Semantics and Embeddings | 6.8 | Word2Vec | nan | nan | The revolutionary intuition here is that we can just use running text as implicitly supervised training data for such a classifier; a word c that occurs near the target word apricot acts as the gold 'correct answer' to the question "Is word c likely to show up near apricot?" This method, often called self-supervision, avoids the need for any sort of hand-labeled supervision signal. This idea was first proposed in the task of neural language modeling, when Bengio et al. (2003) and Collobert et al. (2011) showed that a neural language model (a neural network that learned to predict the next word from prior words) could just use the next word in running text as its supervision signal, and could be used to learn an embedding representation for each word as part of doing this prediction task. |
6 | Vector Semantics and Embeddings | 6.8 | Word2Vec | nan | nan | We'll see how to do neural networks in the next chapter, but word2vec is a much simpler model than the neural network language model, in two ways. First, word2vec simplifies the task (making it binary classification instead of word prediction). Second, word2vec simplifies the architecture (training a logistic regression classifier instead of a multi-layer neural network with hidden layers that demand more sophisticated training algorithms). The intuition of skip-gram is: |
6 | Vector Semantics and Embeddings | 6.8 | Word2Vec | nan | nan | 1. Treat the target word and a neighboring context word as positive examples. 2. Randomly sample other words in the lexicon to get negative samples. 3. Use logistic regression to train a classifier to distinguish those two cases. 4. Use the learned weights as the embeddings. |
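A minimal sketch of steps 1–2 of this recipe is shown below: it builds positive (target, context) pairs from a ±window context and draws k noise words per positive pair from the α-weighted unigram distribution defined later in Eq. 6.32. The function name, toy sentence, and parameter defaults are illustrative.

```python
import random
import numpy as np

def training_pairs(tokens, window=2, k=2, alpha=0.75, seed=0):
    """Generate (target, context, label) triples for skip-gram with negative sampling."""
    rng = random.Random(seed)
    vocab = sorted(set(tokens))
    counts = np.array([tokens.count(w) for w in vocab], dtype=float)
    p_alpha = counts ** alpha
    p_alpha /= p_alpha.sum()                 # weighted unigram distribution (Eq. 6.32)

    pairs = []
    for i, w in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j == i:
                continue
            pairs.append((w, tokens[j], 1))  # positive example from the context window
            for _ in range(k):               # k noise words per positive example
                noise = rng.choices(vocab, weights=p_alpha, k=1)[0]
                while noise == w:            # noise word may not be the target itself
                    noise = rng.choices(vocab, weights=p_alpha, k=1)[0]
                pairs.append((w, noise, 0))
    return pairs

toks = "lemon a tablespoon of apricot jam a pinch".split()
for p in training_pairs(toks, window=2, k=2)[:6]:
    print(p)
```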
6 | Vector Semantics and Embeddings | 6.8 | Word2Vec | 6.8.1 | The classifier | Let's start by thinking about the classification task, and then turn to how to train. Imagine a sentence like the following, with a target word apricot, and assume we're using a window of ±2 context words: |
6 | Vector Semantics and Embeddings | 6.8 | Word2Vec | 6.8.1 | The classifier | ... lemon, a [tablespoon of apricot jam, a] pinch ... (context words c1 = tablespoon, c2 = of, target w = apricot, c3 = jam, c4 = a) |
6 | Vector Semantics and Embeddings | 6.8 | Word2Vec | 6.8.1 | The classifier | Our goal is to train a classifier such that, given a tuple (w, c) of a target word w paired with a candidate context word c (for example (apricot, jam), or perhaps (apricot, aardvark)) it will return the probability that c is a real context word (true for jam, false for aardvark): |
6 | Vector Semantics and Embeddings | 6.8 | Word2Vec | 6.8.1 | The classifier | P(+|w, c) (6.24) |
6 | Vector Semantics and Embeddings | 6.8 | Word2Vec | 6.8.1 | The classifier | The probability that word c is not a real context word for w is just 1 minus Eq. 6.24: |
6 | Vector Semantics and Embeddings | 6.8 | Word2Vec | 6.8.1 | The classifier | P(−|w, c) = 1 − P(+|w, c) (6.25) |
6 | Vector Semantics and Embeddings | 6.8 | Word2Vec | 6.8.1 | The classifier | How does the classifier compute the probability P? The intuition of the skipgram model is to base this probability on embedding similarity: a word is likely to occur near the target if its embedding vector is similar to the target embedding. To compute similarity between these dense embeddings, we rely on the intuition that two vectors are similar if they have a high dot product (after all, cosine is just a normalized dot product). In other words: |
6 | Vector Semantics and Embeddings | 6.8 | Word2Vec | 6.8.1 | The classifier | Similarity(w, c) ≈ c • w (6.26) |
6 | Vector Semantics and Embeddings | 6.8 | Word2Vec | 6.8.1 | The classifier | The dot product c • w is not a probability, it's just a number ranging from −∞ to ∞ (since the elements in word2vec embeddings can be negative, the dot product can be negative). To turn the dot product into a probability, we'll use the logistic or sigmoid function σ (x), the fundamental core of logistic regression: |
6 | Vector Semantics and Embeddings | 6.8 | Word2Vec | 6.8.1 | The classifier | $\sigma(x) = \frac{1}{1 + \exp(-x)}$ (6.27) |
6 | Vector Semantics and Embeddings | 6.8 | Word2Vec | 6.8.1 | The classifier | We model the probability that word c is a real context word for target word w as: |
6 | Vector Semantics and Embeddings | 6.8 | Word2Vec | 6.8.1 | The classifier | $P(+|w,c) = \sigma(c \cdot w) = \frac{1}{1 + \exp(-c \cdot w)}$ (6.28) |
6 | Vector Semantics and Embeddings | 6.8 | Word2Vec | 6.8.1 | The classifier | The sigmoid function returns a number between 0 and 1, but to make it a probability we'll also need the total probability of the two possible events (c is a context word, and c isn't a context word) to sum to 1. We thus estimate the probability that word c is not a real context word for w as: |
6 | Vector Semantics and Embeddings | 6.8 | Word2Vec | 6.8.1 | The classifier | $P(-|w,c) = 1 - P(+|w,c) = \sigma(-c \cdot w) = \frac{1}{1 + \exp(c \cdot w)}$ (6.29) |
6 | Vector Semantics and Embeddings | 6.8 | Word2Vec | 6.8.1 | The classifier | Equation 6.28 gives us the probability for one word, but there are many context words in the window. Skip-gram makes the simplifying assumption that all context words are independent, allowing us to just multiply their probabilities: |
6 | Vector Semantics and Embeddings | 6.8 | Word2Vec | 6.8.1 | The classifier | $P(+|w,c_{1:L}) = \prod_{i=1}^{L} \sigma(c_i \cdot w)$ (6.30)   $\log P(+|w,c_{1:L}) = \sum_{i=1}^{L} \log \sigma(c_i \cdot w)$ (6.31) |
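Below is a small numpy sketch of Eqs. 6.27–6.31: the sigmoid of a dot product for one context word, and the product (or log-sum) over a whole window. The toy embedding values are made up for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))                    # Eq. 6.27

def p_positive(w, contexts):
    """P(+ | w, c_1:L): product of sigmoids of dot products (Eq. 6.30)."""
    return float(np.prod([sigmoid(np.dot(c, w)) for c in contexts]))

def log_p_positive(w, contexts):
    """log P(+ | w, c_1:L): sum of log sigmoids (Eq. 6.31)."""
    return float(np.sum([np.log(sigmoid(np.dot(c, w))) for c in contexts]))

# toy 4-dimensional embeddings (illustrative values)
w = np.array([0.5, -0.1, 0.3, 0.8])                    # target, e.g. apricot
contexts = [np.array([0.4, 0.0, 0.2, 0.9]),            # e.g. jam
            np.array([-0.2, 0.1, 0.0, 0.3])]           # e.g. of
print(p_positive(w, contexts), log_p_positive(w, contexts))
```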
6 | Vector Semantics and Embeddings | 6.8 | Word2Vec | 6.8.1 | The classifier | In summary, skip-gram trains a probabilistic classifier that, given a test target word w and its context window of L words c_{1:L}, assigns a probability based on how similar this context window is to the target word. The probability is based on applying the logistic (sigmoid) function to the dot product of the embeddings of the target word with each context word. To compute this probability, we just need embeddings for each target word and context word in the vocabulary. Fig. 6.13 shows the intuition of the parameters we'll need. Skip-gram actually stores two embeddings for each word, one for the word as a target, and one for the word considered as context. Thus the parameters we need to learn are two matrices W and C, each containing an embedding for every one of the |V| words in the vocabulary V. [Figure 6.13: The embeddings learned by the skip-gram model. The algorithm stores two embeddings for each word, the target embedding (sometimes called the input embedding) and the context embedding (sometimes called the output embedding). The parameter θ that the algorithm learns is thus a matrix of 2|V| vectors, each of dimension d, formed by concatenating two matrices, the target embeddings W and the context+noise embeddings C.] Let's now turn to learning these embeddings (which is the real goal of training this classifier in the first place). |
6 | Vector Semantics and Embeddings | 6.8 | Word2Vec | 6.8.2 | Learning skip-gram embeddings | The learning algorithm for skip-gram embeddings takes as input a corpus of text, and a chosen vocabulary size N. It begins by assigning a random embedding vector for each of the N vocabulary words, and then proceeds to iteratively shift the embedding of each word w to be more like the embeddings of words that occur nearby in texts, and less like the embeddings of words that don't occur nearby. Let's start by considering a single piece of training data: |
6 | Vector Semantics and Embeddings | 6.8 | Word2Vec | 6.8.2 | Learning skip-gram embeddings | ... lemon, a [tablespoon of apricot jam, a] pinch ... (context words c1 = tablespoon, c2 = of, target w = apricot, c3 = jam, c4 = a) |
6 | Vector Semantics and Embeddings | 6.8 | Word2Vec | 6.8.2 | Learning skip-gram embeddings | This example has a target word w (apricot), and 4 context words in the L = ±2 window, resulting in 4 positive training instances (shown below). For training a binary classifier we also need negative examples. In fact skip-gram with negative sampling (SGNS) uses more negative examples than positive examples (with the ratio between them set by a parameter k). So for each of these (w, c_pos) training instances we'll create k negative samples, each consisting of the target w plus a 'noise word' c_neg. A noise word is a random word from the lexicon, constrained not to be the target word w. With k = 2, we'll have 2 negative examples in the negative training set for each positive example (w, c_pos). |
6 | Vector Semantics and Embeddings | 6.8 | Word2Vec | 6.8.2 | Learning skip-gram embeddings | positive examples (w, c_pos): (apricot, tablespoon), (apricot, of), (apricot, jam), (apricot, a) |
6 | Vector Semantics and Embeddings | 6.8 | Word2Vec | 6.8.2 | Learning skip-gram embeddings | The noise words are chosen according to their weighted unigram frequency $p_\alpha(w)$, where α is a weight. If we were sampling according to unweighted frequency p(w), it would mean that with unigram probability p("the") we would choose the word the as a noise word, with unigram probability p("aardvark") we would choose aardvark, and so on. But in practice it is common to set α = .75, i.e. use the weighting $p^{3/4}(w)$: |
6 | Vector Semantics and Embeddings | 6.8 | Word2Vec | 6.8.2 | Learning skip-gram embeddings | $P_\alpha(w) = \frac{\text{count}(w)^\alpha}{\sum_{w'} \text{count}(w')^\alpha}$ (6.32) |
6 | Vector Semantics and Embeddings | 6.8 | Word2Vec | 6.8.2 | Learning skip-gram embeddings | Setting α = .75 gives better performance because it gives rare noise words slightly higher probability: for rare words, $P_\alpha(w) > P(w)$. To illustrate this intuition, consider two events with P(a) = .99 and P(b) = .01: with α = .75 we get $P_\alpha(a) = .99^{.75}/(.99^{.75} + .01^{.75}) \approx .97$ and $P_\alpha(b) = .01^{.75}/(.99^{.75} + .01^{.75}) \approx .03$, so the rare event b receives a higher probability than its unweighted .01. |
6 | Vector Semantics and Embeddings | 6.8 | Word2Vec | 6.8.2 | Learning skip-gram embeddings | The goal of learning is to adjust the embeddings so as to maximize the similarity of the target word with the context words drawn from the positive examples, and to minimize the similarity of the target word with the noise words. If we consider one word/context pair (w, c_pos) with its k noise words c_neg_1 ... c_neg_k, we can express these two goals as the following loss function L to be minimized (hence the −); here the first term expresses that we want the classifier to assign the real context word c_pos a high probability of being a neighbor, and the second term expresses that we want to assign each of the noise words c_neg_i a high probability of being a non-neighbor, all multiplied because we assume independence: |
6 | Vector Semantics and Embeddings | 6.8 | Word2Vec | 6.8.2 | Learning skip-gram embeddings | $L_{CE} = -\log\left[P(+|w,c_{pos}) \prod_{i=1}^{k} P(-|w,c_{neg_i})\right] = -\left[\log P(+|w,c_{pos}) + \sum_{i=1}^{k} \log P(-|w,c_{neg_i})\right] = -\left[\log P(+|w,c_{pos}) + \sum_{i=1}^{k} \log\left(1 - P(+|w,c_{neg_i})\right)\right] = -\left[\log \sigma(c_{pos} \cdot w) + \sum_{i=1}^{k} \log \sigma(-c_{neg_i} \cdot w)\right]$ (6.34) |
6 | Vector Semantics and Embeddings | 6.8 | Word2Vec | 6.8.2 | Learning skip-gram embeddings | That is, we want to maximize the dot product of the word with the actual context words, and minimize the dot products of the word with the k negative sampled nonneighbor words. |
6 | Vector Semantics and Embeddings | 6.8 | Word2Vec | 6.8.2 | Learning skip-gram embeddings | We minimize this loss function using stochastic gradient descent. Fig. 6.14 shows the intuition of one step of learning. |
6 | Vector Semantics and Embeddings | 6.8 | Word2Vec | 6.8.2 | Learning skip-gram embeddings | To get the gradient, we need to take the derivative of Eq. 6.34 with respect to the different embeddings. It turns out the derivatives are the following (we leave the proof as an exercise at the end of the chapter): |
6 | Vector Semantics and Embeddings | 6.8 | Word2Vec | 6.8.2 | Learning skip-gram embeddings | $\frac{\partial L_{CE}}{\partial c_{pos}} = [\sigma(c_{pos} \cdot w) - 1]\,w$ (6.35)   $\frac{\partial L_{CE}}{\partial c_{neg}} = [\sigma(c_{neg} \cdot w)]\,w$ (6.36)   $\frac{\partial L_{CE}}{\partial w} = [\sigma(c_{pos} \cdot w) - 1]\,c_{pos} + \sum_{i=1}^{k} [\sigma(c_{neg_i} \cdot w)]\,c_{neg_i}$ (6.37) |
6 | Vector Semantics and Embeddings | 6.8 | Word2Vec | 6.8.2 | Learning skip-gram embeddings | [Figure 6.14: Intuition of one step of gradient descent. The skip-gram model tries to shift embeddings so the target embeddings (here for apricot) are closer to (have a higher dot product with) context embeddings for nearby words (here jam) and further from (lower dot product with) context embeddings for noise words that don't occur nearby (here Tolstoy and matrix).] The update equations going from time step t to t + 1 in stochastic gradient descent |
6 | Vector Semantics and Embeddings | 6.8 | Word2Vec | 6.8.2 | Learning skip-gram embeddings | are thus: |
6 | Vector Semantics and Embeddings | 6.8 | Word2Vec | 6.8.2 | Learning skip-gram embeddings | $c_{pos}^{t+1} = c_{pos}^{t} - \eta[\sigma(c_{pos}^{t} \cdot w^{t}) - 1]\,w^{t}$ (6.38)   $c_{neg}^{t+1} = c_{neg}^{t} - \eta[\sigma(c_{neg}^{t} \cdot w^{t})]\,w^{t}$ (6.39)   $w^{t+1} = w^{t} - \eta\left([\sigma(c_{pos} \cdot w^{t}) - 1]\,c_{pos} + \sum_{i=1}^{k}[\sigma(c_{neg_i} \cdot w^{t})]\,c_{neg_i}\right)$ (6.40) |
6 | Vector Semantics and Embeddings | 6.8 | Word2Vec | 6.8.2 | Learning skip-gram embeddings | Just as in logistic regression, then, the learning algorithm starts with randomly initialized W and C matrices, and then walks through the training corpus using gradient descent to move W and C so as to minimize the loss in Eq. 6.34 by making the updates in Eq. 6.38–Eq. 6.40. |
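The following is a minimal numpy sketch of one such SGD step for a single (target, positive context, noise words) instance, implementing the loss of Eq. 6.34 and the updates of Eqs. 6.38–6.40. The function name, learning rate, and random toy embeddings are illustrative assumptions, not the book's implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgns_step(w, c_pos, c_negs, eta=0.1):
    """One SGD step for skip-gram with negative sampling on one training instance."""
    # loss for this instance (Eq. 6.34), returned only for monitoring
    loss = -(np.log(sigmoid(c_pos @ w)) +
             sum(np.log(sigmoid(-c @ w)) for c in c_negs))

    grad_w = (sigmoid(c_pos @ w) - 1) * c_pos + \
             sum(sigmoid(c @ w) * c for c in c_negs)              # Eq. 6.37
    new_c_pos = c_pos - eta * (sigmoid(c_pos @ w) - 1) * w        # Eq. 6.38
    new_c_negs = [c - eta * sigmoid(c @ w) * w for c in c_negs]   # Eq. 6.39
    new_w = w - eta * grad_w                                      # Eq. 6.40
    return new_w, new_c_pos, new_c_negs, loss

rng = np.random.default_rng(0)
w, c_pos = rng.normal(size=4), rng.normal(size=4)
c_negs = [rng.normal(size=4) for _ in range(2)]                   # k = 2 noise words
for _ in range(5):
    w, c_pos, c_negs, loss = sgns_step(w, c_pos, c_negs)
    print(round(float(loss), 3))                                  # loss should shrink
```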
6 | Vector Semantics and Embeddings | 6.8 | Word2Vec | 6.8.2 | Learning skip-gram embeddings | Recall that the skip-gram model learns two separate embeddings for each word i: the target embedding w_i and the context embedding c_i, stored in two matrices, the target matrix W and the context matrix C. It's common to just add them together, representing word i with the vector w_i + c_i. Alternatively we can throw away the C matrix and just represent each word i by the vector w_i. |
6 | Vector Semantics and Embeddings | 6.8 | Word2Vec | 6.8.2 | Learning skip-gram embeddings | As with the simple count-based methods like tf-idf, the context window size L affects the performance of skip-gram embeddings, and experiments often tune the parameter L on a devset. |
6 | Vector Semantics and Embeddings | 6.8 | Word2Vec | 6.8.3 | Other kinds of static embeddings | There are many kinds of static embeddings. An extension of word2vec, fasttext (Bojanowski et al., 2017), addresses a problem with word2vec as we have presented it so far: it has no good way to deal with unknown words (words that appear in a test corpus but were unseen in the training corpus). A related problem is word sparsity, such as in languages with rich morphology, where some of the many forms for each noun and verb may only occur rarely. Fasttext deals with these problems by using subword models, representing each word as itself plus a bag of constituent n-grams, with special boundary symbols < and > added to each word. For example, with n = 3 the word where would be represented by the sequence <where> plus the character n-grams: <wh, whe, her, ere, re> |
6 | Vector Semantics and Embeddings | 6.8 | Word2Vec | 6.8.3 | Other kinds of static embeddings | Then a skipgram embedding is learned for each constituent n-gram, and the word where is represented by the sum of all of the embeddings of its constituent n-grams. Unknown words can then be represented only by the sum of the constituent n-grams. A fasttext open-source library, including pretrained embeddings for 157 languages, is available at https://fasttext.cc. |
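A short sketch of this subword scheme is shown below: extracting the boundary-marked character n-grams of a word and summing whatever n-gram embeddings are available. The embedding table `ngram_vectors` is a hypothetical stand-in for learned skip-gram vectors, not the fasttext library itself.

```python
import numpy as np

def char_ngrams(word, n=3):
    """The word itself (with boundary symbols) plus its character n-grams."""
    bounded = f"<{word}>"
    grams = [bounded[i:i + n] for i in range(len(bounded) - n + 1)]
    return [bounded] + grams

print(char_ngrams("where"))
# ['<where>', '<wh', 'whe', 'her', 'ere', 're>']

def word_vector(word, ngram_vectors, dim=100, n=3):
    """Sum the embeddings of a word's constituent n-grams, skipping unseen ones."""
    vec = np.zeros(dim)
    for g in char_ngrams(word, n):
        if g in ngram_vectors:        # unknown words use whatever n-grams are known
            vec += ngram_vectors[g]
    return vec
```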
6 | Vector Semantics and Embeddings | 6.8 | Word2Vec | 6.8.3 | Other kinds of static embeddings | Another very widely used static embedding model is GloVe (Pennington et al., 2014), short for Global Vectors, because the model is based on capturing global corpus statistics. GloVe is based on ratios of probabilities from the word-word cooccurrence matrix, combining the intuitions of count-based models like PPMI while also capturing the linear structures used by methods like word2vec. |
6 | Vector Semantics and Embeddings | 6.8 | Word2Vec | 6.8.3 | Other kinds of static embeddings | It turns out that dense embeddings like word2vec actually have an elegant mathematical relationship with sparse embeddings like PPMI, in which word2vec can be seen as implicitly optimizing a shifted version of a PPMI matrix (Levy and Goldberg, 2014c). |
6 | Vector Semantics and Embeddings | 6.9 | Visualizing Embeddings | nan | nan | "I see well in many dimensions as long as the dimensions are around two." |
6 | Vector Semantics and Embeddings | 6.9 | Visualizing Embeddings | nan | nan | (The late economist Martin Shubik.) Visualizing embeddings is an important goal in helping understand, apply, and improve these models of word meaning. But how can we visualize a (for example) 100-dimensional vector? |
6 | Vector Semantics and Embeddings | 6.9 | Visualizing Embeddings | nan | nan | The simplest way to visualize the meaning of a word w embedded in a space is to list the most similar words to w by sorting the vectors for all words in the vocabulary by their cosine with the vector for w. For example the 7 closest words to frog using the GloVe embeddings are: frogs, toad, litoria, leptodactylidae, rana, lizard, and eleutherodactylus (Pennington et al., 2014). |
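A minimal sketch of this nearest-neighbor listing is below, sorting a vocabulary by cosine with a target word; the toy embedding table is made up for illustration (real GloVe vectors would be loaded from a file).

```python
import numpy as np

def nearest_neighbors(word, embeddings, n=7):
    """Return the n words whose vectors have the highest cosine with `word`."""
    target = embeddings[word]
    target = target / np.linalg.norm(target)
    scores = []
    for other, vec in embeddings.items():
        if other == word:
            continue
        scores.append((float(vec @ target / np.linalg.norm(vec)), other))
    return [w for _, w in sorted(scores, reverse=True)[:n]]

# toy embedding table with random 50-dimensional vectors (illustrative only)
emb = {w: np.random.default_rng(i).normal(size=50)
       for i, w in enumerate(["frog", "toad", "lizard", "piano", "cheese"])}
print(nearest_neighbors("frog", emb, n=3))
```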
6 | Vector Semantics and Embeddings | 6.9 | Visualizing Embeddings | nan | nan | Yet another visualization method is to use a clustering algorithm to show a hierarchical representation of which words are similar to others in the embedding space. The uncaptioned figure on the left uses hierarchical clustering of some embedding vectors for nouns as a visualization method (Rohde et al., 2006) . |
6 | Vector Semantics and Embeddings | 6.9 | Visualizing Embeddings | nan | nan | Probably the most common visualization method, however, is to project the 100 dimensions of a word down into 2 dimensions. Fig. 6.1 showed one such visualization, as does Fig. 6.16, using a projection method called t-SNE (van der Maaten and Hinton, 2008). |
6 | Vector Semantics and Embeddings | 6.1 | Semantic Properties of Embeddings | nan | nan | In this section we briefly summarize some of the semantic properties of embeddings that have been studied. |
6 | Vector Semantics and Embeddings | 6.1 | Semantic Properties of Embeddings | nan | nan | Different types of similarity or association: One parameter of vector semantic models that is relevant to both sparse tf-idf vectors and dense word2vec vectors is the size of the context window used to collect counts. This is generally between 1 and 10 words on each side of the target word (for a total context of 2-20 words). |
6 | Vector Semantics and Embeddings | 6.1 | Semantic Properties of Embeddings | nan | nan | The choice depends on the goals of the representation. Shorter context windows tend to lead to representations that are a bit more syntactic, since the information is coming from immediately nearby words. When the vectors are computed from short context windows, the most similar words to a target word w tend to be semantically similar words with the same parts of speech. When vectors are computed from long context windows, the highest cosine words to a target word w tend to be words that are topically related but not similar. |
6 | Vector Semantics and Embeddings | 6.1 | Semantic Properties of Embeddings | nan | nan | For example Levy and Goldberg (2014a) showed that using skip-gram with a window of ±2, the most similar words to the word Hogwarts (from the Harry Potter series) were names of other fictional schools: Sunnydale (from Buffy the Vampire Slayer) or Evernight (from a vampire series). With a window of ±5, the most similar words to Hogwarts were other words topically related to the Harry Potter series: Dumbledore, Malfoy, and half-blood. |
6 | Vector Semantics and Embeddings | 6.1 | Semantic Properties of Embeddings | nan | nan | It's also often useful to distinguish two kinds of similarity or association between words (Schütze and Pedersen, 1993): two words have first-order co-occurrence if they are typically nearby each other, while two words have second-order co-occurrence if they have similar neighbors. Embeddings can also capture relational meanings, as in the parallelogram model for solving simple analogy problems of the form a is to b as a* is to what?. In such problems, a system is given a problem like apple:tree::grape:?, i.e., apple is to tree as grape is to ___, and must fill in the word vine. In the parallelogram model, illustrated in Fig. 6.15, the vector from the word apple to the word tree (= tree − apple) is added to the vector for grape; the nearest word to that point is returned. [Figure 6.15: The parallelogram model for analogy problems (Rumelhart and Abrahamson, 1973): the location of vine can be found by subtracting apple from tree and adding grape.] |
6 | Vector Semantics and Embeddings | 6.1 | Semantic Properties of Embeddings | nan | nan | In early work with sparse embeddings, scholars showed that sparse vector models of meaning could solve such analogy problems, and the parallelogram method also works with dense embeddings: for example, the nearest word to king − man + woman is queen, and the nearest word to Paris − France + Italy is Rome. The embedding model thus seems to be extracting representations of relations like MALE-FEMALE, or CAPITAL-CITY-OF, or even COMPARATIVE/SUPERLATIVE, as shown in Fig. 6.16 from GloVe. For an a : b :: a* : b* problem, meaning the algorithm is given vectors a, b, and a* and must find b*, the parallelogram method is thus: |
6 | Vector Semantics and Embeddings | 6.1 | Semantic Properties of Embeddings | nan | nan | $b^* = \underset{x}{\operatorname{argmin}}\ \text{distance}(x,\ a^* - a + b)$ (6.41) |
6 | Vector Semantics and Embeddings | 6.1 | Semantic Properties of Embeddings | nan | nan | with some distance function, such as Euclidean distance. |
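A small sketch of Eq. 6.41 with Euclidean distance follows; the embedding dictionary `emb` is assumed to exist (e.g., the toy table from the earlier nearest-neighbor sketch), and the option to exclude the input words anticipates the caveat discussed next.

```python
import numpy as np

def analogy(a, b, a_star, embeddings, exclude_inputs=True):
    """Solve a : b :: a* : ? with the parallelogram method (Eq. 6.41)."""
    target = embeddings[a_star] - embeddings[a] + embeddings[b]
    best, best_dist = None, float("inf")
    for word, vec in embeddings.items():
        if exclude_inputs and word in (a, b, a_star):
            continue                               # skip the three input words
        dist = np.linalg.norm(vec - target)        # Euclidean distance
        if dist < best_dist:
            best, best_dist = word, dist
    return best

# usage sketch (assuming `emb` maps words to vectors): analogy("apple", "tree", "grape", emb)
```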
6 | Vector Semantics and Embeddings | 6.1 | Semantic Properties of Embeddings | nan | nan | There are some caveats. For example, the closest value returned by the parallelogram algorithm in word2vec or GloVe embedding spaces is usually not in fact b* but one of the 3 input words or their morphological variants (i.e., cherry:red :: potato:x returns potato or potatoes instead of brown), so these must be explicitly excluded. Furthermore, while embedding spaces perform well if the task involves frequent words, small distances, and certain relations (like relating countries with their capitals or verbs/nouns with their inflected forms), the parallelogram method with embeddings doesn't work as well for other relations (Linzen 2016, Gladkova et al. 2016, Schluter 2018, Ethayarajh et al. 2019a), and indeed Peterson et al. (2020) argue that the parallelogram method is in general too simple to model the human cognitive process of forming analogies of this kind. |
6 | Vector Semantics and Embeddings | 6.1 | Semantic Properties of Embeddings | 6.10.1 | Embeddings and Historical Semantics | Embeddings can also be a useful tool for studying how meaning changes over time, by computing multiple embedding spaces, each from texts written in a particular time period. For example Fig. 6.17 shows a visualization of changes in meaning in English words over the last two centuries, computed by building separate embedding spaces for each decade from historical corpora like Google n-grams (Lin et al., 2012b) and the Corpus of Historical American English (Davies, 2012). |
6 | Vector Semantics and Embeddings | 6.1 | Semantic Properties of Embeddings | 6.10.1 | Embeddings and Historical Semantics | [Figure 6.17: A t-SNE visualization of the semantic change of 3 words in English using word2vec vectors. The modern sense of each word, and the grey context words, are computed from the most recent (modern) time-point embedding space. Earlier points are computed from earlier historical embedding spaces. The visualizations show the changes in the word gay from meanings related to "cheerful" or "frolicsome" to referring to homosexuality, the development of the modern "transmission" sense of broadcast from its original sense of sowing seeds, and the pejoration of the word awful as it shifted from meaning "full of awe" to meaning "terrible or appalling" (Hamilton et al., 2016b).] |
6 | Vector Semantics and Embeddings | 6.11 | Bias and Embeddings | nan | nan | In addition to their ability to learn word meaning from text, embeddings, alas, also reproduce the implicit biases and stereotypes that were latent in the text. As the prior section just showed, embeddings can roughly model relational similarity: 'queen' as the closest word to 'king' - 'man' + 'woman' implies the analogy man:woman::king:queen. But these same embedding analogies also exhibit gender stereotypes. For example Bolukbasi et al. (2016) find that the closest occupation to 'man' - 'computer programmer' + 'woman' in word2vec embeddings trained on news text is 'homemaker', and that the embeddings similarly suggest the analogy 'father' is to 'doctor' as 'mother' is to 'nurse'. This could result in what Crawford (2017) and Blodgett et al. (2020) call an allocational harm, when a system allocates resources (jobs or credit) unfairly to different groups. For example algorithms that use embeddings as part of a search for hiring potential programmers or doctors might thus incorrectly downweight documents with women's names. It turns out that embeddings don't just reflect the statistics of their input, but also amplify bias; gendered terms become more gendered in embedding space than they were in the input text statistics (Zhao et al. 2017, Ethayarajh et al. 2019b, Jia et al. 2020), and biases are more exaggerated than in actual labor employment statistics (Garg et al., 2018). |
6 | Vector Semantics and Embeddings | 6.11 | Bias and Embeddings | nan | nan | Embeddings also encode the implicit associations that are a property of human reasoning. The Implicit Association Test (Greenwald et al., 1998) measures people's associations between concepts (like 'flowers' or 'insects') and attributes (like 'pleasantness' and 'unpleasantness') by measuring differences in the latency with which they label words in the various categories. (Roughly speaking: participants who push a green button for 'flowers' and 'pleasant words' (love, laughter, pleasure) and a red button for 'insects' (flea, spider, mosquito) and 'unpleasant words' (abuse, hatred, ugly) are faster than in an incongruous condition where they push a red button for 'flowers' and 'unpleasant words' and a green button for 'insects' and 'pleasant words'.) Using such methods, people |
6 | Vector Semantics and Embeddings | 6.11 | Bias and Embeddings | nan | nan | in the United States have been shown to associate African-American names with unpleasant words (more than European-American names), male names more with mathematics and female names with the arts, and old people's names with unpleasant words (Greenwald et al. 1998, Nosek et al. 2002a, Nosek et al. 2002b). Caliskan et al. (2017) replicated all these findings of implicit associations using GloVe vectors and cosine similarity instead of human latencies. For example African-American names like 'Leroy' and 'Shaniqua' had a higher GloVe cosine with unpleasant words while European-American names ('Brad', 'Greg', 'Courtney') had a higher cosine with pleasant words. These problems with embeddings are an example of a representational harm (Crawford 2017, Blodgett et al. 2020), which is a harm caused by a system demeaning or even ignoring some social groups. Any embedding-aware algorithm that made use of word sentiment could thus exacerbate bias against African Americans. Recent research focuses on ways to try to remove these kinds of biases, for example by developing a transformation of the embedding space that removes gender stereotypes but preserves definitional gender (Bolukbasi et al. 2016, Zhao et al. 2017) or changing the training procedure (Zhao et al., 2018b). However, although these sorts of debiasing may reduce bias in embeddings, they do not eliminate it. Historical embeddings are also being used to measure biases in the past. Garg et al. (2018) used embeddings from historical texts to measure the association between embeddings for occupations and embeddings for names of various ethnicities or genders (for example the relative cosine similarity of women's names versus men's to occupation words like 'librarian' or 'carpenter') across the 20th century. They found that the cosines correlate with the empirical historical percentages of women or ethnic groups in those occupations. Historical embeddings also replicated old surveys of ethnic stereotypes; the tendency of experimental participants in 1933 to associate adjectives like 'industrious' or 'superstitious' with, e.g., Chinese ethnicity, correlates with the cosine between Chinese last names and those adjectives using embeddings trained on 1930s text. They also were able to document historical gender biases, such as the fact that embeddings for adjectives related to competence ('smart', 'wise', 'thoughtful', 'resourceful') had a higher cosine with male than female words, and showed that this bias has been slowly decreasing since 1960. We return in later chapters to this question about the role of bias in natural language processing. |
6 | Vector Semantics and Embeddings | 6.12 | Evaluating Vector Models | nan | nan | The most important evaluation metric for vector models is extrinsic evaluation on tasks, i.e., using vectors in an NLP task and seeing whether this improves performance over some other model. |
6 | Vector Semantics and Embeddings | 6.12 | Evaluating Vector Models | nan | nan | Nonetheless it is useful to have intrinsic evaluations. The most common metric is to test their performance on similarity, computing the correlation between an algorithm's word similarity scores and word similarity ratings assigned by humans. WordSim-353 (Finkelstein et al., 2002) is a commonly used set of ratings from 0 |
6 | Vector Semantics and Embeddings | 6.12 | Evaluating Vector Models | nan | nan | to 10 for 353 noun pairs; for example (plane, car) had an average score of 5.77. SimLex-999 (Hill et al., 2015) is a more difficult dataset that quantifies similarity (cup, mug) rather than relatedness (cup, coffee), and includes both concrete and abstract adjective, noun and verb pairs. The TOEFL dataset is a set of 80 questions, each consisting of a target word with 4 additional word choices; the task is to choose which is the correct synonym, as in the example: Levied is closest in meaning to: imposed, believed, requested, correlated (Landauer and Dumais, 1997). All of these datasets present words without context. |
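A minimal sketch of this kind of intrinsic evaluation is shown below: correlate the model's cosine scores with human ratings using a Spearman rank correlation (computed here from ranks directly, ignoring ties). The embedding table and the second human score in the usage comment are illustrative assumptions; only the 5.77 rating for (plane, car) comes from the text.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def spearman(x, y):
    """Spearman rank correlation (ignoring ties): Pearson correlation of the ranks."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return float(np.corrcoef(rx, ry)[0, 1])

def evaluate_similarity(pairs, human_scores, embeddings):
    """Correlate model cosines with human ratings for a list of (w1, w2) pairs."""
    model_scores = [cosine(embeddings[w1], embeddings[w2]) for w1, w2 in pairs]
    return spearman(np.array(model_scores), np.array(human_scores))

# usage sketch (assuming `emb` maps words to vectors; 9.0 is an illustrative rating):
# evaluate_similarity([("plane", "car"), ("cup", "mug")], [5.77, 9.0], emb)
```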
6 | Vector Semantics and Embeddings | 6.12 | Evaluating Vector Models | nan | nan | Slightly more realistic are intrinsic similarity tasks that include context. The Stanford Contextual Word Similarity (SCWS) dataset (Huang et al., 2012) and the Word-in-Context (WiC) dataset (Pilehvar and Camacho-Collados, 2019) offer richer evaluation scenarios. SCWS gives human judgments on 2,003 pairs of words in their sentential context, while WiC gives target words in two sentential contexts that are either in the same or different senses; see Section 18.5.3. The semantic textual similarity task (Agirre et al. 2012, Agirre et al. 2015) evaluates the performance of sentence-level similarity algorithms, consisting of a set of pairs of sentences, each pair with human-labeled similarity scores. |
6 | Vector Semantics and Embeddings | 6.12 | Evaluating Vector Models | nan | nan | Another task used for evaluation is the analogy task, discussed above, where the system has to solve problems of the form a is to b as a* is to b*, given a, b, and a* and having to find b* (Turney and Littman, 2005). A number of sets of tuples have been created for this task (Mikolov et al. 2013a, Mikolov et al. 2013c, Gladkova et al. 2016), covering morphology (city:cities::child:children), lexicographic relations (leg:table::spout:teapot) and encyclopedia relations (Beijing:China::Dublin:Ireland), some drawing from the SemEval-2012 Task 2 dataset of 79 different relations (Jurgens et al., 2012). |
6 | Vector Semantics and Embeddings | 6.12 | Evaluating Vector Models | nan | nan | All embedding algorithms suffer from inherent variability. For example because of randomness in the initialization and the random negative sampling, algorithms like word2vec may produce different results even from the same dataset, and individual documents in a collection may strongly impact the resulting embeddings (Tian et al. 2016, Hellrich and Hahn 2016, Antoniak and Mimno 2018). When embeddings are used to study word associations in particular corpora, therefore, it is best practice to train multiple embeddings with bootstrap sampling over documents and average the results (Antoniak and Mimno, 2018). |
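Below is a hedged sketch of this best practice, assuming a hypothetical `train_embeddings(docs)` function standing in for any embedding trainer (word2vec, GloVe, etc.): retrain on bootstrap samples of the documents, compute the cosine for each word pair of interest in each run, and average the association scores across runs.

```python
import random
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def bootstrap_association(documents, train_embeddings, word_pairs, n_runs=10, seed=0):
    """Average cosine associations over embeddings retrained on bootstrap samples.

    `train_embeddings` is any function mapping a list of documents to a dict
    {word: vector}; pairs missing from a given run are simply skipped.
    """
    rng = random.Random(seed)
    scores = {pair: [] for pair in word_pairs}
    for _ in range(n_runs):
        sample = [rng.choice(documents) for _ in documents]   # sample with replacement
        emb = train_embeddings(sample)
        for w1, w2 in word_pairs:
            if w1 in emb and w2 in emb:
                scores[(w1, w2)].append(cosine(emb[w1], emb[w2]))
    return {pair: float(np.mean(vals)) for pair, vals in scores.items() if vals}
```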
6 | Vector Semantics and Embeddings | 6.13 | Summary | nan | nan | • In vector semantics, a word is modeled as a vector-a point in high-dimensional space, also called an embedding. In this chapter we focus on static embeddings, in which each word is mapped to a fixed embedding. |
6 | Vector Semantics and Embeddings | 6.13 | Summary | nan | nan | • Vector semantic models fall into two classes: sparse and dense. In sparse models each dimension corresponds to a word in the vocabulary V and cells are functions of co-occurrence counts. The term-document matrix has a row for each word (term) in the vocabulary and a column for each document. The word-context or term-term matrix has a row for each (target) word in the vocabulary and a column for each context term in the vocabulary. Two sparse weightings are common: the tf-idf weighting, which weights each cell by its term frequency and inverse document frequency, and PPMI (positive pointwise mutual information), most common for word-context matrices. |
6 | Vector Semantics and Embeddings | 6.13 | Summary | nan | nan | • Dense vector models have dimensionality 50-1000. Word2vec algorithms like skip-gram are a popular way to compute dense embeddings. Skip-gram trains a logistic regression classifier to compute the probability that two words are 'likely to occur nearby in text'. This probability is computed from the dot product between the embeddings for the two words. • Skip-gram uses stochastic gradient descent to train the classifier, by learning embeddings that have a high dot product with embeddings of words that occur nearby and a low dot product with noise words. • Other important embedding algorithms include GloVe, a method based on ratios of word co-occurrence probabilities. • Whether using sparse or dense vectors, word and document similarities are computed by some function of the dot product between vectors. The cosine of two vectors-a normalized dot product-is the most popular such metric. |
6 | Vector Semantics and Embeddings | 6.14 | Bibliographical and Historical Notes | nan | nan | The idea of vector semantics arose out of research in the 1950s in three distinct fields: linguistics, psychology, and computer science, each of which contributed a fundamental aspect of the model. The idea that meaning is related to the distribution of words in context was widespread in linguistic theory of the 1950s, among distributionalists like Zellig Harris, Martin Joos, and J. R. Firth, and semioticians like Thomas Sebeok. As Joos (1950) put it, the linguist's "meaning" of a morpheme. . . is by definition the set of conditional probabilities of its occurrence in context with all other morphemes. |
6 | Vector Semantics and Embeddings | 6.14 | Bibliographical and Historical Notes | nan | nan | The idea that the meaning of a word might be modeled as a point in a multidimensional semantic space came from psychologists like Charles E. Osgood, who had been studying how people responded to the meaning of words by assigning values along scales like happy/sad or hard/soft. Osgood et al. (1957) proposed that the meaning of a word in general could be modeled as a point in a multidimensional Euclidean space, and that the similarity of meaning between two words could be modeled as the distance between these points in the space. |
6 | Vector Semantics and Embeddings | 6.14 | Bibliographical and Historical Notes | nan | nan | A final intellectual source in the 1950s and early 1960s was the field then called mechanical indexing, now known as information retrieval. In what became known as the vector space model for information retrieval (Salton 1971, Sparck Jones 1986), researchers demonstrated new ways to define the meaning of words in terms of vectors (Switzer, 1965), and refined methods for word similarity based on measures of statistical association between words like mutual information (Giuliano, 1965) and idf (Sparck Jones, 1972), and showed that the meaning of documents could be represented in the same vector spaces used for words. Some of the philosophical underpinning of the distributional way of thinking came from the late writings of the philosopher Wittgenstein, who was skeptical of the possibility of building a completely formal theory of meaning definitions for each word, suggesting instead that "the meaning of a word is its use in the language" (Wittgenstein, 1953, PI 43). That is, instead of using some logical language to define each word, or drawing on denotations or truth values, Wittgenstein's idea is that we should define a word by how it is used by people in speaking and understanding in their day-to-day interactions, thus prefiguring the movement toward embodied and experiential models in linguistics and NLP (Glenberg and Robertson 2000, Lake and Murphy 2021, Bisk et al. 2020, Bender and Koller 2020). |
6 | Vector Semantics and Embeddings | 6.14 | Bibliographical and Historical Notes | nan | nan | More distantly related is the idea of defining words by a vector of discrete features, which has roots at least as far back as Descartes and Leibniz (Wierzbicka 1992, Wierzbicka 1996) . By the middle of the 20th century, beginning with the work of Hjelmslev (Hjelmslev, 1969) (originally 1943) and fleshed out in early models of generative grammar (Katz and Fodor, 1963) , the idea arose of representing meaning with semantic features, symbols that represent some sort of primitive meaning. |
6 | Vector Semantics and Embeddings | 6.14 | Bibliographical and Historical Notes | nan | nan | For example words like hen, rooster, or chick, have something in common (they all describe chickens) and something different (their age and sex), representable as: |
6 | Vector Semantics and Embeddings | 6.14 | Bibliographical and Historical Notes | nan | nan | hen [+female, +chicken, +adult]; rooster [−female, +chicken, +adult]; chick [+chicken, −adult] |
6 | Vector Semantics and Embeddings | 6.14 | Bibliographical and Historical Notes | nan | nan | The dimensions used by vector models of meaning to define words, however, are only abstractly related to this idea of a small fixed number of hand-built dimensions. Nonetheless, there has been some attempt to show that certain dimensions of embedding models do contribute some specific compositional aspect of meaning like these early semantic features. |
6 | Vector Semantics and Embeddings | 6.14 | Bibliographical and Historical Notes | nan | nan | The use of dense vectors to model word meaning, and indeed the term embedding, grew out of the latent semantic indexing (LSI) model (Deerwester et al., 1988), recast as LSA (latent semantic analysis) (Deerwester et al., 1990). In LSA, singular value decomposition (SVD) is applied to a term-document matrix (each cell weighted by log frequency and normalized by entropy), and then the first 300 dimensions are used as the LSA embedding. Singular Value Decomposition (SVD) is a method for finding the most important dimensions of a data set, those dimensions along which the data varies the most. LSA was then quickly and widely applied: as a cognitive model (Landauer and Dumais, 1997), and for tasks like spell checking (Jones and Martin, 1997), language modeling (Bellegarda 1997, Coccaro and Jurafsky 1998, Bellegarda 2000), morphology induction (Schone and Jurafsky 2000, Schone and Jurafsky 2001b), multiword expressions (MWEs) (Schone and Jurafsky, 2001a), and essay grading (Rehder et al., 1998). Related models were simultaneously developed and applied to word sense disambiguation by Schütze (1992b). LSA also led to the earliest use of embeddings to represent words in a probabilistic classifier, in the logistic regression document router of Schütze et al. (1995). The idea of SVD on the term-term matrix (rather than the term-document matrix) as a model of meaning for NLP was proposed soon after LSA by Schütze (1992b). Schütze applied the low-rank (97-dimensional) embeddings produced by SVD to the task of word sense disambiguation, analyzed the resulting semantic space, and also suggested possible techniques like dropping high-order dimensions. See Schütze (1997a). |
6 | Vector Semantics and Embeddings | 6.14 | Bibliographical and Historical Notes | nan | nan | A number of alternative matrix models followed on from the early SVD work, including Probabilistic Latent Semantic Indexing (PLSI) (Hofmann, 1999), Latent Dirichlet Allocation (LDA) (Blei et al., 2003) , and Non-negative Matrix Factorization (NMF) (Lee and Seung, 1999). |
6 | Vector Semantics and Embeddings | 6.14 | Bibliographical and Historical Notes | nan | nan | The LSA community seems to have first used the word "embedding" in Landauer et al. (1997), in a variant of its mathematical meaning as a mapping from one space or mathematical structure to another. In LSA, the word embedding seems to have described the mapping from the space of sparse count vectors to the latent space of SVD dense vectors. Although the word thus originally meant the mapping from one space to another, it has metonymically shifted to mean the resulting dense vector in the latent space, and it is in this sense that we currently use the word. |
6 | Vector Semantics and Embeddings | 6.14 | Bibliographical and Historical Notes | nan | nan | By the next decade, Bengio et al. (2003) and Bengio et al. (2006) showed that neural language models could also be used to develop embeddings as part of the task of word prediction, a line of work extended by Collobert and Weston (2007) and later neural models. See Manning et al. (2008) for a deeper understanding of the role of vectors in information retrieval, including how to compare queries with documents, more details on tf-idf, and issues of scaling to very large datasets. See Kim (2019) for a clear and comprehensive tutorial on word2vec. Cruse (2004) is a useful introductory linguistic text of lexical semantics. |
7 | Neural Networks and Neural Language Models | nan | nan | nan | nan | Neural networks are a fundamental computational tool for language processing, and a very old one. They are called neural because their origins lie in the McCulloch-Pitts neuron (McCulloch and Pitts, 1943 ), a simplified model of the human neuron as a kind of computing element that could be described in terms of propositional logic. But the modern use in language processing no longer draws on these early biological inspirations. |
7 | Neural Networks and Neural Language Models | nan | nan | nan | nan | Instead, a modern neural network is a network of small computing units, each of which takes a vector of input values and produces a single output value. In this chapter we introduce the neural net applied to classification. The architecture we introduce is called a feedforward network because the computation proceeds iteratively from one layer of units to the next. The use of modern neural nets is often called deep learning, because modern networks are often deep (have many layers). |