{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:43:26.142147Z"
},
"title": "Unsupervised Sentence-embeddings by Manifold Approximation and Projection",
"authors": [
{
"first": "Subhradeep",
"middle": [],
"last": "Kayal",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The concept of unsupervised universal sentence encoders has gained traction recently, wherein pre-trained models generate effective task-agnostic fixed-dimensional representations for phrases, sentences and paragraphs. Such methods are of varying complexity, from simple weighted-averages of word vectors to complex language-models based on bidirectional transformers. In this work we propose a novel technique to generate sentenceembeddings in an unsupervised fashion by projecting the sentences onto a fixed-dimensional manifold with the objective of preserving local neighbourhoods in the original space. To delineate such neighbourhoods we experiment with several set-distance metrics, including the recently proposed Word Mover's distance, while the fixed-dimensional projection is achieved by employing a scalable and efficient manifold approximation method rooted in topological data analysis. We test our approach, which we term EMAP or Embeddings by Manifold Approximation and Projection, on six publicly available text-classification datasets of varying size and complexity. Empirical results show that our method consistently performs similar to or better than several alternative state-of-theart approaches.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "The concept of unsupervised universal sentence encoders has gained traction recently, wherein pre-trained models generate effective task-agnostic fixed-dimensional representations for phrases, sentences and paragraphs. Such methods are of varying complexity, from simple weighted-averages of word vectors to complex language-models based on bidirectional transformers. In this work we propose a novel technique to generate sentenceembeddings in an unsupervised fashion by projecting the sentences onto a fixed-dimensional manifold with the objective of preserving local neighbourhoods in the original space. To delineate such neighbourhoods we experiment with several set-distance metrics, including the recently proposed Word Mover's distance, while the fixed-dimensional projection is achieved by employing a scalable and efficient manifold approximation method rooted in topological data analysis. We test our approach, which we term EMAP or Embeddings by Manifold Approximation and Projection, on six publicly available text-classification datasets of varying size and complexity. Empirical results show that our method consistently performs similar to or better than several alternative state-of-theart approaches.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Dense vector representation of words, or wordembeddings, form the backbone of most modern NLP applications and can be constructed using context-free (Bengio et al., 2003; Mikolov et al., 2013; Pennington et al., 2014) or contextualized methods (Peters et al., 2018; Devlin et al., 2019) .",
"cite_spans": [
{
"start": 149,
"end": 170,
"text": "(Bengio et al., 2003;",
"ref_id": "BIBREF4"
},
{
"start": 171,
"end": 192,
"text": "Mikolov et al., 2013;",
"ref_id": "BIBREF24"
},
{
"start": 193,
"end": 217,
"text": "Pennington et al., 2014)",
"ref_id": "BIBREF27"
},
{
"start": 244,
"end": 265,
"text": "(Peters et al., 2018;",
"ref_id": "BIBREF28"
},
{
"start": 266,
"end": 286,
"text": "Devlin et al., 2019)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "On sentence-embeddings",
"sec_num": "1.1"
},
{
"text": "Given that practical systems often benefit from having representations for sentences and documents, in addition to word-embeddings (Palangi et al., 2016; Yan et al., 2016) , a simple trick is to use the weighted average over some or all of the embeddings of words in a sentence or document. Although sentence-embeddings constructed this way often lose information because of the disregard for word-order during averaging, they have been found to be surprisingly performant (Aldarmaki and Diab, 2018) .",
"cite_spans": [
{
"start": 131,
"end": 153,
"text": "(Palangi et al., 2016;",
"ref_id": "BIBREF25"
},
{
"start": 154,
"end": 171,
"text": "Yan et al., 2016)",
"ref_id": "BIBREF39"
},
{
"start": 473,
"end": 499,
"text": "(Aldarmaki and Diab, 2018)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "On sentence-embeddings",
"sec_num": "1.1"
},
{
"text": "More sophisticated methods focus on jointly learning the embeddings of sentences and words using models similar to Word2Vec (Le and Mikolov, 2014; Chen, 2017) , using encoder-decoder approaches that reconstruct the surrounding sentences of an encoded passage (Kiros et al., 2015) , or training bi-directional LSTM models on large external datasets (Conneau et al., 2017) . Meaningful sentence-embeddings have also been constructed by fine-tuning pre-trained bidirectional transformers (Devlin et al., 2019 ) using a Siamese architecture (Reimers and Gurevych, 2019) .",
"cite_spans": [
{
"start": 124,
"end": 146,
"text": "(Le and Mikolov, 2014;",
"ref_id": "BIBREF18"
},
{
"start": 147,
"end": 158,
"text": "Chen, 2017)",
"ref_id": "BIBREF7"
},
{
"start": 259,
"end": 279,
"text": "(Kiros et al., 2015)",
"ref_id": "BIBREF16"
},
{
"start": 348,
"end": 370,
"text": "(Conneau et al., 2017)",
"ref_id": "BIBREF8"
},
{
"start": 485,
"end": 505,
"text": "(Devlin et al., 2019",
"ref_id": "BIBREF11"
},
{
"start": 537,
"end": 565,
"text": "(Reimers and Gurevych, 2019)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "On sentence-embeddings",
"sec_num": "1.1"
},
{
"text": "In parallel to the approaches mentioned above, a stream of methods have emerged recently which exploit the inherent geometric properties of the structure of sentences, by treating them as sets or sequences of word-embeddings. For example, Arora et al. (2017) propose the construction of sentenceembeddings based on weighted word-embedding averages with the removal of the dominant singular vector, while R\u00fcckl\u00e9 et al. (2018) produce sentenceembeddings by concatenating several power-means of word-embeddings corresponding to a sentence. Very recently, spectral decomposition techniques were used to create sentence-embeddings, which produced state-of-the-art results when used in concatenation with averaging (Kayal and Tsatsaronis, 2019; Almarwani et al., 2019) .",
"cite_spans": [
{
"start": 239,
"end": 258,
"text": "Arora et al. (2017)",
"ref_id": "BIBREF2"
},
{
"start": 404,
"end": 424,
"text": "R\u00fcckl\u00e9 et al. (2018)",
"ref_id": "BIBREF33"
},
{
"start": 709,
"end": 738,
"text": "(Kayal and Tsatsaronis, 2019;",
"ref_id": "BIBREF15"
},
{
"start": 739,
"end": 762,
"text": "Almarwani et al., 2019)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "On sentence-embeddings",
"sec_num": "1.1"
},
{
"text": "Our work is most related to that of Wu et al. (2018) who use Random Features (Rahimi and Recht, 2008) to learn document embeddings which preserve the properties of an explicitly-defined kernel based on the Word Mover's Distance (Kusner et al., 2015) . Where Wu et al. predefine the nature of the kernel, our proposed approach can learn the similarity-preserving manifold for a given setdistance metric, offering increased flexibility.",
"cite_spans": [
{
"start": 36,
"end": 52,
"text": "Wu et al. (2018)",
"ref_id": "BIBREF38"
},
{
"start": 77,
"end": 101,
"text": "(Rahimi and Recht, 2008)",
"ref_id": "BIBREF29"
},
{
"start": 228,
"end": 249,
"text": "(Kusner et al., 2015)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "On sentence-embeddings",
"sec_num": "1.1"
},
{
"text": "A simple way to form sentence-embeddings is to compute the dimension-wise arithmetic mean of the embeddings of the words in a particular sentence. Even though this approach incurs information loss by disregarding the fact that sentences are sequences (or, at the very least, sets) of word vectors, it works well in practice. This already provides an indication that there is more information in the sentences to be exploited. Kusner et al. (2015) aim to use more of the information available in a sentence by representing sentences as a weighted point cloud of embedded words. Rooted in transportation theory, their Word Mover's distance (WMD) is the minimum amount of distance that the embedded words of a sentence need to travel to reach the embedded words of another sentence. The approach achieves state-of-the-art results for sentence classification when combined with a k-NN classifier (Cover and Hart, 1967) . Since their work, other distance metrics have been suggested (Singh et al., 2019; Wang et al., 2019) , also motivated by how transportation problems are solved.",
"cite_spans": [
{
"start": 426,
"end": 446,
"text": "Kusner et al. (2015)",
"ref_id": "BIBREF17"
},
{
"start": 892,
"end": 914,
"text": "(Cover and Hart, 1967)",
"ref_id": "BIBREF10"
},
{
"start": 978,
"end": 998,
"text": "(Singh et al., 2019;",
"ref_id": "BIBREF34"
},
{
"start": 999,
"end": 1017,
"text": "Wang et al., 2019)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation and contributions",
"sec_num": "1.2"
},
{
"text": "Considering that sentences are sets of word vectors, a large variety of methods exist in literature that can be used to calculate the distance between two sets, in addition to the ones based on transport theory. Thus, as a first contribution, we compare alternative metrics to measure distances between sentences. The metrics we suggest, namely the Hausdorff distance and the Energy distance, are intuitive to explain and reasonably fast to calculate. The choice of these particular distances are motivated by their differing origins and their general usefulness in the respective application domains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation and contributions",
"sec_num": "1.2"
},
{
"text": "Once calculated, these distances can be used in conjunction with k-nearest neighbours for classification tasks, and k-means for clustering tasks. However, these learning algorithms are rather simplistic and the state-of-the-art machine learning algorithms require a fixed-length feature representation as input to them. Moreover, having fixedlength representations for sentences (sentenceembeddings) also provides a large degree of flexibility for downstream tasks, as compared to hav-ing only relative distances between them. With this as motivation, the second contribution of this work is to produce sentence-embeddings that approximately preserve the topological properties of the original sentence space. We propose to do so using an efficient scalable manifold-learning algorithm termed UMAP (McInnes et al., 2018) from topological data analysis. Empirical results show that this process yields sentence-embeddings that deliver near state-of-the-art classification performance with a simple classifier.",
"cite_spans": [
{
"start": 793,
"end": 820,
"text": "UMAP (McInnes et al., 2018)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation and contributions",
"sec_num": "1.2"
},
{
"text": "In this work, we experiment with three different distance measures to determine the distance between sentences. The first measure (Energy distance) is motivated by a useful linkage criterion from hierarchical clustering (Rokach and Maimon, 2005) , while the second one (Hausdorff distance) is an important metric from algebraic topology that has been successfully used in document indexing (Tsatsaronis et al., 2012) . The final metric (Word Mover's distance) is a recent extension of an existing distance measure between distributions, that is particularly suited for use with word-embeddings (Kusner et al., 2015) .",
"cite_spans": [
{
"start": 220,
"end": 245,
"text": "(Rokach and Maimon, 2005)",
"ref_id": "BIBREF31"
},
{
"start": 390,
"end": 416,
"text": "(Tsatsaronis et al., 2012)",
"ref_id": "BIBREF36"
},
{
"start": 594,
"end": 615,
"text": "(Kusner et al., 2015)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology 2.1 Calculating distances",
"sec_num": "2"
},
{
"text": "Prior to defining the distances that have been used in this work, we first proceed to outline the notations that we will be using to describe them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology 2.1 Calculating distances",
"sec_num": "2"
},
{
"text": "Let W \u2208 R N \u00d7d denote a word-embedding matrix, such that the vocabulary corresponding to it consists of N words, and each word in it, w i \u2208 R d , is d-dimensional. This word-embedding matrix and its constituent words may come from pre-trained representations such as Word2Vec (Mikolov et al., 2013) or GloVe (Pennington et al., 2014) , in which case d = 300.",
"cite_spans": [
{
"start": 276,
"end": 298,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF24"
},
{
"start": 308,
"end": 333,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Notations",
"sec_num": "2.1.1"
},
{
"text": "Let S be a set of sentences and s, s be two sentences from this set. Each such sentence can be viewed as a set of word-embeddings, {w} \u2208 s. Additionally, let the length of a sentence, s, be denoted as |s|, and the cardinality of the set, S , be denoted by |S |.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notations",
"sec_num": "2.1.1"
},
{
"text": "Let e(w i , w j ) denote the distance between two word-embeddings, w i , w j . In the context of this paper, this distance is Euclidean:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notations",
"sec_num": "2.1.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "e(w i , w j ) = w i \u2212 w j 2",
"eq_num": "(1)"
}
],
"section": "Notations",
"sec_num": "2.1.1"
},
{
"text": "Finally, D(s, s ) denotes the distance between two sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notations",
"sec_num": "2.1.1"
},
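{
    "text": "To make the notation concrete, the following is a minimal sketch (not taken from the paper) of how a sentence can be turned into a set of word vectors and how Equation 1 is evaluated; embeddings is assumed to be a dictionary mapping words to pre-trained 300-dimensional vectors, e.g. from Word2Vec or GloVe.

import numpy as np

def sentence_to_vectors(sentence, embeddings):
    # A sentence is treated as a set of d-dimensional word-embeddings;
    # out-of-vocabulary words are simply dropped in this sketch.
    return np.array([embeddings[w] for w in sentence.lower().split() if w in embeddings])

def e(w_i, w_j):
    # Equation 1: Euclidean distance between two word-embeddings.
    return np.linalg.norm(w_i - w_j)",
    "cite_spans": [],
    "ref_spans": [],
    "eq_spans": [],
    "section": "Notations",
    "sec_num": "2.1.1"
},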
{
"text": "Energy distance is a statistical distance between probability distributions, based on the inter and intra-distribution variance, that satisfies all the criteria of being a metric (Sz\u00e9kely and Rizzo, 2013).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Energy distance",
"sec_num": "2.1.2"
},
{
"text": "Using the notations defined earlier, we write it as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Energy distance",
"sec_num": "2.1.2"
},
{
"text": "D(s, s ) = 2 |s||s | w i \u2208s w j \u2208s e(w i , w j ) \u2212 1 |s| 2 w i \u2208s w j \u2208s e(w i , w j ) \u2212 1 |s | 2 w i \u2208s w j \u2208s e(w i , w j ) (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Energy distance",
"sec_num": "2.1.2"
},
{
"text": "The original conception of the energy distance was inspired by gravitational potential energy of celestial objects. Looking closely at Equation 2, it can be quickly observed that it has two parts: the first term resembles the attraction or repulsion between two objects (or in our case, sentences), while the second and the third term indicate the self-coherence of the respective objects. As shown by Sz\u00e9kely and Rizzo (2013), energy distance is scale equivariant, which would make it sensitive to contextual changes in sentences, and therefore make it useful in NLP applications.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Energy distance",
"sec_num": "2.1.2"
},
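{
    "text": "A minimal sketch of Equation 2, assuming each sentence has already been converted to an (n, d) array of word vectors; the dcor package referenced later in the paper offers an equivalent routine.

import numpy as np
from scipy.spatial.distance import cdist

def energy_distance(s, s_prime):
    # Mean pairwise Euclidean distances between and within the two sentences.
    between = cdist(s, s_prime).mean()              # (1 / |s||s'|) * sum of cross terms
    within_s = cdist(s, s).mean()                   # (1 / |s|^2) * sum of within-s terms
    within_s_prime = cdist(s_prime, s_prime).mean() # (1 / |s'|^2) * sum of within-s' terms
    return 2.0 * between - within_s - within_s_prime   # Equation 2",
    "cite_spans": [],
    "ref_spans": [],
    "eq_spans": [],
    "section": "Energy distance",
    "sec_num": "2.1.2"
},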
{
"text": "Given two subsets of a metric space, the Hausdorff distance is the maximum distance of the points in one subset to the nearest point in the other. A significant work has gone into making it fast to calculate (Atallah, 1983) so that it can be applied to real-world problems, such as shape-matching in computer vision (Dubuisson and Jain, 1994) .",
"cite_spans": [
{
"start": 208,
"end": 223,
"text": "(Atallah, 1983)",
"ref_id": "BIBREF3"
},
{
"start": 316,
"end": 342,
"text": "(Dubuisson and Jain, 1994)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hausdorff distance",
"sec_num": "2.1.3"
},
{
"text": "To calculate it, the distance between each point from one set and the closest point from the other set is determined first. Then, the Hausdorff distance is calculated as the maximal point-wise distance. Considering sentences {s, s } as subsets of wordembedding space, R d\u00d7N , the directed Hausdorff distance can be given as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hausdorff distance",
"sec_num": "2.1.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h(s, s ) = max w i \u2208s min w j \u2208s e(w i , w j )",
"eq_num": "(3)"
}
],
"section": "Hausdorff distance",
"sec_num": "2.1.3"
},
{
"text": "such that the symmetric Hausdorff distance is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hausdorff distance",
"sec_num": "2.1.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "D(s, s ) = max{h(s, s ), h(s , s)}",
"eq_num": "(4)"
}
],
"section": "Hausdorff distance",
"sec_num": "2.1.3"
},
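{
    "text": "A small sketch of Equations 3 and 4 on two (n, d) arrays of word vectors; in practice scipy.spatial.distance.directed_hausdorff, which the paper uses, performs the same computation with early-exit optimisations.

import numpy as np
from scipy.spatial.distance import cdist

def directed_hausdorff_distance(s, s_prime):
    # Equation 3: for every word in s, find its nearest word in s',
    # then keep the largest of these nearest-neighbour distances.
    return cdist(s, s_prime).min(axis=1).max()

def hausdorff_distance(s, s_prime):
    # Equation 4: symmetrize by taking the maximum of the two directed distances.
    return max(directed_hausdorff_distance(s, s_prime),
               directed_hausdorff_distance(s_prime, s))",
    "cite_spans": [],
    "ref_spans": [],
    "eq_spans": [],
    "section": "Hausdorff distance",
    "sec_num": "2.1.3"
},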
{
"text": "2.1.4 Word Mover's distance",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hausdorff distance",
"sec_num": "2.1.3"
},
{
"text": "In addition to the representation of a sentence as a set of word-embeddings, a sentence s can also be represented as a N -dimensional normalized termfrequency vector, where n s i is the number of times word w i occurs in sentence s normalized by the total number of words in s:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hausdorff distance",
"sec_num": "2.1.3"
},
{
"text": "n s i = c s i k=N k=1 c s k (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hausdorff distance",
"sec_num": "2.1.3"
},
{
"text": "where, c s i is the number of times word w i appears in sentence s.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hausdorff distance",
"sec_num": "2.1.3"
},
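{
    "text": "A short sketch of Equation 5, computing the normalised term-frequency (nBOW) vector of a tokenised sentence over a fixed vocabulary; the vocabulary ordering used here is only an illustrative assumption.

from collections import Counter

def nbow_vector(sentence_tokens, vocabulary):
    # Equation 5: c^s_i normalised by the total count of in-vocabulary words in s.
    counts = Counter(sentence_tokens)
    total = sum(counts[w] for w in vocabulary)
    return [counts[w] / total if total else 0.0 for w in vocabulary]",
    "cite_spans": [],
    "ref_spans": [],
    "eq_spans": [],
    "section": "Hausdorff distance",
    "sec_num": "2.1.3"
},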
{
"text": "The goal of the Word Mover's distance (WMD) (Kusner et al., 2015) is to construct a sentence similarity metric based on the distances between the individual words within each sentence, given by Equation 1. In order to calculate the distance between two sentences, WMD introduces a transport matrix, T \u2208 R N \u00d7N , such that each element in it, T ij , denotes how much of n s i should be transported to n s j . Then, the WMD between two sentences is given as the solution of the following minimization problem:",
"cite_spans": [
{
"start": 44,
"end": 65,
"text": "(Kusner et al., 2015)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hausdorff distance",
"sec_num": "2.1.3"
},
{
"text": "D(s, s ) = min T \u22650 N i,j=1 T ij e(i, j) subject to, N j=1 T ij = n s i and N i=1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hausdorff distance",
"sec_num": "2.1.3"
},
{
"text": "T ij = n s j (6) Thus, WMD between two sentences is defined as the minimum distance required to transport the words from one sentence to another.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hausdorff distance",
"sec_num": "2.1.3"
},
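{
    "text": "Equation 6 is a small linear program; the following is a compact, unoptimised sketch using scipy's generic LP solver, whereas the paper relies on the specialised implementation released by Kusner et al. Here s and s_prime are arrays of word vectors for the words present in each sentence, and n_s, n_s_prime are their nBOW weights (each summing to one).

import numpy as np
from scipy.optimize import linprog
from scipy.spatial.distance import cdist

def word_movers_distance(s, s_prime, n_s, n_s_prime):
    n, m = len(n_s), len(n_s_prime)
    cost = cdist(s, s_prime).ravel()          # e(w_i, w_j), flattened row-major
    # Row constraints: sum_j T_ij = n_s[i]
    A_rows = np.zeros((n, n * m))
    for i in range(n):
        A_rows[i, i * m:(i + 1) * m] = 1.0
    # Column constraints: sum_i T_ij = n_s_prime[j]
    A_cols = np.zeros((m, n * m))
    for j in range(m):
        A_cols[j, j::m] = 1.0
    res = linprog(cost,
                  A_eq=np.vstack([A_rows, A_cols]),
                  b_eq=np.concatenate([n_s, n_s_prime]),
                  bounds=(0, None), method='highs')
    return res.fun                            # minimal transport cost, Equation 6",
    "cite_spans": [],
    "ref_spans": [],
    "eq_spans": [],
    "section": "Hausdorff distance",
    "sec_num": "2.1.3"
},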
{
"text": "In this work, we propose to construct sentenceembeddings which preserve the neighbourhood around sentences delineated by the relative distances between them. We posit that preserving the local neighbourhoods will serve as a proxy for preserving the original topological properties.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating neighbourhood-preserving embeddings via non-linear manifold-learning",
"sec_num": "2.2"
},
{
"text": "In order to learn a topology-preserving fixeddimensional manifold, we seek inspiration from methods in non-linear dimensionality-reduction (Lee and Verleysen, 2007) and topological data analysis literature (Carlsson, 2009) . When broadly categorized, these techniques consist of methods, such as Locally Linear Embedding (Roweis and Saul, 2000) , that preserve local distances between points, or those like Stochastic Neighbour Embedding (Hinton and Roweis, 2003; van der Maaten and Hinton, 2008) that preserve the conditional probabilities of points being neighbours. However, existing manifold-learning algorithms suffer from two shortcomings: they are computationally expensive and are often restricted in the number of output dimensions. In our work we use a method termed Uniform Manifold Approximation and Projection (UMAP) , which is scalable and has no computational restrictions on the output embedding dimension.",
"cite_spans": [
{
"start": 139,
"end": 164,
"text": "(Lee and Verleysen, 2007)",
"ref_id": "BIBREF20"
},
{
"start": 206,
"end": 222,
"text": "(Carlsson, 2009)",
"ref_id": "BIBREF6"
},
{
"start": 333,
"end": 344,
"text": "Saul, 2000)",
"ref_id": "BIBREF32"
},
{
"start": 438,
"end": 463,
"text": "(Hinton and Roweis, 2003;",
"ref_id": "BIBREF14"
},
{
"start": 464,
"end": 496,
"text": "van der Maaten and Hinton, 2008)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generating neighbourhood-preserving embeddings via non-linear manifold-learning",
"sec_num": "2.2"
},
{
"text": "The building block of UMAP is a particular type of a simplicial complex, known as the Vietoris-Rips complex. Recalling that a k-simplex is a kdimensional polytope which is the convex hull of its k + 1 vertices, and a simplicial complex is a set of simplices of various orders, the Vietoris-Rips simplicial complex is a collection of 0 and 1-simplices. In essence, this is a means to building a simple neighbourhood graph by connecting the original data points. On the left is the original sentencespace, approximated by the nearest neighbours graph formed by the Vietoris-Rips complex. Instead of points and edges, our simplicial complex has sets of points and edges between them, formed by one of the distance metrics mentioned in Section 2.1. In this example, four sentences, denoted by S1 through S4, form two simplices, with S4 being a 0-simplex. The sentences are denoted by colored ellipses, while the high-dimensional embedding of each word in a sentence is depicted by a point having the same color as the parent sentence ellipse. The UMAP algorithm is then employed to find a similarity-preserving Euclidean embedding-space, shown on the right, by minimizing the cross-entropy between the two representations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating neighbourhood-preserving embeddings via non-linear manifold-learning",
"sec_num": "2.2"
},
{
"text": "A key difference, in this work, to the original formulation is that an individual data sample (i.e., the vertex of a simplex) is not a d-dimensional point but a set of d-dimensional words that make up a sentence. By using any of the distance metrics defined in Section 2.1, it is possible to construct the simplicial complex that UMAP needs in order to build the topological representation of the original sentence space. An illustration can be found in Figure 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 454,
"end": 462,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Generating neighbourhood-preserving embeddings via non-linear manifold-learning",
"sec_num": "2.2"
},
{
"text": "As per the formulation laid out for UMAP, the similarity between sentences s and s is defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating neighbourhood-preserving embeddings via non-linear manifold-learning",
"sec_num": "2.2"
},
{
"text": "v s |s = exp \u2212(D(s, s ) \u2212 \u03c1 s ) \u03c3 s (7)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating neighbourhood-preserving embeddings via non-linear manifold-learning",
"sec_num": "2.2"
},
{
"text": "where \u03c3 s is a normalisation factor selected based on an empirical heuristic (See Algorithm 3 in the work of McInnes et al. 2018), D(s, s ) is the distance between two sentences as outlined by Equation 2, 4 or 6, and \u03c1 s is the distance of s from its nearest neighbour. It is worth mentioning that for scalability, v s |s is calculated only for predefined set of approximate nearest neighbours, which is a userdefined input parameter to the UMAP algorithm, using the efficient nearest-neighbour descent algorithm (Dong et al., 2011) . The similarity depicted in Equation 7 is asymmetric, and symmetrization is carried out by a fuzzy set union using the probabilistic t-conorm:",
"cite_spans": [
{
"start": 513,
"end": 532,
"text": "(Dong et al., 2011)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generating neighbourhood-preserving embeddings via non-linear manifold-learning",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "v ss = (v s |s + v s|s ) \u2212 v s |s v s|s",
"eq_num": "(8)"
}
],
"section": "Generating neighbourhood-preserving embeddings via non-linear manifold-learning",
"sec_num": "2.2"
},
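{
    "text": "A simplified sketch, under stated assumptions, of Equations 7 and 8 applied to a dense sentence-to-sentence distance matrix D; UMAP itself chooses \u03c3_s by a binary search so that the effective neighbourhood size matches a user-supplied value, whereas here the mean neighbour distance is used as a stand-in.

import numpy as np

def fuzzy_graph(D, n_neighbours):
    n = D.shape[0]
    V = np.zeros_like(D, dtype=float)
    for s in range(n):
        order = np.argsort(D[s])
        neighbours = order[1:n_neighbours + 1]        # skip the sentence itself
        rho = D[s, neighbours[0]]                     # distance to the nearest neighbour
        sigma = max(D[s, neighbours].mean() - rho, 1e-12)
        V[s, neighbours] = np.exp(-(D[s, neighbours] - rho) / sigma)   # Equation 7
    return V + V.T - V * V.T                          # Equation 8 (probabilistic t-conorm)",
    "cite_spans": [],
    "ref_spans": [],
    "eq_spans": [],
    "section": "Generating neighbourhood-preserving embeddings via non-linear manifold-learning",
    "sec_num": "2.2"
},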
{
"text": "As UMAP builds a Vietoris-Rips complex governed by Equation 7, it can take advantage of the nerve theorem (Borsuk, 1948) , which makes this construction a homotope of the original topological space. In our case, this implies that we can build a simple nearest neighbours graph from a given corpus of sentences, which has certain guarantees of approximating the original topological space, as defined by the aforementioned distance metrics.",
"cite_spans": [
{
"start": 106,
"end": 120,
"text": "(Borsuk, 1948)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generating neighbourhood-preserving embeddings via non-linear manifold-learning",
"sec_num": "2.2"
},
{
"text": "The next step is to define a similar nearest neighbours graph in a fixed low-dimensional Euclidean space. Let s E , s E \u2208 R d E be the corresponding d E -dimensional sentence-embeddings. Then the low dimensional similarities are given by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating neighbourhood-preserving embeddings via non-linear manifold-learning",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "w ss = (1 + a||s E \u2212 s E || 2 b )) \u22121",
"eq_num": "(9)"
}
],
"section": "Generating neighbourhood-preserving embeddings via non-linear manifold-learning",
"sec_num": "2.2"
},
{
"text": "where, ||s E \u2212 s E || is the Euclidean distance between the d E -dimensional embeddings, and setting a, b are input-parameters, set to 1.929 and 0.791, respectively, as per the original implementation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating neighbourhood-preserving embeddings via non-linear manifold-learning",
"sec_num": "2.2"
},
{
"text": "Algorithm 1: Constructing sentence-Embeddings by Manifold Approximation and Projection: EMAP Data: A pre-trained word-embeddings matrix, W ; a set of sentences, S ; desired dimension of the generated sentence-embeddings, d E Result: A set of sentence-embeddings, {s E } \u2208 S E 1 Calculate the distance matrix for the entire set of sentences, such that the distance between any two sentences is given by Equation 2, 4 or 6; 2 Using this distance matrix, calculate the nearest neighbour graph between all input sentences, given by Equations 7 and 8; 3 Calculate the initial guess for the low dimensional embeddings, S E \u2208 R |S |\u00d7D E , using the graph laplacian of the original nearest neighbour graph; 4 Until convergence, minimize the cross-entropy between the two representations (Equation 10) using stochastic gradient descent; 5 Return the set of",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating neighbourhood-preserving embeddings via non-linear manifold-learning",
"sec_num": "2.2"
},
{
"text": "d E -dimensional sentence-embeddings, S E ;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating neighbourhood-preserving embeddings via non-linear manifold-learning",
"sec_num": "2.2"
},
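{
    "text": "Steps 2 to 5 of Algorithm 1 correspond directly to what the umap-learn package does when handed a precomputed distance matrix; the sketch below assumes the distance matrix from step 1 has already been built, and the default value chosen for d_E is only illustrative.

import umap  # umap-learn

def emap_embeddings(distance_matrix, d_E=128, n_neighbours=15, min_dist=0.1, spread=1.0):
    # metric='precomputed' makes UMAP build its neighbourhood graph (Equations 7-8)
    # directly from the sentence-to-sentence distances; the spectral (graph-Laplacian)
    # initialisation and the SGD minimisation of Equation 10 are its defaults.
    reducer = umap.UMAP(n_neighbors=n_neighbours,
                        n_components=d_E,
                        min_dist=min_dist,
                        spread=spread,
                        metric='precomputed')
    return reducer.fit_transform(distance_matrix)    # (|S|, d_E) sentence-embeddings",
    "cite_spans": [],
    "ref_spans": [],
    "eq_spans": [],
    "section": "Generating neighbourhood-preserving embeddings via non-linear manifold-learning",
    "sec_num": "2.2"
},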
{
"text": "The final step of the process is to optimize the low dimensional representation to have as close a fuzzy topological representation as possible to the original space. UMAP proceeds to do so by minimizing the cross-entropy between the two representations:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating neighbourhood-preserving embeddings via non-linear manifold-learning",
"sec_num": "2.2"
},
{
"text": "C = s =s v ss log v ss w ss + (1 \u2212 v ss ) log 1 \u2212 v ss 1 \u2212 w ss",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating neighbourhood-preserving embeddings via non-linear manifold-learning",
"sec_num": "2.2"
},
{
"text": "(10) usually done via stochastic gradient descent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating neighbourhood-preserving embeddings via non-linear manifold-learning",
"sec_num": "2.2"
},
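{
    "text": "For completeness, a direct (dense) sketch of Equations 9 and 10; in practice UMAP minimises this objective stochastically over sampled edges rather than over all pairs.

import numpy as np

def low_dim_similarities(S_E, a=1.929, b=0.791):
    # Equation 9 over a matrix of d_E-dimensional sentence-embeddings S_E.
    dist = np.linalg.norm(S_E[:, None, :] - S_E[None, :, :], axis=-1)
    return 1.0 / (1.0 + a * dist ** (2 * b))

def cross_entropy(V, W, eps=1e-12):
    # Equation 10: fuzzy cross-entropy between the high- and low-dimensional graphs,
    # summed over pairs of distinct sentences.
    mask = ~np.eye(V.shape[0], dtype=bool)
    v = np.clip(V[mask], eps, 1 - eps)
    w = np.clip(W[mask], eps, 1 - eps)
    return np.sum(v * np.log(v / w) + (1 - v) * np.log((1 - v) / (1 - w)))",
    "cite_spans": [],
    "ref_spans": [],
    "eq_spans": [],
    "section": "Generating neighbourhood-preserving embeddings via non-linear manifold-learning",
    "sec_num": "2.2"
},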
{
"text": "A summary of the proposed process used to produce sentence-embeddings is provided in Algorithm 1, and pictorially presented in Figure 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 127,
"end": 135,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Generating neighbourhood-preserving embeddings via non-linear manifold-learning",
"sec_num": "2.2"
},
{
"text": "3 Datasets and resources",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating neighbourhood-preserving embeddings via non-linear manifold-learning",
"sec_num": "2.2"
},
{
"text": "Six public datasets 1 have been used to empirically validate the method proposed in this paper. These datasets are of varying sizes, tasks and complexities, and have been used widely in existing liter- ature, thereby making comparisons and reporting possible. Information about the datasets can be found in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 307,
"end": 314,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3.1"
},
{
"text": "Pre-trained word-embedding corpus: We use the pre-trained set of word-embeddings provided by Mikolov et al (2013) 2 . Software implementations: We use a variety of software packages and custom-written programs perform our experiments, the starting point being the calculation of sentence-wise distances. We calculate the Hausdorff distance using a directed implementation provided in the Scipy python library 3 , whereas the energy distance is calculated using dcor 4 . Lastly, the word mover's distance is calculated using implementation provided by Kusner et al. (2015) 5 . In order to produce the symmetric distance matrix for a dataset, we employ custom parallel implementation which distributes the calculations over all available logical cores in a machine.",
"cite_spans": [
{
"start": 551,
"end": 573,
"text": "Kusner et al. (2015) 5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Resources",
"sec_num": "3.2"
},
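{
    "text": "Putting the software pieces together, the following sketch shows how the per-sentence distance matrices can be assembled with the cited libraries; the serial double loop is a simplification of the custom parallel implementation mentioned above, and the word mover's distance is assumed to come from the released implementation of Kusner et al.

import numpy as np
import dcor
from scipy.spatial.distance import directed_hausdorff

def hausdorff(s, s_prime):
    # Symmetric Hausdorff distance (Equation 4) from the two directed calls.
    return max(directed_hausdorff(s, s_prime)[0], directed_hausdorff(s_prime, s)[0])

def energy(s, s_prime):
    # Energy distance (Equation 2) between the two sets of word vectors.
    return dcor.energy_distance(s, s_prime)

def distance_matrix(sentences, metric):
    # Symmetric |S| x |S| matrix of sentence-to-sentence distances.
    n = len(sentences)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = metric(sentences[i], sentences[j])
    return D",
    "cite_spans": [],
    "ref_spans": [],
    "eq_spans": [],
    "section": "Resources",
    "sec_num": "3.2"
},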
{
"text": "To calculate the sentence-embeddings, the implementation of UMAP provided by McInnes et al (2018) is used 6 . Finally, the classification is done via linear kernel support vector machines from the scikit-learn library (Pedregosa et al., 2011) 7 .",
"cite_spans": [
{
"start": 218,
"end": 242,
"text": "(Pedregosa et al., 2011)",
"ref_id": "BIBREF26"
},
{
"start": 243,
"end": 244,
"text": "7",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Resources",
"sec_num": "3.2"
},
{
"text": "All of the code and datasets have been packaged and released 8 to rerun all of the experiments. Compute infrastructure: All experiments were run on a m4.2xlarge machine on AWS-EC2 9 , which has 8 virtual CPUs and 32GB of RAM.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Resources",
"sec_num": "3.2"
},
{
"text": "In order to check the usefulness of our proposed approach, we benchmark its performance in two different ways. The first, and most obvious, approach is to consider the performance of the k-NN classifier as a baseline. This is motivated by the state-of-the-art k-NN based classification accuracy reported by Kusner et al. for the word mover's distance. Thus, our embeddings need to match or surpass the performance of a k-NN based approach, in order to be considered for practical use. The second approach is to compare the classification accuracies of several state-of-the-art embedding-generation algorithms on our chosen datasets. These are: dct (Almarwani et al., 2019) : embeddings are generated by employing discrete cosine transform on a set of word vectors. eigensent (Kayal and Tsatsaronis, 2019) : sentence representations produced via higher-order dynamic mode decomposition (Le Clainche and Vega, 2017) on a sequence of word vectors. wmovers (Wu et al., 2018) : a competing method which can learn sentence representations from the word mover's distance based on kernel learning, termed in the original work as word mover's embeddings. p-means (R\u00fcckl\u00e9 et al., 2018) : produces sentenceembeddings by concatenating several power-means of word-embeddings corresponding to a sentence. doc2vec (Le and Mikolov, 2014) : embeddings produced by jointly learning the representations of sentences, together with words, as a part of the word2vec procedure. s-bert (Reimers and Gurevych, 2019) : embeddings produced by fine-tuning a pre-trained BERT model using a Siamese architecture to classify two sentences as being similar or different.",
"cite_spans": [
{
"start": 648,
"end": 672,
"text": "(Almarwani et al., 2019)",
"ref_id": "BIBREF1"
},
{
"start": 775,
"end": 804,
"text": "(Kayal and Tsatsaronis, 2019)",
"ref_id": "BIBREF15"
},
{
"start": 953,
"end": 970,
"text": "(Wu et al., 2018)",
"ref_id": "BIBREF38"
},
{
"start": 1154,
"end": 1175,
"text": "(R\u00fcckl\u00e9 et al., 2018)",
"ref_id": "BIBREF33"
},
{
"start": 1299,
"end": 1321,
"text": "(Le and Mikolov, 2014)",
"ref_id": "BIBREF18"
},
{
"start": 1463,
"end": 1491,
"text": "(Reimers and Gurevych, 2019)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Competing methods",
"sec_num": "4.1"
},
{
"text": "Note that the results for wmovers and doc2vec are taken from Table 3 of Wu et al.'s work (2018) , while all the other algorithms are explicitly tested.",
"cite_spans": [
{
"start": 72,
"end": 95,
"text": "Wu et al.'s work (2018)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 61,
"end": 68,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Competing methods",
"sec_num": "4.1"
},
{
"text": "Extensive experiments are performed to provide a holistic overview of our neighbourhood-preserving embedding algorithm, for various sets of input parameters. The steps involved are as follows: Choose a dataset (one of the six mentioned in Section 3.1). For every word in every sentence in the train and test splits of the dataset, retrieve the corresponding word-embedding from the pretrained embedding corpus (as stated in Section 3.2). Calculate symmetric distance matrices corresponding to each of the chosen distance metrics, for all of the sets of word-embeddings from the train and test splits. Apply the UMAP algorithm on the distance matrices to generate embeddings for all sentences in the train and the test splits. Calculate embeddings for competing methods for the methods outlined in Section 4.1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "4.2"
},
{
"text": "Embeddings are generated for various hyperparameter combinations for EMAP as well as all the compared approaches, as listed in Table 2 . Train a classifier on the produced embeddings to perform the dataset-specific task. In this work, we train a simple linear-kernel support vector machine (Cortes and Vapnik, 1995) for every competing method and every dataset tested. The classifier is trained on the train-split of a dataset and evaluated on the test-split. The only parameter tuned for the SVM is the L2 regularization strength, varied between 0.001 and 100. The overall test accuracy has been been reported as a measure of performance.",
"cite_spans": [
{
"start": 290,
"end": 315,
"text": "(Cortes and Vapnik, 1995)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 127,
"end": 134,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Setup",
"sec_num": "4.2"
},
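{
    "text": "A sketch of the classification step described above; the exact grid of C values and the use of cross-validation to pick it are assumptions made here, as the paper only states that C is varied between 0.001 and 100.

from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def evaluate_embeddings(X_train, y_train, X_test, y_test):
    # Linear-kernel SVM; the only tuned parameter is the L2 regularisation strength C.
    grid = GridSearchCV(SVC(kernel='linear'),
                        param_grid={'C': [0.001, 0.01, 0.1, 1, 10, 100]},
                        cv=3)
    grid.fit(X_train, y_train)
    return grid.score(X_test, y_test)    # overall test accuracy",
    "cite_spans": [],
    "ref_spans": [],
    "eq_spans": [],
    "section": "Setup",
    "sec_num": "4.2"
},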
{
"text": "The results of all our experiments are in compiled in Tables 3 and 4 . All statistical tests reported are z-tests, where we compute the right-tailed p-value and call a result significantly different if p < 0.1. Performance of the distance metrics: From Table 3 it can be observed that the word mover's distance consistently performs better than the others experimented with in this paper. WMD calculates the total effort of aligning two sentences, which seems to capture more useful information compared to the hausdorff metric's worst-case effort of alignment. As for the energy distance, it calculates pairwise potentials amongst words within and between sentences, and may suffer if there are Table 2 : Hyperparameter values tested. For EMAP, n neighbours refers to the size of local neighborhood used for manifold approximation, embedding dim is the fixed dimensionality of the generated sentence-embeddings, min dist is the minimum distance apart that points are allowed to be in the low dimensional representation, spread determines the scale at which embedded points will be spread out, n iters is the number of iterations that the UMAP algorithm is allowed to run, and finally, distance is one of the metrics proposed in Section 2.1. For the spectral decomposition based algorithms, dct and eigensent, components represents the number of components to keep in the resulting decomposition, while time lag corresponds to the window-length in the dynamic mode decomposition process. For pmeans, powers represents the different powers which are used to generate the concatenated embeddings. Table 3 : Comparison versus kNN. Results shown here compare the classification accuracies of k-nearest neighbour to our proposed approach for various distance metrics. For every distance, bold indicates better accuracy, while * indicates that the winning accuracy was statistically significant with respect to the compared value (,i.e., EMAP vs kNN for a given distance metric). It can be observed that our method almost always outperforms knearest neighbour-based classification. Table 4 : Comparison versus competing methods. We compare EMAP based on word mover's distance to various state-of-the-art approaches. The best and second-best classification accuracies are highlighted in bold and italics. We perform statistical significance tests of our method (wmd-EMAP) against all other methods, for a given dataset, and denote the outcomes by \u2228 when the compared method is worse and \u2227 when our method is worse, while the absence of a symbol indicates insignificant differences. In terms of absolute accuracy, we observe that our method achieves state-of-the-art results in 2 out of 6 datasets.",
"cite_spans": [],
"ref_spans": [
{
"start": 54,
"end": 68,
"text": "Tables 3 and 4",
"ref_id": null
},
{
"start": 696,
"end": 703,
"text": "Table 2",
"ref_id": null
},
{
"start": 1595,
"end": 1602,
"text": "Table 3",
"ref_id": null
},
{
"start": 2076,
"end": 2083,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5"
},
{
"text": "shared commonly-occurring words in both the sentences. However, given that energy and hausdorff distances are reasonably fast to calculate and perform respectably well, they might be worth using in applications with a large number of long sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5"
},
{
"text": "Comparison versus kNN: EMAP almost always outperforms k-nearest neighbours based classification, for all the tested distance metrics. The performance boost for WMD is between a relative percentage accuracy of 0.5% to 14%. This illustrates the efficiency of the proposed manifold-learning method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5"
},
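{
    "text": "One plausible reading of the significance test mentioned above is a two-proportion z-test on test-set accuracies; the sketch below is an assumption about the exact form of the test, which the paper does not spell out beyond it being a right-tailed z-test with a threshold of p < 0.1.

from math import sqrt
from scipy.stats import norm

def right_tailed_z_test(acc_a, acc_b, n_a, n_b):
    # Two-proportion z-test comparing accuracy acc_a against acc_b
    # on test sets of sizes n_a and n_b; returns the right-tailed p-value.
    p_pool = (acc_a * n_a + acc_b * n_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (acc_a - acc_b) / se
    return 1.0 - norm.cdf(z)",
    "cite_spans": [],
    "ref_spans": [],
    "eq_spans": [],
    "section": "Results and Discussion",
    "sec_num": "5"
},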
{
"text": "Query Sentence Best Match Sentence Cosine Sim I have spent thousands of dollar's On Meyers cookware everthing from KitchenAid Anolon Prestige Faberware & Circulan just to name a few Though Meyers does manufacture very high quality pots & pans and I would recommend them to anyone it's just sad that if you have any problem with them under warranty you have to go throught the chain of command that never gets you anywhere even if you want to speak with upper management about the rudeness of the customer service department Their customer service department employees are always very rude and snotty and they act like they are doing you a favor to even talk to you about their products When I opened the box I noticed corrosion on the lid When I contacted Rival customer service via email they told me I had to purchase a new lid I called and spoke with a customer service representative and they told me that a lid was not covered under warranty When I explained that I just opened it and it was defective they told me to just return the product that there was nothing that they were going to do After being treated this way I will NOT be purchasing any more Rival products if they don't stand behind their product VERY VERY poor customer service 0.997 This movie will bring up your racial prejudices in ways that most movies just elude to It demonstrates how connected we all are as people and how seperated we are by only one thing our viewpoints The acting is superb and you get one cameo appearance after another which is a treat Of course the soundtrack is terrific The ending is intense to witness one situation after another coming to an unfortunate finish I waited years for this movie to be released in the United States As far as I was concerned it wasn't about the acting as much as it was about the feeling the actors wanted to portray in which they profoundly accomplished I would recommend this movie to anyone who can reach that one step deeper into the minds of creativity and passion and appreciate the struggles of rising above and beyond the pain of broken dreams",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5"
},
{
"text": "We see a phrase a lot when we visit how to sites for writers World building By this we mean the setting the characters and everything else where our story will occur For me this often means maps memories and visits since I write about where I live But if you'd like to see exactly what world building means head down to your local library and grab SALEM'S LOT by Stephen King When Stephen King mania first gripped the English speaking world I missed it I saw the film of CARRIE and hated it Years later at a guard desk on a long shift scheduled so suddenly that I hadn't had a chance to visit the library I read what was in the desk instead THINNER If I were Stephen King I'd have put a pen name on that crap as well One of King's fans brought me around She recommended THE SHINING Of course I thought of that Kubrick/Nicholson travesty No no she said read the book It's much different Yes it is It's fantastic for its perceptiveness Next up PET SEMATARY which scared the crap out of me And that my friends is not easy ON WRITING I've gushed about that enough times The films STAND BY ME and THE APT PUPIL So in the end I appreciate King and forgive him for CARRIE and I think he's forgiven himself in the possibility that Steve Berry could ever transcend his not so great debut The Amber Room Romanov Prophecy started in the right direction Third Secret was OK but I think he hit his *peak* right there 0.955 Table 5 : Examples of best-matching sentences. From the amazon reviews dataset using wmd-EMAP.",
"cite_spans": [],
"ref_spans": [
{
"start": 1410,
"end": 1417,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "0.998",
"sec_num": null
},
{
"text": "Comparison versus state-of-the-art methods: Consulting Table 4 , it seems that wmovers, pmeans and s-bert form the strongest baselines as compared to our method, wmd-EMAP (EMAP with word mover's distance). Considering the statistical significance of the differences in performance between wmd-EMAP and the others, it can be seen that it is almost always equivalent to or better than the other state-of-the-art approaches. In terms of absolute accuracy, it wins in 3 out of 6 evaluations, where it has the highest classification accuracy, and comes out second-best for the others. Compared to it's closest competitor, the word mover's embedding algorithm, the performance of wmd-EMAP is found to be on-par (or slightly better, by 0.8% in the case of the classic dataset) to slightly worse (3% relative p.p., in case of the twitter dataset). Interestingly, both of the distance-based embedding approaches, wmd-EMAP and wmovers, are found to perform better than the siamese-BERT based approach, s-bert.",
"cite_spans": [],
"ref_spans": [
{
"start": 55,
"end": 62,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "0.998",
"sec_num": null
},
{
"text": "Thus, the overall conclusion from our empirical studies is that EMAP performs favourably as compared to various state-of-the-art approaches.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "0.998",
"sec_num": null
},
{
"text": "Examples of similar sentences with EMAP: We provide motivating examples of similar sentences from the amazon dataset, as deemed by our approach, in Table 5 . As can be seen, our method performs quite well in matching complex sentences with varying topics and sentiments to their closest pairs. The first example pair has the theme of a customer who is unhappy about poor customer service in the context of cookware warranty, while the second one is about positive reviews of deeply-moving movies. The third example, about book reviews, is particularly interesting: in the first example, a reviewer is talking about how she disliked the first Stephen King work which she was exposed to, but subsequently liked all the next ones, while in the matched sentence the reviewer talks about a similar sentiment change towards the works of another author, Steve Berry. Thus in the last example, the similarity between sentences is the change of sentiment, from negative to positive, towards the works of books of particular authors.",
"cite_spans": [],
"ref_spans": [
{
"start": 148,
"end": 155,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "0.998",
"sec_num": null
},
{
"text": "In this work, we propose a novel mechanism to construct unsupervised sentence-embeddings by preserving properties of local neighbourhoods in the original space, as delineated by set-distance metrics. This method, which we term, EMAP or Embeddings by Manifold Approximation and Projection leverages a method from topological data analysis can be used as a framework with any distance metric that can discriminate between sets, three of which we test in this paper. Using both quantitative empirical studies, where we compare with state-of-the-art approaches, and qualitative probing, where we retrieve similar sentences based on our generated embeddings, we illustrate the efficiency of our proposed approach to be on-par or exceeding in-use methods. This work demonstrates the successful application of topological data analysis in sentence embedding creation, and we leave the design of better distance metrics and manifold approximation algorithms, particularly targeted towards NLP, for future research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "https://drive.google.com/file/d/ 0B7XkCwpI5KDYNlNUTTlSS21pQmM/edit 3 https://docs.scipy.org/doc/scipy/ reference/generated/scipy.spatial. distance.directed_hausdorff.html 4 https://dcor.readthedocs.io/en/ latest/functions/dcor.energy_distance. html#dcor.energy_distance 5 https://github.com/mkusner/wmd 6 https://umap-learn.readthedocs.io/en/ latest/api.html 7 https://scikit-learn.org/stable/ modules/generated/sklearn.svm.SVC.html 8 https://github.com/DeepK/ distance-embed 9 https://aws.amazon.com/ec2/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Evaluation of unsupervised compositional representations",
"authors": [
{
"first": "Hanan",
"middle": [],
"last": "Aldarmaki",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2666--2677",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hanan Aldarmaki and Mona Diab. 2018. Evaluation of unsupervised compositional representations. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2666-2677, Santa Fe, New Mexico, USA.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Efficient sentence embedding using discrete cosine transform",
"authors": [
{
"first": "Nada",
"middle": [],
"last": "Almarwani",
"suffix": ""
},
{
"first": "Hanan",
"middle": [],
"last": "Aldarmaki",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "3672--3678",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1380"
]
},
"num": null,
"urls": [],
"raw_text": "Nada Almarwani, Hanan Aldarmaki, and Mona Diab. 2019. Efficient sentence embedding using discrete cosine transform. In Proceedings of the 2019 Con- ference on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 3672-3678, Hong Kong, China.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A simple but tough-to-beat baseline for sentence embeddings",
"authors": [
{
"first": "Sanjeev",
"middle": [],
"last": "Arora",
"suffix": ""
},
{
"first": "Yingyu",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Tengyu",
"middle": [],
"last": "Ma",
"suffix": ""
}
],
"year": 2017,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sanjeev Arora, Yingyu Liang, and Tengyu Ma. 2017. A simple but tough-to-beat baseline for sentence em- beddings. In International Conference on Learning Representations.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A linear time algorithm for the hausdorff distance between convex polygons",
"authors": [
{
"first": "Mikhail",
"middle": [
"J"
],
"last": "Atallah",
"suffix": ""
}
],
"year": 1983,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikhail J. Atallah. 1983. A linear time algorithm for the hausdorff distance between convex polygons. Technical report, Department of Computer Science, Purdue University.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A neural probabilistic language model",
"authors": [
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "R\u00e9jean",
"middle": [],
"last": "Ducharme",
"suffix": ""
},
{
"first": "Pascal",
"middle": [],
"last": "Vincent",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Janvin",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of Machine Learning Research",
"volume": "3",
"issue": "",
"pages": "1137--1155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoshua Bengio, R\u00e9jean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A neural probabilistic lan- guage model. Journal of Machine Learning Re- search, 3:1137-1155.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "On the imbedding of systems of compacta in simplicial complexes",
"authors": [
{
"first": "Karol",
"middle": [],
"last": "Borsuk",
"suffix": ""
}
],
"year": 1948,
"venue": "Fundamenta Mathematicae",
"volume": "35",
"issue": "",
"pages": "217--234",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karol Borsuk. 1948. On the imbedding of systems of compacta in simplicial complexes. In Fundamenta Mathematicae, volume 35, pages 217-234.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Topology and data. Bulletin of the",
"authors": [
{
"first": "Gunnar",
"middle": [],
"last": "Carlsson",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "46",
"issue": "",
"pages": "255--308",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gunnar Carlsson. 2009. Topology and data. Bulletin of the American Mathematical Society, 46(2):255-308.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Efficient vector representation for documents through corruption",
"authors": [
{
"first": "Minmin",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2017,
"venue": "5th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minmin Chen. 2017. Efficient vector representation for documents through corruption. 5th International Conference on Learning Representations.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Supervised learning of universal sentence representations from natural language inference data",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Lo\u00efc",
"middle": [],
"last": "Barrault",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "670--680",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1070"
]
},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Douwe Kiela, Holger Schwenk, Lo\u00efc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In Proceedings of the 2017 Conference on Empirical Methods in Nat- ural Language Processing, pages 670-680, Copen- hagen, Denmark.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Supportvector networks",
"authors": [
{
"first": "Corinna",
"middle": [],
"last": "Cortes",
"suffix": ""
},
{
"first": "Vladimir",
"middle": [],
"last": "Vapnik",
"suffix": ""
}
],
"year": 1995,
"venue": "Machine Learning",
"volume": "20",
"issue": "",
"pages": "273--297",
"other_ids": {
"DOI": [
"10.1023/A:1022627411411"
]
},
"num": null,
"urls": [],
"raw_text": "Corinna Cortes and Vladimir Vapnik. 1995. Support- vector networks. Machine Learning, 20(3):273- 297.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Nearest neighbor pattern classification",
"authors": [
{
"first": "T",
"middle": [],
"last": "Cover",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Hart",
"suffix": ""
}
],
"year": 1967,
"venue": "IEEE Transactions on Information Theory",
"volume": "13",
"issue": "1",
"pages": "21--27",
"other_ids": {
"DOI": [
"10.1109/TIT.1967.1053964"
]
},
"num": null,
"urls": [],
"raw_text": "T. Cover and P. Hart. 1967. Nearest neighbor pattern classification. IEEE Transactions on Information Theory, 13(1):21-27.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Efficient k-nearest neighbor graph construction for generic similarity measures",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Charikar",
"middle": [],
"last": "Moses",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 20th International Conference on World Wide Web",
"volume": "",
"issue": "",
"pages": "577--586",
"other_ids": {
"DOI": [
"10.1145/1963405.1963487"
]
},
"num": null,
"urls": [],
"raw_text": "Wei Dong, Charikar Moses, and Kai Li. 2011. Efficient k-nearest neighbor graph construction for generic similarity measures. In Proceedings of the 20th In- ternational Conference on World Wide Web, pages 577-586.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A modified hausdorff distance for object matching",
"authors": [
{
"first": "M",
"middle": [],
"last": "Dubuisson",
"suffix": ""
},
{
"first": "A",
"middle": [
"K"
],
"last": "Jain",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of 12th International Conference on Pattern Recognition",
"volume": "1",
"issue": "",
"pages": "566--568",
"other_ids": {
"DOI": [
"10.1109/ICPR.1994.576361"
]
},
"num": null,
"urls": [],
"raw_text": "M. . Dubuisson and A. K. Jain. 1994. A modified haus- dorff distance for object matching. In Proceedings of 12th International Conference on Pattern Recog- nition, volume 1, pages 566-568 vol.1.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Stochastic neighbor embedding",
"authors": [
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
},
{
"first": "Sam",
"middle": [
"T"
],
"last": "Roweis",
"suffix": ""
}
],
"year": 2003,
"venue": "Advances in Neural Information Processing Systems",
"volume": "15",
"issue": "",
"pages": "857--864",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Geoffrey E Hinton and Sam T. Roweis. 2003. Stochas- tic neighbor embedding. In S. Becker, S. Thrun, and K. Obermayer, editors, Advances in Neural Informa- tion Processing Systems 15, pages 857-864.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "EigenSent: Spectral sentence embeddings using higher-order dynamic mode decomposition",
"authors": [
{
"first": "Subhradeep",
"middle": [],
"last": "Kayal",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Tsatsaronis",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4536--4546",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1445"
]
},
"num": null,
"urls": [],
"raw_text": "Subhradeep Kayal and George Tsatsaronis. 2019. EigenSent: Spectral sentence embeddings using higher-order dynamic mode decomposition. In Pro- ceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 4536- 4546, Florence, Italy.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Skip-thought vectors",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Kiros",
"suffix": ""
},
{
"first": "Yukun",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Russ",
"middle": [
"R"
],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zemel",
"suffix": ""
},
{
"first": "Raquel",
"middle": [],
"last": "Urtasun",
"suffix": ""
},
{
"first": "Antonio",
"middle": [],
"last": "Torralba",
"suffix": ""
},
{
"first": "Sanja",
"middle": [],
"last": "Fidler",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in Neural Information Processing Systems",
"volume": "28",
"issue": "",
"pages": "3294--3302",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan Kiros, Yukun Zhu, Russ R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-thought vectors. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Informa- tion Processing Systems 28, pages 3294-3302.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "From word embeddings to document distances",
"authors": [
{
"first": "Matt",
"middle": [
"J"
],
"last": "Kusner",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [
"I"
],
"last": "Kolkin",
"suffix": ""
},
{
"first": "Kilian",
"middle": [
"Q"
],
"last": "Weinberger",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 32nd International Conference on International Conference on Machine Learning",
"volume": "37",
"issue": "",
"pages": "957--966",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matt J. Kusner, Yu Sun, Nicholas I. Kolkin, and Kil- ian Q. Weinberger. 2015. From word embeddings to document distances. In Proceedings of the 32nd In- ternational Conference on International Conference on Machine Learning -Volume 37, ICML'15, pages 957-966.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Distributed representations of sentences and documents",
"authors": [
{
"first": "Quoc",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 31st International Conference on International Conference on Machine Learning",
"volume": "32",
"issue": "",
"pages": "1188--1196",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Quoc Le and Tomas Mikolov. 2014. Distributed repre- sentations of sentences and documents. In Proceed- ings of the 31st International Conference on Inter- national Conference on Machine Learning -Volume 32, ICML'14, pages II-1188-II-1196.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Higher order dynamic mode decomposition",
"authors": [
{
"first": "Soledad",
"middle": [],
"last": "Le Clainche",
"suffix": ""
},
{
"first": "Jos\u00e9",
"middle": [
"M"
],
"last": "Vega",
"suffix": ""
}
],
"year": 2017,
"venue": "SIAM Journal on Applied Dynamical Systems",
"volume": "16",
"issue": "2",
"pages": "882--925",
"other_ids": {
"DOI": [
"10.1137/15M1054924"
]
},
"num": null,
"urls": [],
"raw_text": "Soledad Le Clainche and Jos\u00e9 M. Vega. 2017. Higher order dynamic mode decomposition. SIAM Journal on Applied Dynamical Systems, 16(2):882-925.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Nonlinear Dimensionality Reduction",
"authors": [
{
"first": "John",
"middle": [
"A"
],
"last": "Lee",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Verleysen",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1007/978-0-387-39351-3"
]
},
"num": null,
"urls": [],
"raw_text": "John A. Lee and Michel Verleysen. 2007. Nonlinear Dimensionality Reduction, 1st edition.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Visualizing data using t-SNE",
"authors": [
{
"first": "Laurens",
"middle": [],
"last": "Van Der Maaten",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
}
],
"year": 2008,
"venue": "Journal of Machine Learning Research",
"volume": "9",
"issue": "",
"pages": "2579--2605",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research, 9:2579-2605.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction",
"authors": [
{
"first": "L",
"middle": [],
"last": "Mcinnes",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Healy",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Melville",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. McInnes, J. Healy, and J. Melville. 2018. UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction. ArXiv e-prints.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Umap: Uniform manifold approximation and projection",
"authors": [
{
"first": "Leland",
"middle": [],
"last": "Mcinnes",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Healy",
"suffix": ""
},
{
"first": "Nathaniel",
"middle": [],
"last": "Saul",
"suffix": ""
},
{
"first": "Lukas",
"middle": [],
"last": "Grossberger",
"suffix": ""
}
],
"year": 2018,
"venue": "The Journal of Open Source Software",
"volume": "3",
"issue": "29",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leland McInnes, John Healy, Nathaniel Saul, and Lukas Grossberger. 2018. Umap: Uniform manifold approximation and projection. The Journal of Open Source Software, 3(29):861.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 26th International Conference on Neural Information Processing Systems",
"volume": "2",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Cor- rado, and Jeffrey Dean. 2013. Distributed represen- tations of words and phrases and their composition- ality. In Proceedings of the 26th International Con- ference on Neural Information Processing Systems - Volume 2, NIPS'13, pages 3111-3119.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Deep sentence embedding using long short-term memory networks: Analysis and application to information retrieval",
"authors": [
{
"first": "H",
"middle": [],
"last": "Palangi",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Ward",
"suffix": ""
}
],
"year": 2016,
"venue": "IEEE/ACM Transactions on Audio, Speech, and Language Processing",
"volume": "24",
"issue": "4",
"pages": "694--707",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Palangi, L. Deng, Y. Shen, J. Gao, X. He, J. Chen, X. Song, and R. Ward. 2016. Deep sentence em- bedding using long short-term memory networks: Analysis and application to information retrieval. IEEE/ACM Transactions on Audio, Speech, and Lan- guage Processing, 24(4):694-707.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Scikit-learn: Machine learning in python",
"authors": [
{
"first": "Fabian",
"middle": [],
"last": "Pedregosa",
"suffix": ""
},
{
"first": "Ga\u00ebl",
"middle": [],
"last": "Varoquaux",
"suffix": ""
},
{
"first": "Alexandre",
"middle": [],
"last": "Gramfort",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Michel",
"suffix": ""
},
{
"first": "Bertrand",
"middle": [],
"last": "Thirion",
"suffix": ""
},
{
"first": "Olivier",
"middle": [],
"last": "Grisel",
"suffix": ""
},
{
"first": "Mathieu",
"middle": [],
"last": "Blondel",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Prettenhofer",
"suffix": ""
},
{
"first": "Ron",
"middle": [],
"last": "Weiss",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Dubourg",
"suffix": ""
},
{
"first": "Jake",
"middle": [],
"last": "Vanderplas",
"suffix": ""
},
{
"first": "Alexandre",
"middle": [],
"last": "Passos",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Cournapeau",
"suffix": ""
},
{
"first": "Matthieu",
"middle": [],
"last": "Brucher",
"suffix": ""
},
{
"first": "Matthieu",
"middle": [],
"last": "Perrot",
"suffix": ""
},
{
"first": "Duchesnay",
"middle": [],
"last": "And\u00e9douard",
"suffix": ""
}
],
"year": 2011,
"venue": "J. Mach. Learn. Res",
"volume": "12",
"issue": "",
"pages": "2825--2830",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fabian Pedregosa, Ga\u00ebl Varoquaux, Alexandre Gram- fort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake Vanderplas, Alexan- dre Passos, David Cournapeau, Matthieu Brucher, Matthieu Perrot, and\u00c9douard Duchesnay. 2011. Scikit-learn: Machine learning in python. J. Mach. Learn. Res., 12:2825-2830.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {
"DOI": [
"10.3115/v1/D14-1162"
]
},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word rep- resentation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Process- ing (EMNLP), pages 1532-1543.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "2227--2237",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1202"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word repre- sentations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227- 2237.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Random features for large-scale kernel machines",
"authors": [
{
"first": "Ali",
"middle": [],
"last": "Rahimi",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Recht",
"suffix": ""
}
],
"year": 2008,
"venue": "Advances in Neural Information Processing Systems",
"volume": "20",
"issue": "",
"pages": "1177--1184",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ali Rahimi and Benjamin Recht. 2008. Random fea- tures for large-scale kernel machines. In J. C. Platt, D. Koller, Y. Singer, and S. T. Roweis, editors, Ad- vances in Neural Information Processing Systems 20, pages 1177-1184.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Sentence-BERT: Sentence embeddings using Siamese BERTnetworks",
"authors": [
{
"first": "Nils",
"middle": [],
"last": "Reimers",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "3982--3992",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1410"
]
},
"num": null,
"urls": [],
"raw_text": "Nils Reimers and Iryna Gurevych. 2019. Sentence- BERT: Sentence embeddings using Siamese BERT- networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 3982-3992, Hong Kong, China.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Clustering methods",
"authors": [
{
"first": "Lior",
"middle": [],
"last": "Rokach",
"suffix": ""
},
{
"first": "Oded",
"middle": [],
"last": "Maimon",
"suffix": ""
}
],
"year": 2005,
"venue": "The Data Mining and Knowledge Discovery Handbook",
"volume": "",
"issue": "",
"pages": "321--352",
"other_ids": {
"DOI": [
"10.1007/0-387-25465-X_15"
]
},
"num": null,
"urls": [],
"raw_text": "Lior Rokach and Oded Maimon. 2005. Clustering methods. In The Data Mining and Knowledge Dis- covery Handbook, pages 321-352.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Nonlinear dimensionality reduction by locally linear embedding",
"authors": [
{
"first": "Sam",
"middle": [
"T"
],
"last": "Roweis",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [
"K"
],
"last": "Saul",
"suffix": ""
}
],
"year": 2000,
"venue": "Science",
"volume": "290",
"issue": "",
"pages": "2323--2326",
"other_ids": {
"DOI": [
"10.1126/science.290.5500.2323"
]
},
"num": null,
"urls": [],
"raw_text": "Sam T. Roweis and Lawrence K. Saul. 2000. Nonlin- ear dimensionality reduction by locally linear em- bedding. Science, 290:2323-2326.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Concatenated p-mean word embeddings as universal cross-lingual sentence representations",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "R\u00fcckl\u00e9",
"suffix": ""
},
{
"first": "Steffen",
"middle": [],
"last": "Eger",
"suffix": ""
},
{
"first": "Maxime",
"middle": [],
"last": "Peyrard",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas R\u00fcckl\u00e9, Steffen Eger, Maxime Peyrard, and Iryna Gurevych. 2018. Concatenated p-mean word embeddings as universal cross-lingual sentence rep- resentations. CoRR, abs/1803.01400.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Context mover's distance & barycenters: Optimal transport of contexts for building representations",
"authors": [
{
"first": "Sidak",
"middle": [
"Pal"
],
"last": "Singh",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Hug",
"suffix": ""
},
{
"first": "Aymeric",
"middle": [],
"last": "Dieuleveut",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Jaggi",
"suffix": ""
}
],
"year": 2019,
"venue": "Deep Generative Models for Highly Structured Data, ICLR 2019 Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sidak Pal Singh, Andreas Hug, Aymeric Dieuleveut, and Martin Jaggi. 2019. Context mover's distance & barycenters: Optimal transport of contexts for build- ing representations. In Deep Generative Models for Highly Structured Data, ICLR 2019 Workshop, New Orleans, Louisiana, United States, May 6, 2019.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Energy statistics: A class of statistics based on distances",
"authors": [
{
"first": "G\u00e1bor",
"middle": [
"J"
],
"last": "Sz\u00e9kely",
"suffix": ""
},
{
"first": "Maria",
"middle": [
"L"
],
"last": "Rizzo",
"suffix": ""
}
],
"year": 2013,
"venue": "Journal of Statistical Planning and Inference",
"volume": "143",
"issue": "8",
"pages": "1249--1272",
"other_ids": {
"DOI": [
"10.1016/j.jspi.2013.03.018"
]
},
"num": null,
"urls": [],
"raw_text": "G\u00e1bor J. Sz\u00e9kely and Maria L. Rizzo. 2013. En- ergy statistics: A class of statistics based on dis- tances. Journal of Statistical Planning and Infer- ence, 143(8):1249 -1272.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Semafor: Semantic document indexing using semantic forests",
"authors": [
{
"first": "George",
"middle": [],
"last": "Tsatsaronis",
"suffix": ""
},
{
"first": "Iraklis",
"middle": [],
"last": "Varlamis",
"suffix": ""
},
{
"first": "Kjetil",
"middle": [],
"last": "N\u00f8rv\u00e5g",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 21st ACM International Conference on Information and Knowledge Management, CIKM '12",
"volume": "",
"issue": "",
"pages": "1692--1696",
"other_ids": {
"DOI": [
"10.1145/2396761.2398499"
]
},
"num": null,
"urls": [],
"raw_text": "George Tsatsaronis, Iraklis Varlamis, and Kjetil N\u00f8rv\u00e5g. 2012. Semafor: Semantic document in- dexing using semantic forests. In Proceedings of the 21st ACM International Conference on Informa- tion and Knowledge Management, CIKM '12, page 1692-1696, New York, NY, USA.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Wasserstein-fisher-rao document distance",
"authors": [
{
"first": "Zihao",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Datong",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Yong",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Chenglong",
"middle": [],
"last": "Bao",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zihao Wang, Datong Zhou, Yong Zhang, Hao Wu, and Chenglong Bao. 2019. Wasserstein-fisher-rao docu- ment distance. CoRR, abs/1904.10294.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Word mover's embedding: From Word2Vec to document embedding",
"authors": [
{
"first": "Lingfei",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "En-Hsu Yen",
"suffix": ""
},
{
"first": "Kun",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Fangli",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Avinash",
"middle": [],
"last": "Balakrishnan",
"suffix": ""
},
{
"first": "Pin-Yu",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Pradeep",
"middle": [],
"last": "Ravikumar",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"J"
],
"last": "Witbrock",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "4524--4534",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1482"
]
},
"num": null,
"urls": [],
"raw_text": "Lingfei Wu, Ian En-Hsu Yen, Kun Xu, Fangli Xu, Avinash Balakrishnan, Pin-Yu Chen, Pradeep Ravikumar, and Michael J. Witbrock. 2018. Word mover's embedding: From Word2Vec to document embedding. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Process- ing, pages 4524-4534, Brussels, Belgium.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Doc-Chat: An information retrieval approach for chatbot engines using unstructured documents",
"authors": [
{
"first": "Zhao",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Duan",
"suffix": ""
},
{
"first": "Junwei",
"middle": [],
"last": "Bao",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Zhoujun",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jianshe",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "516--525",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1049"
]
},
"num": null,
"urls": [],
"raw_text": "Zhao Yan, Nan Duan, Junwei Bao, Peng Chen, Ming Zhou, Zhoujun Li, and Jianshe Zhou. 2016. Doc- Chat: An information retrieval approach for chatbot engines using unstructured documents. In Proceed- ings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), pages 516-525, Berlin, Germany.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"text": "Figure showinga simple example of the embedding algorithm.",
"num": null
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"text": "https://drive.google.com/open?id= 1sGgAo2SBoYKhQQK_kilUp8KSToCI55jl",
"num": null
},
"TABREF1": {
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null,
"text": "Dataset information: Metadata describing the datasets used in our experiments."
}
}
}
}