{
"paper_id": "S13-1022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:42:35.283789Z"
},
"title": "SRIUBC-Core: Multiword Soft Similarity Models for Textual Similarity",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Yeh",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "SRI International",
"location": {
"settlement": "Menlo Park",
"region": "CA",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Basque Country Donostia",
"location": {
"settlement": "Basque Country"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this year's Semantic Textual Similarity evaluation, we explore the contribution of models that provide soft similarity scores across spans of multiple words, over the previous year's system. To this end, we explored the use of neural probabilistic language models and a TF-IDF weighted variant of Explicit Semantic Analysis. The neural language model systems used vector representations of individual words, where these vectors were derived by training them against the context of words encountered, and thus reflect the distributional characteristics of their usage. To generate a similarity score between spans, we experimented with using tiled vectors and Restricted Boltzmann Machines to identify similar encodings. We find that these soft similarity methods generally outperformed our previous year's systems, albeit they did not perform as well in the overall rankings. A simple analysis of the soft similarity resources over two word phrases is provided, and future areas of improvement are described.",
"pdf_parse": {
"paper_id": "S13-1022",
"_pdf_hash": "",
"abstract": [
{
"text": "In this year's Semantic Textual Similarity evaluation, we explore the contribution of models that provide soft similarity scores across spans of multiple words, over the previous year's system. To this end, we explored the use of neural probabilistic language models and a TF-IDF weighted variant of Explicit Semantic Analysis. The neural language model systems used vector representations of individual words, where these vectors were derived by training them against the context of words encountered, and thus reflect the distributional characteristics of their usage. To generate a similarity score between spans, we experimented with using tiled vectors and Restricted Boltzmann Machines to identify similar encodings. We find that these soft similarity methods generally outperformed our previous year's systems, albeit they did not perform as well in the overall rankings. A simple analysis of the soft similarity resources over two word phrases is provided, and future areas of improvement are described.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "For this year's Semantic Textual Similarity (STS) evaluation, we built upon the best performing system we deployed last year with several methods for exploring the soft similarity between windows of words, instead of relying just on single token-totoken similarities. From the previous year's evaluation, we were impressed by the performance of features derived from bigrams and skip bigrams. Bigrams capture the relationship between two concurrent words, while skip bigrams can capture longer distance relationships. We found that characterizing the overlap in skip bigrams between the sentences in a STS problem pair proved to be a major contributor to last year's system's performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Skip bigrams were matched on two criteria, lexical matches, and via part of speech (POS). Lexical matching is brittle, and even if the match were made on lemmas, we lose the ability to match against synonyms. We could rely on the token-to-token similarity methods to account for these non-lexical similarities, but these do not account for sequence nor dependencies in the sentencees. Using POS based matching allows for a level of generalization, but at a much broader level. What we would like to have is a model that can capture these long distance relationships at a level that is less broad than POS matching, but allows for a soft similarity scoring between words. In addition, the ability to encompass a larger window without having to manually insert skips would be desirable as well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To this end we decided to explore the use of neural probabilistic language models (NLPM) for capturing this kind of behavior (Bengio et al., 2003) . NLPMs represent individual words as real valued vectors, often at a much lower dimensionality than the original vocabulary. By training these representations to maximize a criterion such as loglikelihood of target word given the other words in its neighborhood, the word vectors themselves can capture commonalities between words that have been used in similar contexts. In previous studies, these vectors themselves can capture distributionally derived similarities, by directly comparing the word vectors themselves using simple measures such as Euclidean distance (Collobert and Weston, 2008) .",
"cite_spans": [
{
"start": 125,
"end": 146,
"text": "(Bengio et al., 2003)",
"ref_id": "BIBREF1"
},
{
"start": 716,
"end": 744,
"text": "(Collobert and Weston, 2008)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
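The paragraph above describes comparing NLPM word vectors directly with simple distance measures. As an illustration only (not the authors' code), here is a minimal sketch of such a comparison; the 50-dimensional vectors are hypothetical stand-ins for pre-trained embeddings:

```python
# Illustrative sketch: comparing NLPM word vectors directly with cosine
# similarity and Euclidean distance. The vectors below are random stand-ins
# for real pre-trained 50-dimensional embeddings.
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two word vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def euclidean(u: np.ndarray, v: np.ndarray) -> float:
    """Euclidean distance between two word vectors (smaller = more similar)."""
    return float(np.linalg.norm(u - v))

# Hypothetical 50-dimensional vectors for two distributionally similar words.
rng = np.random.default_rng(0)
vec_heart, vec_cardiac = rng.normal(size=50), rng.normal(size=50)
print(cosine(vec_heart, vec_cardiac), euclidean(vec_heart, vec_cardiac))
```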
{
"text": "In addition, we fielded a variant of Explicit Semantic Analysis (Gabrilovich and Markovitch, 2009) that used TF-IDF weightings, instead of using the raw concept vectors themselves. From previous experiments, we found that using TF-IDF weightings on the words in a pair gave a boost in performance over sentence length comparisons and above, so this simple modification was incorporated into our system.",
"cite_spans": [
{
"start": 64,
"end": 98,
"text": "(Gabrilovich and Markovitch, 2009)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In order to identify the contribution of these soft similarity methods against last year's system, we fielded three systems:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. System 1, the system from the previous year, incorporating semantic similarity resources, precision focused and Bilingual Evaluation Understudy (BLEU) overlaps (Papineni et al., 2002) , and several types of skip-bigrams.",
"cite_spans": [
{
"start": 163,
"end": 186,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2. System 2, features just the new NLPM scores and TFIDF-ESA.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For the rest of this system description, we briefly describe the previous year's system (System 1), the TFIDF weighted Explicit Semantic Analysis, and the NLPM systems. We then describe the experiment setup, and follow up with results and analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System 3, combines System 1 and System 2.",
"sec_num": "3."
},
{
"text": "The system we used in SemEval 2012 consisted of the following components:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System 1",
"sec_num": "2"
},
{
"text": "1. Resource based word-to-word similarities, combined using a Semantic Matrix (Fernando and Stevenson, 2008 The Semantic Matrix assesses similarity between a pair s 1 and s 2 by summing over all of the word to word similarities between the pair, subject to normalization, as given by Formula 1.",
"cite_spans": [
{
"start": 78,
"end": 107,
"text": "(Fernando and Stevenson, 2008",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "System 1",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "sim(s 1 , s 2 ) = v T 1 Wv 2 v 1 v 2",
"eq_num": "(1)"
}
],
"section": "System 1",
"sec_num": "2"
},
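A minimal sketch of Formula 1 under the definitions above, assuming v1 and v2 are word-occurrence vectors over a shared vocabulary and W is a symmetric word-to-word similarity matrix; the toy vocabulary and weights are illustrative only:

```python
# Sketch of Formula 1: sim(s1, s2) = (v1^T W v2) / (||v1|| ||v2||), where
# v1, v2 are word-occurrence vectors and W encodes resource-derived
# word-to-word similarities. Names and values are illustrative.
import numpy as np

def semantic_matrix_sim(v1: np.ndarray, v2: np.ndarray, W: np.ndarray) -> float:
    """Normalized bilinear similarity between two sentence vectors."""
    return float(v1 @ W @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))

# Toy 4-word vocabulary; W has ones on the diagonal (self-similarity)
# plus a soft similarity of 0.8 between words 1 and 2.
W = np.eye(4)
W[1, 2] = W[2, 1] = 0.8
v1 = np.array([1.0, 1.0, 0.0, 0.0])  # sentence 1 uses words 0 and 1
v2 = np.array([1.0, 0.0, 1.0, 0.0])  # sentence 2 uses words 0 and 2
print(semantic_matrix_sim(v1, v2, W))  # 0.9, above the strict overlap of 0.5
```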
{
"text": "The matrix W is a symmetric matrix that encodes the word to word similarities, derived from the underlying resources this is drawn from. From the previous year's assessment, we used similarities derived from Personalized PageRank (Agirre et al., 2010) over WordNet (Fellbaum, 1998) , the Explicit Semantic Analysis (Gabrilovich and Markovitch, 2009) concept vector signatures for each lemma, and the Dekang Lin Proximity-based Thesaurus 1 .",
"cite_spans": [
{
"start": 230,
"end": 251,
"text": "(Agirre et al., 2010)",
"ref_id": "BIBREF0"
},
{
"start": 265,
"end": 281,
"text": "(Fellbaum, 1998)",
"ref_id": "BIBREF5"
},
{
"start": 315,
"end": 349,
"text": "(Gabrilovich and Markovitch, 2009)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "System 1",
"sec_num": "2"
},
{
"text": "The cosine-based lexical overlap measure simply measures the cosine similarity, using strict lexical overlap, between the sentence pairs. The BLEU, precision focused POS, and skip-bigrams are directional measures, which measure how well a target sentence matches a source sentence. To score pair of sentences, we simply averaged the score where one sentence is the source, the other the target, and then vice versa. These directional measures were originally used as a precision focused means to assess the quality of machine translations output against reference translations. Following (Finch et al., 2005) , these measures have also been shown to be good for assessing semantic similarity between pairs of sentences.",
"cite_spans": [
{
"start": 588,
"end": 608,
"text": "(Finch et al., 2005)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "System 1",
"sec_num": "2"
},
{
"text": "For BLEU, we measured how well ngrams of order one through four were matched by the target sentence, matching solely on lexical matches, or POS matches. Skip bigrams performed similarly, except the bigrams were not contiguous. The precision focused POS features assess how well each POS tag found in the source sentence has been matched in the target sentence, where the matches are first done via a lemma match.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System 1",
"sec_num": "2"
},
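To make the directional scoring concrete, a simplified sketch of a clipped ngram precision from source to target, symmetrized by averaging both directions as described above; this is not the exact BLEU or skip-bigram implementation used:

```python
# Sketch of a directional ngram-overlap precision and its symmetric average.
from collections import Counter

def ngrams(tokens: list[str], n: int) -> Counter:
    """Multiset of order-n ngrams in a token sequence."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def directional_precision(source: list[str], target: list[str], n: int) -> float:
    """Fraction of source ngrams matched (clipped) in the target."""
    src, tgt = ngrams(source, n), ngrams(target, n)
    if not src:
        return 0.0
    matched = sum(min(count, tgt[g]) for g, count in src.items())
    return matched / sum(src.values())

def symmetric_score(s1: list[str], s2: list[str], n: int = 2) -> float:
    """Average of the two directional scores, as described in the text."""
    return 0.5 * (directional_precision(s1, s2, n) + directional_precision(s2, s1, n))

print(symmetric_score("the cat sat here".split(), "a cat sat there".split()))
```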
{
"text": "To combine the scores from these features, we used the LIBSVM Support Vector Regression (SVR) package (Chang and Lin, 2011) , trained on the training pair gold scores. Per the previous year, we used a radial basis kernel with a degree of three.",
"cite_spans": [
{
"start": 102,
"end": 123,
"text": "(Chang and Lin, 2011)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "System 1",
"sec_num": "2"
},
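As a hedged illustration of this combination step, the sketch below uses scikit-learn's SVR (a wrapper around LIBSVM) rather than LIBSVM directly; the feature values and gold scores are placeholders, and in scikit-learn the degree parameter only affects polynomial kernels:

```python
# Sketch of score combination via Support Vector Regression: each sentence
# pair is a feature vector of component scores, regressed onto gold scores.
import numpy as np
from sklearn.svm import SVR

# Each row: feature scores for one pair (e.g. BLEU, skip-bigram, semantic
# matrix similarities); y: gold similarity on the 0-5 STS scale. Placeholders.
X_train = np.array([[0.9, 0.8, 0.85], [0.2, 0.1, 0.3], [0.5, 0.4, 0.6]])
y_train = np.array([4.8, 1.0, 2.9])

model = SVR(kernel="rbf")
model.fit(X_train, y_train)
print(model.predict(np.array([[0.7, 0.6, 0.65]])))
```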
{
"text": "For a more in-depth description of System 1, please refer to (Yeh and Agirre, 2012) .",
"cite_spans": [
{
"start": 61,
"end": 83,
"text": "(Yeh and Agirre, 2012)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "System 1",
"sec_num": "2"
},
{
"text": "This year instead of using Explicit Semantic Analysis (ESA) to populate a word-by-word similarity matrix, we used ESA to derive a similarity score between the sentences in a STS pair. For a given sentence, we basically treated it as an IR query against the ESA concept-base: we tokenized the words, extracted the ESA concept vectors, and performed a TFIDF weighted average to arrive at the sentence vector. A cutoff of the top 1000 scoring concepts was further applied, per previous experience, to improve performance. The similarity score for two sentence vectors was computed using cosine similarity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TFIDF-ESA",
"sec_num": "3"
},
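A sketch of this TFIDF-ESA scorer under stated assumptions: esa_vector(word) and idf(word) are hypothetical lookups standing in for a real ESA concept base and IDF table, and token lists are assumed non-empty:

```python
# Sketch of TFIDF-ESA: TF-IDF weighted average of per-word ESA concept
# vectors, truncated to the top 1000 scoring concepts, compared by cosine.
import numpy as np
from collections import Counter

def sentence_vector(tokens, esa_vector, idf, top_k=1000):
    """TF-IDF weighted sum of ESA concept vectors (scale is irrelevant
    under cosine, so no division by total weight), top_k cutoff applied."""
    tf = Counter(tokens)  # assumes tokens is non-empty
    vec = sum(tf[w] * idf(w) * esa_vector(w) for w in tf)
    if top_k < vec.size:
        cutoff = np.partition(vec, -top_k)[-top_k]
        vec = np.where(vec >= cutoff, vec, 0.0)  # keep only top_k concepts
    return vec

def tfidf_esa_sim(tokens1, tokens2, esa_vector, idf):
    """Cosine similarity of the two TFIDF-ESA sentence vectors."""
    v1 = sentence_vector(tokens1, esa_vector, idf)
    v2 = sentence_vector(tokens2, esa_vector, idf)
    denom = np.linalg.norm(v1) * np.linalg.norm(v2)
    return float(v1 @ v2 / denom) if denom else 0.0
```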
{
"text": "Neural probabilistic language models represent words as real valued vectors, where these vectors are trained to jointly capture the distributional statistics of their context words and the positions these words occur at. These representations are usually at a much lower dimensionality than that of the original vocabulary, forcing some form of compression to occur in the vocabulary. The intent is to train a model that can account for words that have not been observed in a given context before, but that word vector has enough similarity to another word that has been encountered in that context before. Earlier models simply learnt how to model the next word in a sequence, where each word in the vocabulary is initially represented by a randomly initialized vector. For each instance, a larger vector is assembled from the concatenation of the vectors of the words observed, and act as inputs into a model. This model itself is optimized to maximize the likelihood of the next word in the observed sequence, with the errors backpropagated through the vectors, with the parameters for the vectors being tied (Bengio et al., 2003) .",
"cite_spans": [
{
"start": 1112,
"end": 1133,
"text": "(Bengio et al., 2003)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Probabilistic Language Models",
"sec_num": "4"
},
{
"text": "In later studies, these representations are the product of training a neural network to maximize the margin between the scores it assigns to observed \"correct\" examples, which should have higher scores, and \"corrupted examples,\" where the \"heart\" dim=50 \"attack\" dim=50 \"heart attack\" dim=100",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Probabilistic Language Models",
"sec_num": "4"
},
{
"text": "Figure 1: Vector Window encoding for the phrase \"heart attack.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Probabilistic Language Models",
"sec_num": "4"
},
{
"text": "token of interest is swapped out to produce an incorrect example and preferably a lower score. As shown in (Collobert and Weston, 2008) and then (Huang et al., 2012) , simple distance measures using the representations derived from this process are both useful for assessing word similarity and relatedness. For this study, we used the contextually trained language vectors provided by (Huang et al., 2012) , which were trained to maximize the margin between training pairs and to account for document context as well. The dimensionality of these vectors was 50.",
"cite_spans": [
{
"start": 107,
"end": 135,
"text": "(Collobert and Weston, 2008)",
"ref_id": "BIBREF4"
},
{
"start": 145,
"end": 165,
"text": "(Huang et al., 2012)",
"ref_id": "BIBREF10"
},
{
"start": 386,
"end": 406,
"text": "(Huang et al., 2012)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Probabilistic Language Models",
"sec_num": "4"
},
{
"text": "As we are interested in capturing information at a level greater than individual words, we used two methods to combine these NLPM word vectors to represent an order n ngram: a Vector Window where we simply concatenated the word vectors, and one that relied on encodings learnt by Restricted Boltzmann Machines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Probabilistic Language Models",
"sec_num": "4"
},
{
"text": "For this work, we experimented with generating encodings for ngrams sized 2,3,5,10, and 21. The smaller sizes correspond to commonly those commonly used to match ngrams, while the larger ones were used to take advantage of the reduced sparsity. Similarities between a pair of ngram encodings is given similarity of their vector encodings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Probabilistic Language Models",
"sec_num": "4"
},
{
"text": "The most direct way to encode an order n ngram as a vector is to concatenate the n NLPM word vectors together, in order. For example, to encode \"heart attack\", the vectors for \"heart\" and \"attack\", both with dimensionality 50, are linked together to form a larger vector with dimensionality 100 (Figure 1) .",
"cite_spans": [],
"ref_spans": [
{
"start": 295,
"end": 305,
"text": "(Figure 1)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Vector Window",
"sec_num": "4.1"
},
{
"text": "For size n vector windows where the total number of tokens is less than n, we pad the left and right sides of the window with a \"negative\" token, which was selected to be a vector that, on the average, is anticorrelated with all the vectors in the vocabulary. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vector Window",
"sec_num": "4.1"
},
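A sketch of the Vector Window encoding and its padding, where NEGATIVE is a placeholder for the anticorrelated padding vector described above (the real padding vector's construction is not detailed here):

```python
# Sketch of the Vector Window: concatenate 50-dimensional word vectors of an
# ngram in order, center-padding short windows with a "negative" token.
import numpy as np

DIM = 50
NEGATIVE = -np.ones(DIM) / np.sqrt(DIM)  # placeholder for the padding vector

def vector_window(word_vecs: list[np.ndarray], n: int) -> np.ndarray:
    """Concatenate word vectors into one n*DIM vector, padded to size n."""
    pad = n - len(word_vecs)
    left = pad // 2
    right = pad - left
    padded = [NEGATIVE] * left + list(word_vecs) + [NEGATIVE] * right
    return np.concatenate(padded)

# e.g. "heart attack" in a size-2 window -> a 100-dimensional vector
rng = np.random.default_rng(1)
heart, attack = rng.normal(size=DIM), rng.normal(size=DIM)
print(vector_window([heart, attack], n=2).shape)  # (100,)
```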
{
"text": "Although the word vectors we used were trained against a ten word context, the vector windows may not be able to describe similarities at multiword level, as the method is still performing comparisons at a word-to-word level. For example the vector window score for the related phrases heart attack and cardiac arrest is 0.35. In order to account for similarities at a multiword level, we trained Restricted Boltzmann Machines (RBM) to further encode these vector windows (Hinton, 2002) . A RBM is a bipartite undirected graphical model, where the only edges are between a layer of input variables and a layer of latent variables. The latent layer consists of sigmoid units, allowing for non-linear combinations of the inputs. The training objective is to learn a set of weights that maximize the likelihood of training observations, and given the independences inherent, in the model it can be trained quickly and effectively via Contrastive Divergence. The end effect is the system attempts to force the latent layer to learn an encoding of the input variables, usually at a lower dimensionality. In our case, by compressing their distributional representations we hope to amplify significant similarities between multiword expressions, albeit for those of the same size.",
"cite_spans": [
{
"start": 472,
"end": 486,
"text": "(Hinton, 2002)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Restricted Boltzmann Machines",
"sec_num": "4.2"
},
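A simplified sketch of an RBM trained with one step of Contrastive Divergence (CD-1); the learning rate, initialization, and handling of real-valued visible units are simplifications for illustration, not the authors' exact setup:

```python
# Sketch of an RBM with sigmoid hidden units trained by CD-1: the weight
# update is the standard positive-minus-negative statistics form.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    def __init__(self, n_visible: int, n_hidden: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.W = 0.01 * rng.normal(size=(n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)  # visible biases
        self.b_h = np.zeros(n_hidden)   # hidden biases

    def encode(self, v: np.ndarray) -> np.ndarray:
        """Compressed encoding: hidden-unit activation probabilities."""
        return sigmoid(v @ self.W + self.b_h)

    def cd1_update(self, v0: np.ndarray, lr: float = 0.01) -> None:
        """One CD-1 step on a batch of vector windows (rows of v0)."""
        h0 = self.encode(v0)            # positive phase
        v1 = h0 @ self.W.T + self.b_v   # linear reconstruction of visibles
        h1 = self.encode(v1)            # negative phase
        self.W += lr * (v0.T @ h0 - v1.T @ h1) / len(v0)
        self.b_v += lr * (v0 - v1).mean(axis=0)
        self.b_h += lr * (h0 - h1).mean(axis=0)

# e.g. compress 100-dim windows (size-2 ngrams) down to 50 dimensions
rbm = RBM(n_visible=100, n_hidden=50)
windows = np.random.default_rng(2).normal(size=(32, 100))  # placeholder data
for _ in range(10):
    rbm.cd1_update(windows)
```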
{
"text": "To derive a RBM based encoding, we first generate a vector window for the ngram, and then used the trained RBM to arrive at the compressed vector ( Figure 2 ). As before, we derive a similarity score between two RBM based encodings by comparing their cosine distance.",
"cite_spans": [],
"ref_spans": [
{
"start": 148,
"end": 156,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Restricted Boltzmann Machines",
"sec_num": "4.2"
},
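A usage sketch tying these pieces together, assuming the vector_window and RBM sketches above: encode two same-size windows and compare the encodings with cosine similarity:

```python
# Sketch: RBM-encoded similarity between two same-size vector windows.
import numpy as np

def rbm_similarity(rbm, window_a: np.ndarray, window_b: np.ndarray) -> float:
    """Cosine similarity between the RBM encodings of two vector windows."""
    ea, eb = rbm.encode(window_a), rbm.encode(window_b)
    return float(ea @ eb / (np.linalg.norm(ea) * np.linalg.norm(eb)))

# e.g. rbm_similarity(rbm, vector_window([heart, attack], 2),
#                          vector_window([cardiac, arrest], 2))
```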
{
"text": "Following the above example, the vectors from an RBM trained system for heart attack and cardiac arrest score the pair at a higher similarity, 0.54. For phrases that are unrelated, comparing door key with cardiac arrest gives a score of -0.14 with the vector window, and RBM this is -0.17.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Restricted Boltzmann Machines",
"sec_num": "4.2"
},
{
"text": "To train a RBM encoder for order n ngrams, we generated n sized vector windows over ngrams drawn from the English language articles in Wikipedia. The language dump was filtered to larger sized articles, in order to avoid pages likely to be content-free, such as redirects. The training set size consisted of 35,581,434 words, which was split apart into 1,519,256 sentences using the OpenNLP sentence splitter tool 2 . The dimensionality of the encoding layer was set to 50 for window sizes 2,3,5, and 200 for the larger windows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Restricted Boltzmann Machines",
"sec_num": "4.2"
},
{
"text": "In order to produce an overall similarity score, we used a variant of the weighted variant of the similarity combination method given in (Mihalcea et al., 2006) . Here, we generated a directional similarity score from a source to target by the following,",
"cite_spans": [
{
"start": 137,
"end": 160,
"text": "(Mihalcea et al., 2006)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Combining word and ngram similarity scores",
"sec_num": "4.3"
},
{
"text": "sim(S, T ) = s\u2208S maxSim(s, T ) |S| (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combining word and ngram similarity scores",
"sec_num": "4.3"
},
{
"text": "where maxSim(s, T ) represents the maximum similarity between the token s and the set of tokens in the target sentence, T . In the case of ngrams with order 2 or greater, we treat each ngram as a token for the combination.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combining word and ngram similarity scores",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "avgsim(T 1 , T 2 ) = 1 2 (sim(T 1 , T 2 ) + sim(T 2 , T 1 ))",
"eq_num": "(3)"
}
],
"section": "Combining word and ngram similarity scores",
"sec_num": "4.3"
},
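A direct sketch of Formulas 2 and 3, where sim_fn stands for any of the encoding similarities sketched earlier, and ngrams of order 2 or greater are simply treated as tokens:

```python
# Sketch of the averaged-maximum similarity combination (Formulas 2 and 3).
def directional_sim(source, target, sim_fn) -> float:
    """sim(S, T): mean over s in S of the max similarity of s to any t in T."""
    if not source or not target:
        return 0.0
    return sum(max(sim_fn(s, t) for t in target) for s in source) / len(source)

def avg_sim(t1, t2, sim_fn) -> float:
    """avgsim(T1, T2) = (sim(T1, T2) + sim(T2, T1)) / 2."""
    return 0.5 * (directional_sim(t1, t2, sim_fn) + directional_sim(t2, t1, sim_fn))
```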
{
"text": "Unlike the original method, we treated each term equally, in order to account for ngrams with order 2 and above. We also did not filter based off of the part of speech, relying on the scores themselves to help perform the filtering.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combining word and ngram similarity scores",
"sec_num": "4.3"
},
{
"text": "In addition to the given word window sizes, we also directly assess the word-to-word similarity scores by comparing the word vectors directly, using a window size of one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combining word and ngram similarity scores",
"sec_num": "4.3"
},
{
"text": "System 2, the TFIDF-ESA score for a pair is a feature. For each of the given ngram sizes, we treated ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Setup",
"sec_num": "5"
},
{
"text": "Training (2012) Test (2013) Surprise1 (ONWN) FNWN MSRPar Headlines Surprise1 (ONWN) ONWN Surprise2 (SMT) SMT",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Setup",
"sec_num": "5"
},
{
"text": "The results of our three runs are given in the top half of Table 2 . To get a better sense of the contribution of the new components, we also ran the NLPM vector window and RBM window models and TFIDF-ESA components individually against the test sets.",
"cite_spans": [],
"ref_spans": [
{
"start": 59,
"end": 66,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "6"
},
{
"text": "The NLPM system was trained using the same SVR setup as the main experiment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "6"
},
{
"text": "In order to provide a lexical match comparison for the NLPM system, we experimented with a ngram matching system, where ngrams of size 1,2,3,5,10, and 21 were used to generate similarity scores via the same combination method as the NLPM models. Here, hard matching was performed, where matching ngrams were given a score of 1, else 0. Again, we used the main experiment SVR setup to combine the scores from the various ngram sizes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "6"
},
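For comparison, this lexical baseline reduces to a hard 0/1 match plugged into the same averaged-maximum combination; a minimal sketch, reusing the avg_sim helper from the earlier combination sketch:

```python
# Sketch of the hard ngram-matching baseline: identical ngrams score 1,
# otherwise 0, combined by the same averaged-maximum method as the NLPM runs.
def hard_match(ngram_a: tuple, ngram_b: tuple) -> float:
    return 1.0 if ngram_a == ngram_b else 0.0

# e.g. avg_sim(list(ngrams(sent1, n)), list(ngrams(sent2, n)), hard_match)
# for n in {1, 2, 3, 5, 10, 21}, with the per-size scores combined by SVR.
```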
{
"text": "We found that overall the previous year's system did not perform adequately on the evaluation datasets, short of the headlines dataset. Oddly enough, TFIDF-ESA by itself would have arrived at a good correlation with OnWN: one possible explanation for this would be the fact that TFIDF-ESA by itself is essentially an order-free \"bag of words\" model that assesses soft token to token similarity. As the other systems incorporate either some notion of sequence and/or require strict lexical matching, it is possible that characterization does not help with the OnWN sense definitions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "6"
},
{
"text": "Combining the new features with the previous year's system gave poorer performance; a preliminary assessment over the training sets showed some degree of overfitting, likely due to high correlation between the NLPM features and last year's directional measures.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "6"
},
{
"text": "When using the same combination method, ngram matching via lexical content over ngrams gave poorer results than those from NLPM models, as given in Table 2 . This would also argue for identifying better combination methods than the averaged maximum similarity method used here.",
"cite_spans": [],
"ref_spans": [
{
"start": 148,
"end": 155,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "6"
},
{
"text": "What is interesting to note is that the NLPM and TFIDF-ESA systems do not rely on any part of speech information, nor hand-crafted semantic similarity resources. Instead, these methods are derived from large scale corpora, and generally outperformed the previous year's system which relied on that extra information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "6"
},
{
"text": "To get a better understanding of the NLPM and TFIDF-ESA models, we compared how the components would score the similarity between pairs of two word phrases, given in Table 3 . At least over this small sampling we genearted, we found that in general the RBM method tended to have a much wider range of scores than the Vector Window, although both methods were very correlated. Both systems had very low correlation with TFIDF-ESA.",
"cite_spans": [],
"ref_spans": [
{
"start": 166,
"end": 173,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "6"
},
{
"text": "One area of improvement would be to develop a better method for combining the various ngram similarity scores provided by the NLPMs. When using lexical matching of ngrams, we found that the combination method used here proved inferior to the directional measures from the previous year's systems. This would argue for a better way to use the NLPMs. As training STS pairs are available with gold scores, this would argue for some form of supervised training. For training similarities between multiword expressions, proxy measures for similarity, such as the Normalized Google Distance (Cilibrasi and Vit\u00e1nyi, 2004) , may be feasible.",
"cite_spans": [
{
"start": 585,
"end": 614,
"text": "(Cilibrasi and Vit\u00e1nyi, 2004)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Future Work",
"sec_num": "7"
},
{
"text": "Another avenue would be to allow the NLPM methods to encode arbitrary sized text spans, as the current restriction on spans being the same size is Table 3 : Cosine similarity of two input strings, as given by the vectors generated from the Vector Window size 2, RBM Window size 2, and TFIDF-ESA.",
"cite_spans": [],
"ref_spans": [
{
"start": 147,
"end": 154,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Future Work",
"sec_num": "7"
},
{
"text": "unrealistic. One possibility is to use recurrent neural network techniques to generate this type of encoding. Finally, the size of the Wikipedia dump used to train the Restricted Boltzmann Machines could be at issue, as 35 million words could be considered small compared to the full range of expressions we would wish to capture, especially for the larger window spans. A larger training corpus may be needed to fully see the benefit from RBMs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future Work",
"sec_num": "7"
},
{
"text": "http://webdocs.cs.ualberta.ca/ lindek/downloads.htm",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://opennlp.apache.org/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Supported by the Artificial Intelligence Center at SRI International. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the Artificial Intelligence Center, or SRI International.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Exploring knowledge bases for similarity",
"authors": [
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Montse",
"middle": [],
"last": "Cuadros",
"suffix": ""
},
{
"first": "German",
"middle": [],
"last": "Rigau",
"suffix": ""
},
{
"first": "Aitor",
"middle": [],
"last": "Soroa",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eneko Agirre, Montse Cuadros, German Rigau, and Aitor Soroa. 2010. Exploring knowledge bases for similar- ity. In Proceedings of the International Conference on Language Resources and Evaluation 2010.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A neural probabilistic language model",
"authors": [
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "R\u00e9jean",
"middle": [],
"last": "Ducharme",
"suffix": ""
},
{
"first": "Pascal",
"middle": [],
"last": "Vincent",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Janvin",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of Machine Learning Research",
"volume": "3",
"issue": "",
"pages": "1137--1155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoshua Bengio, R\u00e9jean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A neural probabilistic lan- guage model. Journal of Machine Learning Research, 3:1137-1155.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "LIBSVM: A library for support vector machines",
"authors": [
{
"first": "Chih-Chung",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Chih-Jen",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2011,
"venue": "ACM Transactions on Intelligent Systems and Technology",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chih-Chung Chang and Chih-Jen Lin. 2011. LIBSVM: A library for support vector machines. ACM Transac- tions on Intelligent Systems and Technology, 2:27:1- 27:27.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The google similarity distance",
"authors": [
{
"first": "Rudi",
"middle": [],
"last": "Cilibrasi",
"suffix": ""
},
{
"first": "M",
"middle": [
"B"
],
"last": "Paul",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Vit\u00e1nyi",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rudi Cilibrasi and Paul M. B. Vit\u00e1nyi. 2004. The google similarity distance. CoRR, abs/cs/0412098.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A unified architecture for natural language processing: Deep neural networks with multitask learning",
"authors": [
{
"first": "R",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2008,
"venue": "International Conference on Machine Learning, ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Collobert and J. Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In International Conference on Machine Learning, ICML.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "WordNet -An Electronic Lexical Database",
"authors": [
{
"first": "Christine",
"middle": [],
"last": "Fellbaum",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christine Fellbaum. 1998. WordNet -An Electronic Lex- ical Database. MIT Press.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A semantic similarity approach to paraphrase detection",
"authors": [
{
"first": "Samuel",
"middle": [],
"last": "Fernando",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Stevenson",
"suffix": ""
}
],
"year": 2008,
"venue": "Computational Linguistics UK (CLUK 2008) 11th Annual Research Colloqium",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samuel Fernando and Mark Stevenson. 2008. A se- mantic similarity approach to paraphrase detection. In Computational Linguistics UK (CLUK 2008) 11th An- nual Research Colloqium.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Using machine translation evaluation techniques to determine sentence-level semantic equivalence",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Finch",
"suffix": ""
},
{
"first": "Young-Sook",
"middle": [],
"last": "Hwang",
"suffix": ""
},
{
"first": "Eiichio",
"middle": [],
"last": "Sumita",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Third International Workshop on Paraphrasing (IWP 2005)",
"volume": "",
"issue": "",
"pages": "17--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew Finch, Young-Sook Hwang, and Eiichio Sumita. 2005. Using machine translation evaluation tech- niques to determine sentence-level semantic equiva- lence. In Proceedings of the Third International Work- shop on Paraphrasing (IWP 2005), pages 17-24, Jeju Island, South Korea.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Wikipedia-based semantic interpretation",
"authors": [
{
"first": "Evgeniy",
"middle": [],
"last": "Gabrilovich",
"suffix": ""
},
{
"first": "Shaul",
"middle": [],
"last": "Markovitch",
"suffix": ""
}
],
"year": 2009,
"venue": "Journal of Artificial Intelligence Research",
"volume": "34",
"issue": "",
"pages": "443--498",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Evgeniy Gabrilovich and Shaul Markovitch. 2009. Wikipedia-based semantic interpretation. Journal of Artificial Intelligence Research, 34:443-498.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Training products of experts by minimizing contrastive divergence",
"authors": [
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
}
],
"year": 2002,
"venue": "Neural Computation",
"volume": "14",
"issue": "8",
"pages": "1771--1800",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Geoffrey E. Hinton. 2002. Training products of experts by minimizing contrastive divergence. Neural Com- putation, 14(8):1771-1800.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Improving Word Representations via Global Context and Multiple Word Prototypes",
"authors": [
{
"first": "Eric",
"middle": [
"H"
],
"last": "Huang",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2012,
"venue": "Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric H. Huang, Richard Socher, Christopher D. Manning, and Andrew Y. Ng. 2012. Improving Word Represen- tations via Global Context and Multiple Word Proto- types. In Annual Meeting of the Association for Com- putational Linguistics (ACL).",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Corpus-based and knowledge-based measures of text semantic similarity",
"authors": [
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "Courtney",
"middle": [],
"last": "Corley",
"suffix": ""
},
{
"first": "Carlo",
"middle": [],
"last": "Strapparava",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the American Association for Artificial Intelligence (AAAI 2006)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rada Mihalcea, Courtney Corley, and Carlo Strappar- ava. 2006. Corpus-based and knowledge-based mea- sures of text semantic similarity. In Proceedings of the American Association for Artificial Intelligence (AAAI 2006), Boston, Massachusetts, July.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL '02",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computa- tional Linguistics, ACL '02, pages 311-318, Strouds- burg, PA, USA. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Sri and ubc: Simple similarity features for semantic textual similarity",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Yeh",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of SemEval",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric Yeh and Eneko Agirre. 2012. Sri and ubc: Simple similarity features for semantic textual similarity. In Proceedings of SemEval 2012.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "Using a RBM trained compressor to generate a compressed encoding of \"heart attack.\"",
"type_str": "figure",
"num": null
},
"TABREF1": {
"type_str": "table",
"text": "",
"num": null,
"content": "<table><tr><td>: Train (2012) and Test (2013) sets used to train</td></tr><tr><td>the regressors.</td></tr><tr><td>the ngram similarity scores from the Vector Window</td></tr><tr><td>and RBM methods as individual features. System</td></tr><tr><td>3 combines the features from System 2 with those</td></tr><tr><td>from System 1. For Systems 2 and 3, the SVR setup</td></tr><tr><td>used by System 1 was used to develop scorers. As no</td></tr><tr><td>training immediate training sets were provided for</td></tr><tr><td>the evaluation sets, we used the train and test parti-</td></tr><tr><td>tions given in Table 1, training on both the 2012 train</td></tr><tr><td>and test data, where gold scores were available.</td></tr></table>",
"html": null
},
"TABREF3": {
"type_str": "table",
"text": "Pearson correlation of systems against the test datasets (top). The test set performance for the new Neural Probabilistic Language Model (NLPM) and TFIDF-ESA components are given, along with a lexical-only variant for comparison (bottom).",
"num": null,
"content": "<table><tr><td>String 1</td><td>String 2</td><td colspan=\"3\">Vec. Window RBM Window TFIDF-ESA</td></tr><tr><td>heart attack</td><td>cardiac arrest</td><td>0.354</td><td>0.544</td><td>0.182</td></tr><tr><td>door key</td><td>cardiac arrest</td><td>-0.14</td><td>-0.177</td><td>0</td></tr><tr><td>baby food</td><td>cat food</td><td>0.762</td><td>0.907</td><td>0.079</td></tr><tr><td>dog food</td><td>cat food</td><td>0.886</td><td>0.914</td><td>0.158</td></tr><tr><td>rotten food</td><td>baby food</td><td>0.482</td><td>0.473</td><td>0.071</td></tr><tr><td>frozen solid</td><td>thawed out</td><td>0.046</td><td>-0.331</td><td>0.102</td></tr><tr><td colspan=\"2\">severely burnt frozen stiff</td><td>-0.023</td><td>-0.155</td><td>0</td></tr><tr><td>uphill slog</td><td>raced downhill</td><td>0.03</td><td>-0.322</td><td>0.043</td></tr><tr><td>small cat</td><td>large dog</td><td>0.817</td><td>0.905</td><td>0.007</td></tr><tr><td>ran along</td><td>sprinted by</td><td>0.31</td><td>0.238</td><td>0.004</td></tr><tr><td>ran quickly</td><td>jogged rapidly</td><td>0.349</td><td>0.327</td><td>0.001</td></tr><tr><td>deathly ill</td><td>very sick</td><td>0.002</td><td>0.177</td><td>0.004</td></tr><tr><td>ran to</td><td>raced to</td><td>0.815</td><td>0.829</td><td>0.013</td></tr><tr><td>free drinks</td><td>drinks free</td><td>0.001</td><td>0.042</td><td>1</td></tr><tr><td>door key</td><td colspan=\"2\">combination lock 0.098</td><td>0.093</td><td>0.104</td></tr><tr><td>frog blast</td><td>vent core</td><td>0.003</td><td>0.268</td><td>0.004</td></tr></table>",
"html": null
}
}
}
}