{
"paper_id": "S13-1023",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:42:41.453923Z"
},
"title": "LIPN-CORE: Semantic Text Similarity using n-grams, WordNet, Syntactic Analysis, ESA and Information Retrieval based Features",
"authors": [
{
"first": "Davide",
"middle": [],
"last": "Buscaldi",
"suffix": "",
"affiliation": {
"laboratory": "UMR 7030",
"institution": "Universit\u00e9 Paris 13",
"location": {
"addrLine": "Sorbonne Paris Cit\u00e9",
"postCode": "F-93430",
"settlement": "Villetaneuse",
"country": "France"
}
},
"email": "[email protected]"
},
{
"first": "Joseph",
"middle": [],
"last": "Le Roux",
"suffix": "",
"affiliation": {
"laboratory": "UMR 7030",
"institution": "Universit\u00e9 Paris 13",
"location": {
"addrLine": "Sorbonne Paris Cit\u00e9",
"postCode": "F-93430",
"settlement": "Villetaneuse",
"country": "France"
}
},
"email": "[email protected]"
},
{
"first": "Jorge",
"middle": [
"J Garc\u00eda"
],
"last": "Flores",
"suffix": "",
"affiliation": {
"laboratory": "UMR 7030",
"institution": "Universit\u00e9 Paris 13",
"location": {
"addrLine": "Sorbonne Paris Cit\u00e9",
"postCode": "F-93430",
"settlement": "Villetaneuse",
"country": "France"
}
},
"email": "[email protected]"
},
{
"first": "Adrian",
"middle": [
"Popescu"
],
"last": "Cea",
"suffix": "",
"affiliation": {
"laboratory": "Vision & Content Engineering Laboratory",
"institution": "",
"location": {
"postCode": "F-91190",
"settlement": "Gif-sur-Yvette",
"country": "France"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes the system used by the LIPN team in the Semantic Textual Similarity task at *SEM 2013. It uses a support vector regression model, combining different text similarity measures that constitute the features. These measures include simple distances like Levenshtein edit distance, cosine, Named Entities overlap and more complex distances like Explicit Semantic Analysis, WordNet-based similarity, IR-based similarity, and a similarity measure based on syntactic dependencies.",
"pdf_parse": {
"paper_id": "S13-1023",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes the system used by the LIPN team in the Semantic Textual Similarity task at *SEM 2013. It uses a support vector regression model, combining different text similarity measures that constitute the features. These measures include simple distances like Levenshtein edit distance, cosine, Named Entities overlap and more complex distances like Explicit Semantic Analysis, WordNet-based similarity, IR-based similarity, and a similarity measure based on syntactic dependencies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The Semantic Textual Similarity task (STS) at *SEM 2013 requires systems to grade the degree of similarity between pairs of sentences. It is closely related to other well known tasks in NLP such as textual entailment, question answering or paraphrase detection. However, as noticed in (B\u00e4r et al., 2012) , the major difference is that STS systems must give a graded, as opposed to binary, answer.",
"cite_spans": [
{
"start": 285,
"end": 303,
"text": "(B\u00e4r et al., 2012)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "One of the most successful systems in *SEM 2012 STS, (B\u00e4r et al., 2012) , managed to grade pairs of sentences accurately by combining focused measures, either simple ones based on surface features (ie n-grams), more elaborate ones based on lexical semantics, or measures requiring external corpora such as Explicit Semantic Analysis, into a robust measure by using a log-linear regression model.",
"cite_spans": [
{
"start": 53,
"end": 71,
"text": "(B\u00e4r et al., 2012)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The LIPN-CORE system is built upon this idea of combining simple measures with a regression model to obtain a robust and accurate measure of textual similarity, using the individual measures as fea-tures for the global system. These measures include simple distances like Levenshtein edit distance, cosine, Named Entities overlap and more complex distances like Explicit Semantic Analysis, WordNetbased similarity, IR-based similarity, and a similarity measure based on syntactic dependencies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The paper is organized as follows. Measures are presented in Section 2. Then the regression model, based on Support Vector Machines, is described in Section 3. Finally we discuss the results of the system in Section 4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "First of all, sentences p and q are analysed in order to extract all the included WordNet synsets. For each WordNet synset, we keep noun synsets and put into the set of synsets associated to the sentence, C p and C q , respectively. If the synsets are in one of the other POS categories (verb, adjective, adverb) we look for their derivationally related forms in order to find a related noun synset: if there is one, we put this synsets in C p (or C q ). For instance, the word \"playing\" can be associated in WordNet to synset (v)play#2, which has two derivationally related forms corresponding to synsets (n)play#5 and (n)play#6: these are the synsets that are added to the synset set of the sentence. No disambiguation process is carried out, so we take all possible meanings into account. Given C p and C q as the sets of concepts contained in sentences p and q, respectively, with |C p | \u2265 |C q |, the conceptual similarity between p and q is calculated as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WordNet-based Conceptual Similarity (Proxigenea)",
"sec_num": "2.1"
},
{
"text": "ss(p, q) = c 1 \u2208Cp max c 2 \u2208Cq s(c 1 , c 2 ) |C p | (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WordNet-based Conceptual Similarity (Proxigenea)",
"sec_num": "2.1"
},
{
"text": "where s(c 1 , c 2 ) is a conceptual similarity measure. Concept similarity can be calculated by different ways. For the participation in the 2013 Semantic Textual Similarity task, we used a variation of the Wu-Palmer formula (Wu and Palmer, 1994) named \"ProxiGenea\" (from the french Proximit\u00e9 G\u00e9n\u00e9alogique, genealogical proximity), introduced by (Dudognon et al., 2010), which is inspired by the analogy between a family tree and the concept hierarchy in WordNet. Among the different formulations proposed by (Dudognon et al., 2010), we chose the ProxiGenea3 variant, already used in the STS 2012 task by the IRIT team (Buscaldi et al., 2012) . The ProxiGenea3 measure is defined as:",
"cite_spans": [
{
"start": 225,
"end": 246,
"text": "(Wu and Palmer, 1994)",
"ref_id": "BIBREF10"
},
{
"start": 619,
"end": 642,
"text": "(Buscaldi et al., 2012)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "WordNet-based Conceptual Similarity (Proxigenea)",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "s(c 1 , c 2 ) = 1 1 + d(c 1 ) + d(c 2 ) \u2212 2 \u2022 d(c 0 )",
"eq_num": "(2)"
}
],
"section": "WordNet-based Conceptual Similarity (Proxigenea)",
"sec_num": "2.1"
},
{
"text": "where c 0 is the most specific concept that is present both in the synset path of c 1 and c 2 (that is, the Least Common Subsumer or LCS). The function returning the depth of a concept is noted with d.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WordNet-based Conceptual Similarity (Proxigenea)",
"sec_num": "2.1"
},
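{
"text": "As an illustration, a minimal Python sketch of ProxiGenea3 (Equations 1-2), assuming NLTK's WordNet interface; the helper names are ours, not part of the original system:\n\nfrom nltk.corpus import wordnet as wn\n\ndef proxigenea3(c1, c2):\n    # d(c): depth in the hypernym hierarchy; NLTK counts the root as depth 0,\n    # so we shift by 1 to keep the denominator strictly positive.\n    lcs = c1.lowest_common_hypernyms(c2)\n    if not lcs:\n        return 0.0\n    d0 = lcs[0].max_depth() + 1\n    d1, d2 = c1.max_depth() + 1, c2.max_depth() + 1\n    return 1.0 / (1 + d1 + d2 - 2 * d0)\n\ndef ss(Cp, Cq):\n    # Eq. 1: average, over the concepts of the longer sentence, of the best match.\n    if not Cp or not Cq:\n        return 0.0\n    if len(Cp) < len(Cq):\n        Cp, Cq = Cq, Cp\n    return sum(max(proxigenea3(c1, c2) for c2 in Cq) for c1 in Cp) / len(Cp)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WordNet-based Conceptual Similarity (Proxigenea)",
"sec_num": "2.1"
},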
{
"text": "This measure has been proposed by (Mihalcea et al., 2006) as a corpus-based measure which uses Resnik's Information Content (IC) and the Jiang-Conrath (Jiang and Conrath, 1997 ) similarity metric:",
"cite_spans": [
{
"start": 34,
"end": 57,
"text": "(Mihalcea et al., 2006)",
"ref_id": null
},
{
"start": 151,
"end": 175,
"text": "(Jiang and Conrath, 1997",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "IC-based Similarity",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "s jc (c 1 , c 2 ) = 1 IC(c 1 ) + IC(c 2 ) \u2212 2 \u2022 IC(c 0 )",
"eq_num": "(3)"
}
],
"section": "IC-based Similarity",
"sec_num": "2.2"
},
{
"text": "where IC is the information content introduced by (Resnik, 1995) as IC(c) = \u2212 log P (c). The similarity between two text segments T 1 and T 2 is therefore determined as:",
"cite_spans": [
{
"start": 50,
"end": 64,
"text": "(Resnik, 1995)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "IC-based Similarity",
"sec_num": "2.2"
},
{
"text": "sim(T1, T2) = 1 2 \uf8eb \uf8ec \uf8ed w\u2208{T 1 } max w 2 \u2208{T 2 } ws(w, w2) * idf (w) w\u2208{T 1 } idf (w) + w\u2208{T 2 } max w 1 \u2208{T 1 } ws(w, w1) * idf (w) w\u2208{T 2 } idf (w) \uf8f6 \uf8f7 \uf8f8(4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IC-based Similarity",
"sec_num": "2.2"
},
{
"text": "where idf (w) is calculated as the inverse document frequency of word w, taking into account Google Web 1T (Brants and Franz, 2006) frequency counts. The semantic similarity between words is calculated as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IC-based Similarity",
"sec_num": "2.2"
},
{
"text": "ws(w i , w j ) = max c i \u2208W i ,c j inW j s jc (c i , c j ). (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IC-based Similarity",
"sec_num": "2.2"
},
{
"text": "where W i and W j are the sets containing all synsets in WordNet corresponding to word w i and w j , respectively. The IC values used are those calculated by Ted Pedersen (Pedersen et al., 2004) on the British National Corpus 1 .",
"cite_spans": [
{
"start": 171,
"end": 194,
"text": "(Pedersen et al., 2004)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "IC-based Similarity",
"sec_num": "2.2"
},
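{
"text": "A hedged sketch of Equations 3-5, assuming NLTK's WordNet and its precomputed information-content file for the BNC; jcn_similarity implements Equation 3 directly, and idf is a caller-supplied function:\n\nfrom nltk.corpus import wordnet as wn, wordnet_ic\n\nbnc_ic = wordnet_ic.ic('ic-bnc.dat')  # IC counts computed on the British National Corpus\n\ndef ws(wi, wj):\n    # Eq. 5: best Jiang-Conrath score over all synset pairs of the two words.\n    best = 0.0\n    for ci in wn.synsets(wi):\n        for cj in wn.synsets(wj):\n            try:\n                best = max(best, ci.jcn_similarity(cj, bnc_ic))\n            except Exception:  # different POS, or concept missing from the IC file\n                pass\n    return best\n\ndef sim(T1, T2, idf):\n    # Eq. 4: symmetric, idf-weighted aggregation of the word-to-word scores.\n    def half(A, B):\n        if not A or not B:\n            return 0.0\n        num = sum(max(ws(w, w2) for w2 in B) * idf(w) for w in A)\n        return num / sum(idf(w) for w in A)\n    return 0.5 * (half(T1, T2) + half(T2, T1))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IC-based Similarity",
"sec_num": "2.2"
},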
{
"text": "We also wanted for our systems to take syntactic similarity into account. As our measures are lexically grounded, we chose to use dependencies rather than constituents. Previous experiments showed that converting constituents to dependencies still achieved best results on out-of-domain texts (Le Roux et al., 2012), so we decided to use a 2-step architecture to obtain syntactic dependencies. First we parsed pairs of sentences with the LORG parser 2 . Second we converted the resulting parse trees to Stanford dependencies 3 . Given the sets of parsed dependencies D p and D q , for sentence p and q, a dependency d \u2208 D x is a triple (l, h, t) where l is the dependency label (for instance, dobj or prep), h the governor and t the dependant. We define the following similarity measure between two syntactic dependencies",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Dependencies",
"sec_num": "2.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "d 1 = (l 1 , h 1 , t 1 ) and d 2 = (l 2 , h 2 , t 2 ): dsim(d1, d2) = Lev(l1, l2) * idf h * sW N (h1, h2) + idft * sW N (t1, t2) 2",
"eq_num": "(6)"
}
],
"section": "Syntactic Dependencies",
"sec_num": "2.3"
},
{
"text": "where idf h = max(idf (h 1 ), idf (h 2 )) and idf t = max(idf (t 1 ), idf (t 2 )) are the inverse document frequencies calculated on Google Web 1T for the governors and the dependants (we retain the maximum for each pair), and s W N is calculated using formula 2, with two differences:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Dependencies",
"sec_num": "2.3"
},
{
"text": "\u2022 if the two words to be compared are antonyms, then the returned score is 0;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Dependencies",
"sec_num": "2.3"
},
{
"text": "\u2022 if one of the words to be compared is not in WordNet, their similarity is calculated using the Levenshtein distance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Dependencies",
"sec_num": "2.3"
},
{
"text": "The similarity score between p and q, is then calculated as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Dependencies",
"sec_num": "2.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "s SD (p, q) = max \uf8eb \uf8ec \uf8ed d i \u2208Dp max d j inDq dsim(d i , d j ) |D p | , d i \u2208Dq max d j inDp dsim(d i , d j ) |D q | \uf8f6 \uf8f7 \uf8f8",
"eq_num": "(7)"
}
],
"section": "Syntactic Dependencies",
"sec_num": "2.3"
},
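{
"text": "A minimal sketch of Equations 6-7 over dependency triples (label, governor, dependent); here idf, s_wn (Equation 2 with the two modifications above) and lev_ratio (a normalised Levenshtein similarity on labels) are assumed helpers:\n\ndef dsim(d1, d2, idf, s_wn, lev_ratio):\n    # Eq. 6: label similarity times the idf-weighted average of the node scores.\n    (l1, h1, t1), (l2, h2, t2) = d1, d2\n    idf_h = max(idf(h1), idf(h2))\n    idf_t = max(idf(t1), idf(t2))\n    return lev_ratio(l1, l2) * (idf_h * s_wn(h1, h2) + idf_t * s_wn(t1, t2)) / 2.0\n\ndef s_sd(Dp, Dq, idf, s_wn, lev_ratio):\n    # Eq. 7: best-match average in both directions, keeping the higher value.\n    if not Dp or not Dq:\n        return 0.0\n    def side(A, B):\n        return sum(max(dsim(da, db, idf, s_wn, lev_ratio) for db in B) for da in A) / len(A)\n    return max(side(Dp, Dq), side(Dq, Dp))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Dependencies",
"sec_num": "2.3"
},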
{
"text": "2.4 Information Retrieval-based Similarity",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Dependencies",
"sec_num": "2.3"
},
{
"text": "Let us consider two texts p and q, an Information Retrieval (IR) system S and a document collection D indexed by S. This measure is based on the assumption that p and q are similar if the documents retrieved by S for the two texts, used as input queries, are ranked similarly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Dependencies",
"sec_num": "2.3"
},
{
"text": "Let L_p = \\{d^p_1, \\ldots, d^p_K\\} and L_q = \\{d^q_1, \\ldots, d^q_K\\}, with d^x_i \\in D, be the sets of the top K documents retrieved by S for texts p and q, respectively. Let s_p(d) and s_q(d) be the scores assigned by S to a document d for the queries p and q, respectively. Then, the similarity score is calculated as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Information Retrieval-based Similarity",
"sec_num": "2.4"
},
{
"text": "sim IR (p, q) = 1 \u2212 d\u2208Lp\u2229Lq \u221a (sp(d)\u2212sq(d)) 2 max(sp(d),sq(d)) |L p \u2229 L q | (8) if |L p \u2229 L q | = \u2205, 0 otherwise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Dependencies",
"sec_num": "2.3"
},
{
"text": "For the participation in this task we indexed a collection composed by the AQUAINT-2 4 and the English NTCIR-8 5 document collections, using the Lucene 6 4.2 search engine with BM25 similarity. The K value was empirically set to 20 after some tests on the STS 2012 data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Dependencies",
"sec_num": "2.3"
},
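{
"text": "A sketch of Equation 8, assuming the retrieval step has already produced, for each query, a dict from document ids to BM25 scores over its top-K hits (note that sqrt((s_p(d) - s_q(d))^2) is simply the absolute difference):\n\ndef sim_ir(scores_p, scores_q):\n    # scores_p, scores_q: {doc_id: score} for the top-K results of p and q.\n    shared = set(scores_p) & set(scores_q)\n    if not shared:\n        return 0.0  # no common documents in the two top-K lists\n    penalty = sum(abs(scores_p[d] - scores_q[d]) / max(scores_p[d], scores_q[d])\n                  for d in shared) / len(shared)\n    return 1.0 - penalty",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Information Retrieval-based Similarity",
"sec_num": "2.4"
},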
{
"text": "Explicit Semantic Analysis (Gabrilovich and Markovitch, 2007) represents meaning as a 4 http://www.nist.gov/tac/data/data_desc.html#AQUAINT-2 5 http://metadata.berkeley.edu/NTCIR-GeoTime/ ntcir-8-databases.php 6 http://lucene.apache.org/core weighted vector of Wikipedia concepts. Weights are supposed to quantify the strength of the relation between a word and each Wikipedia concept using the tf-idf measure. A text is then represented as a high-dimensional real valued vector space spanning all along the Wikipedia database. For this particular task we adapt the research-esa implementation (Sorg and Cimiano, 2008) 7 to our own home-made weighted vectors corresponding to a Wikipedia snapshot of February 4th, 2013.",
"cite_spans": [
{
"start": 27,
"end": 61,
"text": "(Gabrilovich and Markovitch, 2007)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "ESA",
"sec_num": "2.5"
},
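{
"text": "A toy sketch of the ESA comparison, under the assumption that the Wikipedia index is available as a word-to-concept-vector mapping (concept_vec below); the actual system relies on the adapted research-esa implementation with home-made vectors:\n\nfrom collections import defaultdict\nfrom math import sqrt\n\ndef esa_vector(text, concept_vec):\n    # A text is represented as the sum of the weighted concept vectors of its words.\n    v = defaultdict(float)\n    for w in text.lower().split():\n        for concept, weight in concept_vec.get(w, {}).items():\n            v[concept] += weight\n    return v\n\ndef esa_relatedness(u, v):\n    # Cosine between the two concept vectors.\n    dot = sum(x * v.get(k, 0.0) for k, x in u.items())\n    nu = sqrt(sum(x * x for x in u.values()))\n    nv = sqrt(sum(x * x for x in v.values()))\n    return dot / (nu * nv) if nu and nv else 0.0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ESA",
"sec_num": "2.5"
},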
{
"text": "This feature is based on the Clustered Keywords Positional Distance (CKPD) model proposed in (Buscaldi et al., 2009 ) for the passage retrieval task. The similarity between a text fragment p and another text fragment q is calculated as:",
"cite_spans": [
{
"start": 93,
"end": 115,
"text": "(Buscaldi et al., 2009",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram based Similarity",
"sec_num": "2.6"
},
{
"text": "sim ngrams (p, q) = \u2200x\u2208Q h(x, P ) 1 d(x, x max ) n i=1 w i (9)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram based Similarity",
"sec_num": "2.6"
},
{
"text": "Where P is the set of n-grams with the highest weight in p, where all terms are also contained in q; Q is the set of all the possible n-grams in q and n is the total number of terms in the longest passage. The weights for each term and each n-gram are calculated as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram based Similarity",
"sec_num": "2.6"
},
{
"text": "\u2022 w i calculates the weight of the term t I as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram based Similarity",
"sec_num": "2.6"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "w i = 1 \u2212 log(n i ) 1 + log(N )",
"eq_num": "(10)"
}
],
"section": "N-gram based Similarity",
"sec_num": "2.6"
},
{
"text": "Where n i is the frequency of term t i in the Google Web 1T collection, and N is the frequency of the most frequent term in the Google Web 1T collection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram based Similarity",
"sec_num": "2.6"
},
{
"text": "\u2022 the function h(x, P ) measures the weight of each n-gram and is defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram based Similarity",
"sec_num": "2.6"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h(x, P j ) = j k=1 w k if x \u2208 P j 0 otherwise",
"eq_num": "(11)"
}
],
"section": "N-gram based Similarity",
"sec_num": "2.6"
},
{
"text": "Where w k is the weight of the k-th term (see Equation 10) and j is the number of terms that compose the n-gram x;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram based Similarity",
"sec_num": "2.6"
},
{
"text": "\u2022 1 d(x,xmax)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram based Similarity",
"sec_num": "2.6"
},
{
"text": "is a distance factor which reduces the weight of the n-grams that are far from the heaviest n-gram. The function d(x, x max ) determines numerically the value of the separation according to the number of words between a n-gram and the heaviest one:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram based Similarity",
"sec_num": "2.6"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "d(x, x max ) = 1 + k\u2022 ln(1 + L)",
"eq_num": "(12)"
}
],
"section": "N-gram based Similarity",
"sec_num": "2.6"
},
{
"text": "where k is a factor that determines the importance of the distance in the similarity calculation and L is the number of words between a n-gram and the heaviest one (see Equation 11).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram based Similarity",
"sec_num": "2.6"
},
{
"text": "In our experiments, k was set to 0.1, the default value in the original model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram based Similarity",
"sec_num": "2.6"
},
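{
"text": "The three ingredients of the CKPD score can be sketched as follows (Equations 10-12); freq and max_freq stand for Web-1T-style counts (both assumed >= 1) and the helper names are ours:\n\nfrom math import log\n\ndef term_weight(freq, max_freq):\n    # Eq. 10: rarer terms get weights closer to 1.\n    return 1.0 - log(freq) / (1.0 + log(max_freq))\n\ndef h(ngram, heavy_ngrams, weights):\n    # Eq. 11: sum of the term weights if the n-gram is one of the heavy ones.\n    return sum(weights[t] for t in ngram) if ngram in heavy_ngrams else 0.0\n\ndef d_factor(gap_words, k=0.1):\n    # Eq. 12: 1 + k*ln(1+L), with L the word gap to the heaviest n-gram.\n    return 1.0 + k * log(1.0 + gap_words)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram based Similarity",
"sec_num": "2.6"
},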
{
"text": "In addition to the above text similarity measures, we used also the following common measures:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Other measures",
"sec_num": "2.7"
},
{
"text": "Given p = (w p 1 , . . . , w pn ) and q = (w q 1 , . . . , w qn ) the vectors of tf.idf weights associated to sentences p and q, the cosine distance is calculated as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cosine",
"sec_num": "2.7.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "sim cos (p, q) = n i=1 w p i \u00d7 w q i n i=1 w p i 2 \u00d7 n i=1 w q i 2",
"eq_num": "(13)"
}
],
"section": "Cosine",
"sec_num": "2.7.1"
},
{
"text": "The idf value was calculated on Google Web 1T.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cosine",
"sec_num": "2.7.1"
},
{
"text": "This similarity measure is calculated using the Levenshtein distance as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Edit Distance",
"sec_num": "2.7.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "sim ED (p, q) = 1 \u2212 Lev(p, q) max(|p|, |q|)",
"eq_num": "(14)"
}
],
"section": "Edit Distance",
"sec_num": "2.7.2"
},
{
"text": "where Lev(p, q) is the Levenshtein distance between the two sentences, taking into account the characters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Edit Distance",
"sec_num": "2.7.2"
},
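{
"text": "A compact sketch of Equation 14, with a standard dynamic-programming Levenshtein distance computed over characters:\n\ndef levenshtein(a, b):\n    prev = list(range(len(b) + 1))\n    for i, ca in enumerate(a, 1):\n        cur = [i]\n        for j, cb in enumerate(b, 1):\n            cur.append(min(prev[j] + 1,                  # deletion\n                           cur[j - 1] + 1,               # insertion\n                           prev[j - 1] + (ca != cb)))    # substitution\n        prev = cur\n    return prev[-1]\n\ndef sim_ed(p, q):\n    longest = max(len(p), len(q))\n    return 1.0 - levenshtein(p, q) / longest if longest else 1.0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Edit Distance",
"sec_num": "2.7.2"
},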
{
"text": "We used the Stanford Named Entity Recognizer by (Finkel et al., 2005) , with the 7 class model trained for MUC: Time, Location, Organization, Person, Money, Percent, Date. Then we calculated a per-class overlap measure (in this way, \"France\" as an Organization does not match \"France\" as a Location):",
"cite_spans": [
{
"start": 48,
"end": 69,
"text": "(Finkel et al., 2005)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Named Entity Overlap",
"sec_num": "2.7.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "O N ER (p, q) = 2 * |N p \u2229 N q | |N p | + |N q |",
"eq_num": "(15)"
}
],
"section": "Named Entity Overlap",
"sec_num": "2.7.3"
},
{
"text": "where N p and N q are the sets of NEs found, respectively, in sentences p and q.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Named Entity Overlap",
"sec_num": "2.7.3"
},
{
"text": "The integration has been carried out using the \u03bd-Support Vector Regression model (\u03bd-SVR) (Sch\u00f6lkopf et al., 1999) implementation provided by LIBSVM (Chang and Lin, 2011) , with a radial basis function kernel with the standard parameters (\u03bd = 0.5).",
"cite_spans": [
{
"start": 89,
"end": 113,
"text": "(Sch\u00f6lkopf et al., 1999)",
"ref_id": "BIBREF9"
},
{
"start": 148,
"end": 169,
"text": "(Chang and Lin, 2011)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Integration of Similarity Measures",
"sec_num": "3"
},
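{
"text": "A sketch of the combination step using scikit-learn's NuSVR (a wrapper around the same LIBSVM machinery) rather than the LIBSVM binaries themselves; the feature matrix and gold scores below are placeholders:\n\nimport numpy as np\nfrom sklearn.svm import NuSVR\n\nX_train = np.random.rand(200, 9)        # placeholder: one column per similarity measure\ny_train = np.random.uniform(0, 5, 200)  # placeholder: gold STS scores\n\nmodel = NuSVR(nu=0.5, kernel='rbf')     # nu = 0.5 and an RBF kernel, as in the paper\nmodel.fit(X_train, y_train)\npredictions = np.clip(model.predict(X_train), 0, 5)  # STS scores live in [0, 5]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integration of Similarity Measures",
"sec_num": "3"
},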
{
"text": "In order to evaluate the impact of the different features, we carried out an ablation test, removing one feature at a time and training a new model with the reduced set of features. In Table 2 The ablation test show that the IR-based feature showed up to be the most effective one, especially for the headlines subset (as expected), and, quite surprisingly, on the OnWN data. In Table 3 we show the correlation between each feature and the result (feature values normalised between 0 and 5): from this table we can also observe that, on average, IRbased similarity was better able to capture the semantic similarity between texts. The only exception was the FNWN test set: the IR-based similarity returned a 0 score 178 times out of 189 (94.1%), indicating that the indexed corpus did not fit the content of the FNWN sentences. This result shows also the limits of the IR-based similarity score which needs a large corpus to achieve enough coverage.",
"cite_spans": [],
"ref_spans": [
{
"start": 185,
"end": 192,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 379,
"end": 386,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
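{
"text": "The ablation protocol itself is straightforward; a sketch assuming scikit-learn and SciPy, with one feature column dropped per iteration:\n\nimport numpy as np\nfrom scipy.stats import pearsonr\nfrom sklearn.svm import NuSVR\n\ndef ablation(X_tr, y_tr, X_te, y_te, names):\n    base = pearsonr(NuSVR(nu=0.5).fit(X_tr, y_tr).predict(X_te), y_te)[0]\n    print(f'None removed: {base:.3f}')\n    for i, name in enumerate(names):\n        # Drop feature i, retrain, and measure the loss in Pearson correlation.\n        r = pearsonr(NuSVR(nu=0.5).fit(np.delete(X_tr, i, 1), y_tr)\n                     .predict(np.delete(X_te, i, 1)), y_te)[0]\n        print(f'{name}: {r:.3f} (loss {100 * (base - r) / base:.2f}%)')",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},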
{
"text": "One of the files submitted by INAOE-UPV, INAOE-UPV-run3 has been produced using seven features produced by different teams: INAOE, LIPN and UMCC-DLSI. We contributed to this joint submission with the IR-based, WordNet and cosine features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shared submission with INAOE-UPV",
"sec_num": "4.1"
},
{
"text": "In this paper we introduced the LIPN-CORE system, which combines semantic, syntactic an lexical measures of text similarity in a linear regression model. Our system was among the best 15 runs for the STS task. According to the ablation test, the best performing feature was the IR-based one, where a sentence is considered as a query and its meaning represented as a set of documents indexed by an IR system. The second and third best-performing measures were WordNet similarity and Levenshtein's edit distance. On the other hand, worst performing similarity measures were Named Entity Overlap, Syntactic Dependencies and ESA. However, a correlation analysis calculated on the features taken one-by-one shows that the contribution of a feature on the overall regression result does not correspond to the actual capability of the measure to represent the semantic similarity between the two texts. These results raise the methodological question of how to combine semantic, syntactic and lexical similarity measures in order to estimate the impact of the different strategies used on each dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Further Work",
"sec_num": "5"
},
{
"text": "Further work will include richer similarity measures, like quasi-synchronous grammars (Smith and Eisner, 2006) and random walks (Ramage et al., 2009) . Quasi-synchronous grammars have been used successfully for paraphrase detection (Das and Smith, 2009) , as they provide a fine-grained modeling of the alignment of syntactic structures, in a very flexible way, enabling partial alignments and the inclusion of external features, like Wordnet lexical relations for example. Random walks have been used effectively for paraphrase recognition and as a feature for recognizing textual entailment. Finally, we will continue analyzing the question of how to combine a wide variety of similarity measures in such a way that they tackle the semantic variations of each dataset.",
"cite_spans": [
{
"start": 86,
"end": 110,
"text": "(Smith and Eisner, 2006)",
"ref_id": null
},
{
"start": 128,
"end": 149,
"text": "(Ramage et al., 2009)",
"ref_id": "BIBREF8"
},
{
"start": 232,
"end": 253,
"text": "(Das and Smith, 2009)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Further Work",
"sec_num": "5"
},
{
"text": "http://www.d.umn.edu/\u02dctpederse/similarity.html 2 https://github.com/CNGLdlab/LORG-Release3 We used the default built-in converter provided with the Stanford Parser (2012-11-12 revision).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://code.google.com/p/research-esa/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank the Quaero project and the LabEx EFL 8 for their support to this work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Ukp: Computing semantic textual similarity by combining multiple content similarity measures",
"authors": [
{
"first": "[",
"middle": [],
"last": "References",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "B\u00e4r",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 6th International Workshop on Semantic Evaluation, held in conjunction with the 1st Joint Conference on Lexical and Computational Semantics",
"volume": "34",
"issue": "",
"pages": "113--134",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "References [B\u00e4r et al.2012] Daniel B\u00e4r, Chris Biemann, Iryna Gurevych, and Torsten Zesch. 2012. Ukp: Computing semantic textual similarity by combining multiple content similarity measures. In Proceedings of the 6th International Workshop on Semantic Evaluation, held in conjunction with the 1st Joint Conference on Lexical and Computational Semantics, pages 435-440, Montreal, Canada, June. [Brants and Franz2006] Thorsten Brants and Alex Franz. 2006. Web 1t 5-gram corpus version 1.1. [Buscaldi et al.2009] Davide Buscaldi, Paolo Rosso, Jos\u00e9 Manuel G\u00f3mez, and Emilio Sanchis. 2009. An- swering questions with an n-gram based passage re- trieval engine. Journal of Intelligent Information Sys- tems (JIIS), 34(2):113-134.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Irit: Textual similarity combining conceptual similarity with an n-gram comparison method",
"authors": [
{
"first": "[",
"middle": [],
"last": "Buscaldi",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 6th International Workshop on Semantic Evaluation",
"volume": "8",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Buscaldi et al.2012] Davide Buscaldi, Ronan Tournier, Nathalie Aussenac-Gilles, and Josiane Mothe. 2012. 8 http://www.labex-efl.org Irit: Textual similarity combining conceptual simi- larity with an n-gram comparison method. In Pro- ceedings of the 6th International Workshop on Se- mantic Evaluation (SemEval 2012), Montreal, Que- bec, Canada.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "LIBSVM: A library for support vector machines",
"authors": [
{
"first": "Chih-Chung",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Chih-Jen",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2011,
"venue": "and Lin2011",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "and Lin2011] Chih-Chung Chang and Chih-Jen Lin. 2011. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2:27:1-27:27. Software available at http://www.csie.ntu.edu.tw/\u02dccjlin/ libsvm.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Paraphrase identification as probabilistic quasisynchronous recognition",
"authors": [
{
"first": "Smith2009] Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith ; Dudognon",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Colloque Veille Strat\u00e9gique Scientifique et Technologique (VSST 2010)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "and Smith2009] Dipanjan Das and Noah A. Smith. 2009. Paraphrase identification as probabilistic quasi- synchronous recognition. In Proc. of ACL-IJCNLP. [Dudognon et al.2010] Damien Dudognon, Gilles Hubert, and Bachelin Jhonn Victorino Ralalason. 2010. Proxig\u00e9n\u00e9a : Une mesure de similarit\u00e9 conceptuelle. In Proceedings of the Colloque Veille Strat\u00e9gique Sci- entifique et Technologique (VSST 2010).",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Incorporating nonlocal information into information extraction systems by gibbs sampling",
"authors": [
{
"first": "",
"middle": [],
"last": "Finkel",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, ACL '05",
"volume": "",
"issue": "",
"pages": "1606--1611",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Finkel et al.2005] Jenny Rose Finkel, Trond Grenager, and Christopher Manning. 2005. Incorporating non- local information into information extraction systems by gibbs sampling. In Proceedings of the 43rd An- nual Meeting on Association for Computational Lin- guistics, ACL '05, pages 363-370, Stroudsburg, PA, USA. Association for Computational Linguistics. [Gabrilovich and Markovitch2007] Evgeniy Gabrilovich and Shaul Markovitch. 2007. Computing seman- tic relatedness using wikipedia-based explicit semantic analysis. In Proceedings of the 20th international joint conference on Artifical intelligence, IJCAI'07, pages 1606-1611, San Francisco, CA, USA. Morgan Kauf- mann Publishers Inc.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Semantic similarity based on corpus statistics and lexical taxonomy",
"authors": [
{
"first": "[",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "] J",
"middle": [
"J"
],
"last": "Conrath1997",
"suffix": ""
},
{
"first": "D",
"middle": [
"W"
],
"last": "Jiang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Conrath",
"suffix": ""
}
],
"year": 1997,
"venue": "Proc. of the Int'l. Conf. on Research in Computational Linguistics",
"volume": "",
"issue": "",
"pages": "19--33",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Jiang and Conrath1997] J.J. Jiang and D.W. Conrath. 1997. Semantic similarity based on corpus statistics and lexical taxonomy. In Proc. of the Int'l. Conf. on Research in Computational Linguistics, pages 19-33.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Rada Mihalcea, Courtney Corley, and Carlo Strapparava. 2006. Corpus-based and knowledge-based measures of text semantic similarity",
"authors": [
{
"first": "",
"middle": [],
"last": "[le Roux",
"suffix": ""
}
],
"year": 2012,
"venue": "The NAACL 2012 First Workshop on Syntactic Analysis of Non-Canonical Language (SANCL)",
"volume": "1",
"issue": "",
"pages": "775--780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Le Roux et al.2012] Joseph Le Roux, Jennifer Foster, Joachim Wagner, Rasul Samad Zadeh Kaljahi, and Anton Bryl. 2012. DCU-Paris13 Systems for the SANCL 2012 Shared Task. In The NAACL 2012 First Workshop on Syntactic Analysis of Non-Canonical Language (SANCL), pages 1-4, Montr\u00e9al, Canada, June. [Mihalcea et al.2006] Rada Mihalcea, Courtney Corley, and Carlo Strapparava. 2006. Corpus-based and knowledge-based measures of text semantic similarity. In Proceedings of the 21st national conference on Ar- tificial intelligence -Volume 1, AAAI'06, pages 775- 780. AAAI Press.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Wordnet::similarity: measuring the relatedness of concepts",
"authors": [
{
"first": "",
"middle": [],
"last": "Pedersen",
"suffix": ""
}
],
"year": 2004,
"venue": "Demonstration Papers at HLT-NAACL 2004, HLT-NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Pedersen et al.2004] Ted Pedersen, Siddharth Patward- han, and Jason Michelizzi. 2004. Wordnet::similarity: measuring the relatedness of concepts. In Demon- stration Papers at HLT-NAACL 2004, HLT-NAACL-",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Random walks for text semantic similarity",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Ramage",
"suffix": ""
},
{
"first": "Anna",
"middle": [
"N"
],
"last": "Rafferty",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 2009 Workshop on Graph-based Methods for Natural Language Processing",
"volume": "",
"issue": "",
"pages": "23--31",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Demonstrations '04, pages 38-41, Stroudsburg, PA, USA. Association for Computational Linguistics. [Ramage et al.2009] Daniel Ramage, Anna N. Rafferty, and Christopher D. Manning. 2009. Random walks for text semantic similarity. In Proceedings of the 2009 Workshop on Graph-based Methods for Natural Language Processing, pages 23-31. The Association for Computer Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Quasi-synchronous grammars: Alignment by soft projection of syntactic dependencies",
"authors": [
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
},
{
"first": ". ; Bernhard",
"middle": [],
"last": "Sch\u00f6lkopf",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Bartlett",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Smola",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Williamson",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the 1998 conference on Advances in neural information processing systems II",
"volume": "1",
"issue": "",
"pages": "23--30",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philip Resnik. 1995. Using information content to evaluate semantic similarity in a taxonomy. In Proceedings of the 14th international joint confer- ence on Artificial intelligence -Volume 1, IJCAI'95, pages 448-453, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc. [Sch\u00f6lkopf et al.1999] Bernhard Sch\u00f6lkopf, Peter Bartlett, Alex Smola, and Robert Williamson. 1999. Shrinking the tube: a new support vector regression algorithm. In Proceedings of the 1998 conference on Advances in neural information processing systems II, pages 330-336, Cambridge, MA, USA. MIT Press. [Smith and Eisner2006] David A. Smith and Jason Eisner. 2006. Quasi-synchronous grammars: Alignment by soft projection of syntactic dependencies. In Proceed- ings of the HLT-NAACL Workshop on Statistical Ma- chine Translation, pages 23-30, New York, June. [Sorg and Cimiano2008] Philipp Sorg and Philipp Cimi- ano. 2008. Cross-lingual Information Retrieval with Explicit Semantic Analysis. In Working Notes for the CLEF 2008 Workshop.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Verbs semantics and lexical selection",
"authors": [
{
"first": "Palmer1994] Zhibiao",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the 32nd annual meeting on Association for Computational Linguistics, ACL '94",
"volume": "",
"issue": "",
"pages": "133--138",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "and Palmer1994] Zhibiao Wu and Martha Palmer. 1994. Verbs semantics and lexical selection. In Pro- ceedings of the 32nd annual meeting on Association for Computational Linguistics, ACL '94, pages 133- 138, Stroudsburg, PA, USA. Association for Compu- tational Linguistics.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"num": null,
"text": "we show the results of the ablation test for each subset of the *SEM 2013 test set; in Table 1 we show the same test on the whole test set. Note: the results have been calculated as the Pearson correlation test on the whole test set and not as an average of the correlation scores calculated over the composing test sets.",
"html": null,
"content": "<table><tr><td colspan=\"3\">Feature Removed Pearson Loss</td></tr><tr><td>None</td><td>0.597</td><td>0</td></tr><tr><td>N-grams</td><td>0.596</td><td>0.10%</td></tr><tr><td>WordNet</td><td>0.563</td><td>3.39%</td></tr><tr><td>SyntDeps</td><td colspan=\"2\">0.602 \u22120.43%</td></tr><tr><td>Edit</td><td>0.584</td><td>1.31%</td></tr><tr><td>Cosine</td><td>0.596</td><td>0.10%</td></tr><tr><td>NE Overlap</td><td colspan=\"2\">0.603 \u22120.53%</td></tr><tr><td>IC-based</td><td colspan=\"2\">0.598 \u22120.10%</td></tr><tr><td>IR-Similarity</td><td>0.510</td><td>8.78%</td></tr><tr><td>ESA</td><td colspan=\"2\">0.601 \u22120.38%</td></tr></table>",
"type_str": "table"
},
"TABREF1": {
"num": null,
"text": "Ablation test for the different features on the whole 2013 test set.",
"html": null,
"content": "<table><tr><td/><td colspan=\"2\">FNWN</td><td>Headlines</td><td/><td colspan=\"2\">OnWN</td><td>SMT</td><td/></tr><tr><td>Feature</td><td colspan=\"2\">Pearson Loss</td><td>Pearson Loss</td><td/><td colspan=\"2\">Pearson Loss</td><td colspan=\"2\">Pearson Loss</td></tr><tr><td>None</td><td>0.404</td><td>0</td><td>0.706</td><td>0</td><td>0.694</td><td>0</td><td>0.301</td><td>0</td></tr><tr><td>N-grams</td><td>0.379</td><td>2.49%</td><td colspan=\"2\">0.705 0.12%</td><td colspan=\"2\">0.698 \u22120.44%</td><td>0.289</td><td>1.16%</td></tr><tr><td>WordNet</td><td>0.376</td><td>2.80%</td><td colspan=\"2\">0.695 1.09%</td><td>0.682</td><td>1.17%</td><td>0.278</td><td>2.28%</td></tr><tr><td>SyntDeps</td><td>0.403</td><td>0.08%</td><td colspan=\"2\">0.699 0.70%</td><td>0.679</td><td>1.49%</td><td>0.284</td><td>1.62%</td></tr><tr><td>Edit</td><td>0.402</td><td>0.19%</td><td colspan=\"2\">0.689 1.70%</td><td>0.667</td><td>2.72%</td><td>0.286</td><td>1.50%</td></tr><tr><td>Cosine</td><td>0.393</td><td>1.03%</td><td colspan=\"2\">0.683 2.38%</td><td>0.676</td><td>1.80%</td><td colspan=\"2\">0.303 \u22120.24%</td></tr><tr><td>NE Overlap</td><td colspan=\"2\">0.410 \u22120.61%</td><td colspan=\"2\">0.700 0.67%</td><td>0.680</td><td>1.37%</td><td>0.285</td><td>1.58%</td></tr><tr><td>IC-based</td><td>0.391</td><td>1.26%</td><td colspan=\"2\">0.699 0.75%</td><td>0.669</td><td>2.50%</td><td>0.283</td><td>1.76%</td></tr><tr><td>IR-Similarity</td><td colspan=\"2\">0.426 \u22122.21%</td><td colspan=\"2\">0.633 7.33%</td><td colspan=\"2\">0.589 10.46%</td><td>0.249</td><td>5.19%</td></tr><tr><td>ESA</td><td>0.391</td><td>1.22%</td><td colspan=\"2\">0.691 1.57%</td><td colspan=\"2\">0.702 \u22120.81%</td><td>0.275</td><td>2.54%</td></tr></table>",
"type_str": "table"
},
"TABREF2": {
"num": null,
"text": "Ablation test for the different features on the different parts of the 2013 test set.",
"html": null,
"content": "<table><tr><td/><td colspan=\"3\">FNWN Headlines OnWN</td><td>SMT</td><td>ALL</td></tr><tr><td>N-grams</td><td>0.285</td><td>0.532</td><td colspan=\"2\">0.459 0.280 0.336</td></tr><tr><td>WordNet</td><td>0.395</td><td>0.606</td><td colspan=\"2\">0.552 0.282 0.477</td></tr><tr><td>SyntDeps</td><td>0.233</td><td>0.409</td><td colspan=\"2\">0.345 0.323 0.295</td></tr><tr><td>Edit</td><td>0.220</td><td>0.536</td><td colspan=\"2\">0.089 0.355 0.230</td></tr><tr><td>Cosine</td><td>0.306</td><td>0.573</td><td colspan=\"2\">0.541 0.244 0.382</td></tr><tr><td>NE Overlap</td><td>0.000</td><td>0.216</td><td colspan=\"2\">0.000 0.013 0.020</td></tr><tr><td>IC-based</td><td>0.413</td><td>0.540</td><td colspan=\"2\">0.642 0.285 0.421</td></tr><tr><td>IR-based</td><td>0.067</td><td>0.598</td><td colspan=\"2\">0.628 0.241 0.541</td></tr><tr><td>ESA</td><td>0.328</td><td>0.546</td><td colspan=\"2\">0.322 0.289 0.390</td></tr></table>",
"type_str": "table"
},
"TABREF3": {
"num": null,
"text": "Pearson correlation calculated on individual features.",
"html": null,
"content": "<table/>",
"type_str": "table"
}
}
}
}