{
"paper_id": "S14-2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:33:16.663992Z"
},
"title": "BUAP: Evaluating Compositional Distributional Semantic Models on Full Sentences through Semantic Relatedness and Textual Entailment",
"authors": [
{
"first": "Sa\u00fal",
"middle": [],
"last": "Le\u00f3n",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Darnes",
"middle": [],
"last": "Vilari\u00f1o",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "David",
"middle": [],
"last": "Pinto",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Mireya",
"middle": [],
"last": "Tovar",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Beatriz",
"middle": [],
"last": "Beltr\u00e1n",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The results obtained by the BUAP team at Task 1 of SemEval 2014 are presented in this paper. The run submitted is a supervised version based on two classification models: 1) We used logistic regression for determining the semantic relatedness between a pair of sentences, and 2) We employed support vector machines for identifying textual entailment degree between the two sentences. The behaviour for the second subtask (textual entailment) obtained much better performance than the one evaluated at the first subtask (relatedness), ranking our approach in the 7th position of 18 teams that participated at the competition. This work is licensed under a Creative Commons Attribution 4.0 International Licence.",
"pdf_parse": {
"paper_id": "S14-2021",
"_pdf_hash": "",
"abstract": [
{
"text": "The results obtained by the BUAP team at Task 1 of SemEval 2014 are presented in this paper. The run submitted is a supervised version based on two classification models: 1) We used logistic regression for determining the semantic relatedness between a pair of sentences, and 2) We employed support vector machines for identifying textual entailment degree between the two sentences. The behaviour for the second subtask (textual entailment) obtained much better performance than the one evaluated at the first subtask (relatedness), ranking our approach in the 7th position of 18 teams that participated at the competition. This work is licensed under a Creative Commons Attribution 4.0 International Licence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The Compositional Distributional Semantic Models (CDSM) applied to sentences aim to approximate the meaning of those sentences with vectors summarizing their patterns of co-occurrence in corpora. In the Task 1 of SemEval 2014, the organizers aimed to evaluate the performance of this kind of models through the following two tasks: semantic relatedness and textual entailment. Semantic relatedness captures the degree of semantic similarity, in this case, between a pair of sentences, whereas textual entailment allows to determine the entailment relation holding between two sentences. This document is a description paper, therefore, we focus the rest of it on the features and models we used for carrying out the experiments. A complete description of the task and the dataset used are given in Marelli et al. (2014a) and in Marelli et al. (2014b) , respectively.",
"cite_spans": [
{
"start": 798,
"end": 824,
"text": "Marelli et al. (2014a) and",
"ref_id": null
},
{
"start": 825,
"end": 850,
"text": "in Marelli et al. (2014b)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The remaining of this paper is structured as follows. In Section 2 we describe the general model we used for comparing two sentences and the set of the features used for constructing the vectorial representation for each sentence. Section 3 shows how we integrate the features calculated in a single vector which fed a supervised classifier aiming to construct a classication model that solves the two aforementioned problems: semantic relatedness and textual entailment. In the same section we show the obtained results. Finally, in Section 4 we present our findings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Given a sentence S = w 1 w 2 \u2022 \u2022 \u2022 w |S| , with w i a sentence word, we have calculated different correlated terms (t i,j ) or a numeric vector (V i ) for each word w i as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Description of the Distributional Semantic Model Used",
"sec_num": "2"
},
{
"text": "1. $\\{t_{i,j} \\mid relation(t_{i,j}, w_i)\\}$, where \"relation\" is one of the following dependency relations: \"object\", \"subject\" or \"property\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Description of the Distributional Semantic Model Used",
"sec_num": "2"
},
{
"text": "2. $\\{t_{i,j} \\mid t_{i,j} = c_k \\cdots c_{k+n}\\}$ with $n = 2, \\ldots, 5$ and $c_k \\in w_i$; these tokens are also known as n-grams of length $n$.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Description of the Distributional Semantic Model Used",
"sec_num": "2"
},
{
"text": "3. $\\{t_{i,j} \\mid t_{i,j} = c_k \\cdots c_{k+(n-1) \\cdot r}\\}$ with $n = 2, \\ldots, 5$, $r = 2, \\ldots, 5$, and $c_k \\in w_i$; these tokens are also known as skip-grams of length $n$ (see the sketch after this list).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Description of the Distributional Semantic Model Used",
"sec_num": "2"
},
{
"text": "4. V i is obtained by applying the Latent Semantic Analysis (LSA) algorithm implemented in the R software environment for statistical computing and graphics. V i is basically a vector of values that represent relation of the word w i with it context, calculated by using a corpus constructed by us, by integrating information from Europarl, Project-Gutenberg and Open Office Thesaurus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Description of the Distributional Semantic Model Used",
"sec_num": "2"
},
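{
"text": "A minimal Python sketch of the character n-gram and skip-gram extraction defined in items 2 and 3 above. This is our own illustration (the paper does not specify an implementation), and the function names and example sentence are ours:\n\ndef char_ngrams(word, n):\n    # Contiguous character n-grams c_k ... c_{k+n-1} within a word.\n    return [word[i:i + n] for i in range(len(word) - n + 1)]\n\ndef char_skipgrams(word, n, r):\n    # n characters taken every r positions: c_k, c_{k+r}, ..., c_{k+(n-1)*r}.\n    span = (n - 1) * r + 1\n    return [word[i:i + span:r] for i in range(len(word) - span + 1)]\n\ntokens = set()\nfor w in 'a dog runs quickly'.split():\n    for n in range(2, 6):\n        tokens.update(char_ngrams(w, n))\n        for r in range(2, 6):\n            tokens.update(char_skipgrams(w, n, r))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Description of the Distributional Semantic Model Used",
"sec_num": "2"
},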
{
"text": "Once each sentence has been represented by means of a vectorial representation of patterns, we constructed a single vector, \u2212 \u2192 u , for each pair of sentences with the aim of capturing the semantic relatedness on the basis of a training corpus. The entries of this representation vector are calculated by obtaining the semantic similarity between each pair of sentences, using each of the DSM shown in the previous section. In order to calculate each entry, we have found the maximum similarity between each word of the first sentence with respect to the second sentence and, thereafter, we have added all these values, thus,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Classification Model for Semantic Relatedness and Textual Entailment based on DSM",
"sec_num": "3"
},
{
"text": "\u2212 \u2192 u = {f 1 , \u2022 \u2022 \u2022 , f 9 }.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Classification Model for Semantic Relatedness and Textual Entailment based on DSM",
"sec_num": "3"
},
{
"text": "Given a pair of sentences $S_1 = w_{1,1} w_{2,1} \\cdots w_{|S_1|,1}$ and $S_2 = w_{1,2} w_{2,2} \\cdots w_{|S_2|,2}$, such that each $w_{i,k}$ is represented according to the correlated terms or numeric vectors established in Section 2, the entry $f_l$ of $\\vec{u}$ is calculated as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Classification Model for Semantic Relatedness and Textual Entailment based on DSM",
"sec_num": "3"
},
{
"text": "f l = |S 1 | i=1 max{sim(w i,1 , w j,2 )}, with j = 1, \u2022 \u2022 \u2022 , |S 2 |.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Classification Model for Semantic Relatedness and Textual Entailment based on DSM",
"sec_num": "3"
},
{
"text": "The specific similarity measure (sim()) and the correlated term or numeric vector used for each f l is described as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Classification Model for Semantic Relatedness and Textual Entailment based on DSM",
"sec_num": "3"
},
{
"text": "1. f 1 : w i,k is the \"object\" of w i (as defined in 2), and sim() is the maximum similarity obtained by using the following six Word-Net similarity metrics offered by NLTK: Leacock & Chodorow (Leacock and Chodorow, 1998) , Lesk (Lesk, 1986) , Wu & Palmer (Wu and Palmer, 1994) , Resnik (Resnik, 1995) , Lin (Lin, 1998) , and Jiang & Conrath 1 (Jiang and Conrath, 1997).",
"cite_spans": [
{
"start": 193,
"end": 221,
"text": "(Leacock and Chodorow, 1998)",
"ref_id": "BIBREF1"
},
{
"start": 224,
"end": 241,
"text": "Lesk (Lesk, 1986)",
"ref_id": null
},
{
"start": 244,
"end": 277,
"text": "Wu & Palmer (Wu and Palmer, 1994)",
"ref_id": null
},
{
"start": 280,
"end": 301,
"text": "Resnik (Resnik, 1995)",
"ref_id": null
},
{
"start": 308,
"end": 319,
"text": "(Lin, 1998)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Classification Model for Semantic Relatedness and Textual Entailment based on DSM",
"sec_num": "3"
},
{
"text": "2. f 2 : w i,k is the \"subject\" of w i , and sim() is the maximum similarity obtained by using the same six WordNet similarity metrics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Classification Model for Semantic Relatedness and Textual Entailment based on DSM",
"sec_num": "3"
},
{
"text": "3. f 3 : w i,k is the \"property\" of w i , and sim() is the maximum similarity obtained by using the same six WordNet similarity metrics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Classification Model for Semantic Relatedness and Textual Entailment based on DSM",
"sec_num": "3"
},
{
"text": "4. f 4 : w i,k is an n-gram containing w i , and sim() is the cosine similarity measure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Classification Model for Semantic Relatedness and Textual Entailment based on DSM",
"sec_num": "3"
},
{
"text": "5. f 5 : w i,k is an skip-gram containing w i , and sim() is the cosine similarity measure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Classification Model for Semantic Relatedness and Textual Entailment based on DSM",
"sec_num": "3"
},
{
"text": "6. f 6 : w i,k is numeric vector obtained with LSA, and sim() is the Rada Mihalcea semantic similarity measure (Mihalcea et al., 2006) .",
"cite_spans": [
{
"start": 111,
"end": 134,
"text": "(Mihalcea et al., 2006)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Classification Model for Semantic Relatedness and Textual Entailment based on DSM",
"sec_num": "3"
},
{
"text": "7. f 7 : w i,k is numeric vector obtained with LSA, and sim() is the cosine similarity measure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Classification Model for Semantic Relatedness and Textual Entailment based on DSM",
"sec_num": "3"
},
{
"text": "8. f 8 : w i,k is numeric vector obtained with LSA, and sim() is the euclidean distance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Classification Model for Semantic Relatedness and Textual Entailment based on DSM",
"sec_num": "3"
},
{
"text": "9. f 9 : w i,k is numeric vector obtained with LSA, and sim() is the Chebyshev distance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Classification Model for Semantic Relatedness and Textual Entailment based on DSM",
"sec_num": "3"
},
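{
"text": "As a hedged reconstruction (not the authors' code, which is unspecified), the sketch below computes $sim()$ for features $f_1$ to $f_3$ as the maximum over several NLTK WordNet metrics, together with the sum-of-maxima aggregation $f_l$ defined above. We omit Lesk, which NLTK exposes as a word-sense disambiguation function rather than a similarity score, and we do not normalize the differing scales of the metrics:\n\nfrom itertools import product\nfrom nltk.corpus import wordnet as wn, wordnet_ic\n\nic = wordnet_ic.ic('ic-brown.dat')  # information content for Resnik, Lin, Jiang & Conrath\n\ndef wordnet_sim(word_a, word_b):\n    # Maximum similarity over the metrics and over all synset pairs.\n    metrics = [\n        lambda a, b: a.lch_similarity(b),      # Leacock & Chodorow\n        lambda a, b: a.wup_similarity(b),      # Wu & Palmer\n        lambda a, b: a.res_similarity(b, ic),  # Resnik\n        lambda a, b: a.lin_similarity(b, ic),  # Lin\n        lambda a, b: a.jcn_similarity(b, ic),  # Jiang & Conrath\n    ]\n    best = 0.0\n    for s1, s2 in product(wn.synsets(word_a), wn.synsets(word_b)):\n        for m in metrics:\n            try:\n                score = m(s1, s2)\n            except Exception:  # e.g. mismatched parts of speech\n                continue\n            if score is not None:\n                best = max(best, score)\n    return best\n\ndef feature_entry(sent1, sent2, sim=wordnet_sim):\n    # f_l: for each word of S1, take its best match in S2, then sum.\n    return sum(max(sim(w1, w2) for w2 in sent2) for w1 in sent1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Classification Model for Semantic Relatedness and Textual Entailment based on DSM",
"sec_num": "3"
},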
{
"text": "All these 9 features were introduced to a logistic regression classifier in order to obtain a classification model which allows us to determine the value of relatedness between a new pair of sentences 2 . Here, we use as supervised class, the value of relatedness given to each pair of sentences on the training corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Classification Model for Semantic Relatedness and Textual Entailment based on DSM",
"sec_num": "3"
},
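{
"text": "A brief sketch of this classification step under stated assumptions: the paper used Weka's logistic regression with default settings, so scikit-learn is only a stand-in here; train_vectors, train_scores and test_vectors are assumed to be precomputed; and rounding the gold score to obtain a class label is our guess, since the paper does not say how the continuous relatedness value served as a class:\n\nimport numpy as np\nfrom sklearn.linear_model import LogisticRegression\n\n# X: one 9-dimensional vector u = (f_1, ..., f_9) per training pair.\nX = np.array(train_vectors)\n# y: gold relatedness scores, discretized so they can act as class labels.\ny = np.rint(train_scores).astype(int)\n\nmodel = LogisticRegression(max_iter=1000).fit(X, y)\npredicted = model.predict(np.array(test_vectors))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Classification Model for Semantic Relatedness and Textual Entailment based on DSM",
"sec_num": "3"
},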
{
"text": "The obtained results for the relatedness subtask are given in Table 1 . In columns 2, 3 and 5, a large value signals a more efficient system, but a large MSE (column 4) means a less efficient system. As can be seen, our run obtained the rank 12 of 17, with values slightly below the overall average.",
"cite_spans": [],
"ref_spans": [
{
"start": 62,
"end": 69,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "A Classification Model for Semantic Relatedness and Textual Entailment based on DSM",
"sec_num": "3"
},
{
"text": "In order to calculate the textual entailment judgment, we have enriched the vectorial representation previously mentioned with synonyms, antonyms and cue- words (\"no\", \"not\", \"nobody\" and \"none\") for detecting negation at the sentences 3 . Thus, if some of these new features exist on the training pair of sentences, we add a boolean value of 1, otherwise we set the feature to zero. This new set of vectors is introduced to a support vector machine classifier 4 , using as class the textual entailment judgment given on the training corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Textual Entailment",
"sec_num": "3.1"
},
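{
"text": "A minimal sketch of this feature extension, assuming synonym and antonym lookups are available as dictionaries; the paper used Weka's support vector machine with default settings, so scikit-learn's SVC is only a stand-in, the exact layout of the boolean indicators is our assumption, and X_entail, y_entail and X_entail_test are assumed to be precomputed:\n\nimport numpy as np\nfrom sklearn.svm import SVC\n\nNEGATION_CUES = {'no', 'not', 'nobody', 'none'}\n\ndef entailment_vector(base, sent1, sent2, synonyms, antonyms):\n    # Extend the nine similarity features with boolean (1/0) indicators for\n    # synonym pairs, antonym pairs, and negation cue-words across the pair.\n    w1, w2 = set(sent1), set(sent2)\n    has_syn = any(b in synonyms.get(a, ()) for a in w1 for b in w2)\n    has_ant = any(b in antonyms.get(a, ()) for a in w1 for b in w2)\n    has_neg = bool(NEGATION_CUES & (w1 | w2))\n    return base + [int(has_syn), int(has_ant), int(has_neg)]\n\n# y_entail: gold judgments (ENTAILMENT / CONTRADICTION / NEUTRAL).\nclf = SVC().fit(np.array(X_entail), y_entail)\npredictions = clf.predict(np.array(X_entail_test))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Textual Entailment",
"sec_num": "3.1"
},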
{
"text": "The obtained results for the textual entailment subtask are given in Table 2 . Our run obtained the rank 7 of 18, with values above the overall average. We consider that this improvement over the relatedness task was a result of using other features that are quite important for semantic relatedness, such as lexical relations (synonyms and antonyms), and the consideration of the negation phenomenon in the sentences.",
"cite_spans": [],
"ref_spans": [
{
"start": 69,
"end": 76,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Textual Entailment",
"sec_num": "3.1"
},
{
"text": "This paper describes the use of compositional distributional semantic models for solving the problems of semantic relatedness and textual entailment. We proposed different features and measures for that purpose. The obtained results show a competitive approach that may be further improved by considering more lexical relations or other type of semantic similarity measures.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "4"
},
{
"text": "In general, we obtained the 7th place in the official ranking list from a total of 18 teams that participated at the textual entailment subtask. The result at the semantic relatedness subtask could be improved if we were considered to add the new features taken into consideration at the textual entailment subtask, an idea that we will implement in the future.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "4"
},
{
"text": "Natural Language Toolkit of Python; http://www.nltk.org/ 2 We have employed the Weka tool with the default settings for this purpose",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Synonyms were extracted from WordNet, whereas the antonyms were collected from Wikipedia.4 Again, we have employed the weka tool with the default settings for this purpose.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Semantic similarity based on corpus statistics and lexical taxonomy",
"authors": [
{
"first": "J",
"middle": [],
"last": "Jay",
"suffix": ""
},
{
"first": "David",
"middle": [
"W"
],
"last": "Jiang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Conrath",
"suffix": ""
}
],
"year": 1997,
"venue": "Proc of 10th International Conference on Research in Computational Linguistics, RO-CLING'97",
"volume": "",
"issue": "",
"pages": "19--33",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jay J. Jiang and David W. Conrath. Semantic simi- larity based on corpus statistics and lexical taxon- omy. In Proc of 10th International Conference on Research in Computational Linguistics, RO- CLING'97, pages 19-33, 1997.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Combining local context and wordnet similarity for word sense identification",
"authors": [
{
"first": "Claudia",
"middle": [],
"last": "Leacock",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Chodorow",
"suffix": ""
}
],
"year": 1998,
"venue": "Christiane Fellfaum, editor",
"volume": "",
"issue": "",
"pages": "265--283",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Claudia Leacock and Martin Chodorow. Combin- ing local context and wordnet similarity for word sense identification. In Christiane Fellfaum, edi- tor, MIT Press, pages 265-283, 1998.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"html": null,
"content": "<table><tr><td>TEAM ID</td><td colspan=\"2\">PEARSON SPEARMAN</td><td>MSE</td><td>Rank</td></tr><tr><td>ECNU run1</td><td>0.82795</td><td>0.76892</td><td>0.32504</td><td>1</td></tr><tr><td>StanfordNLP run5</td><td>0.82723</td><td>0.75594</td><td>0.32300</td><td>2</td></tr><tr><td>The Meaning Factory run1</td><td>0.82680</td><td>0.77219</td><td>0.32237</td><td>3</td></tr><tr><td>UNAL-NLP run1</td><td>0.80432</td><td>0.74582</td><td>0.35933</td><td>4</td></tr><tr><td>Illinois-LH run1</td><td>0.79925</td><td>0.75378</td><td>0.36915</td><td>5</td></tr><tr><td>CECL ALL run1</td><td>0.78044</td><td>0.73166</td><td>0.39819</td><td>6</td></tr><tr><td>SemantiKLUE run1</td><td>0.78019</td><td>0.73598</td><td>0.40347</td><td>7</td></tr><tr><td>CNGL run1</td><td>0.76391</td><td>0.68769</td><td>0.42906</td><td>8</td></tr><tr><td>UTexas run1</td><td>0.71455</td><td>0.67444</td><td>0.49900</td><td>9</td></tr><tr><td>UoW run1</td><td>0.71116</td><td>0.67870</td><td>0.51137</td><td>10</td></tr><tr><td>FBK-TR run3</td><td>0.70892</td><td>0.64430</td><td>0.59135</td><td>11</td></tr><tr><td>BUAP run1</td><td>0.69698</td><td>0.64524</td><td>0.52774</td><td>12</td></tr><tr><td>UANLPCourse run2</td><td>0.69327</td><td>0.60269</td><td>0.54225</td><td>13</td></tr><tr><td>UQeResearch run1</td><td>0.64185</td><td>0.62565</td><td>0.82252</td><td>14</td></tr><tr><td>ASAP run1</td><td>0.62780</td><td>0.59709</td><td>0.66208</td><td>15</td></tr><tr><td>Yamraj run1</td><td>0.53471</td><td>0.53561</td><td>2.66520</td><td>16</td></tr><tr><td>asjai run5</td><td>0.47952</td><td>0.46128</td><td>1.10372</td><td>17</td></tr><tr><td>overall average</td><td>0.71876</td><td>0.67159</td><td>0.63852</td><td>8-9</td></tr><tr><td>Our difference against the overall average</td><td>-2%</td><td>-3%</td><td>11%</td><td>-</td></tr></table>",
"num": null,
"type_str": "table",
"text": "Results obtained at the substask \"Relatedness\" of the Semeval 2014 Task 1"
}
}
}
}