{
"paper_id": "S07-1043",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:23:00.605685Z"
},
"title": "JU-SKNSB: Extended WordNet Based WSD on the English All-Words Task at SemEval-1",
"authors": [
{
"first": "Sudip",
"middle": [],
"last": "Kumar Naskar",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Jadavpur University",
"location": {
"settlement": "Kolkata",
"country": "India"
}
},
"email": "[email protected]"
},
{
"first": "Sivaji",
"middle": [],
"last": "Bandyopadhyay",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Jadavpur University",
"location": {
"settlement": "Kolkata",
"country": "India"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper presents an Extended WordNet based word sense disambiguation system using a major modification to the Lesk algorithm. The algorithm tries to disambiguate nouns, verbs and adjectives. The algorithm relies on the POS-sense tagged synset glosses provided by the Extended WordNet. The basic unit of disambiguation of our algorithm is the entire sentence under consideration. It takes a global approach where all the words in the target sentence are simultaneously disambiguated. The context includes previous and next sentence. The system assigns the default WordNet first sense to a word when the algorithm fails to predict the sense of the word. The system produces a precision and recall of .402 on the SemEval-2007 English All-Words test data.",
"pdf_parse": {
"paper_id": "S07-1043",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper presents an Extended WordNet based word sense disambiguation system using a major modification to the Lesk algorithm. The algorithm tries to disambiguate nouns, verbs and adjectives. The algorithm relies on the POS-sense tagged synset glosses provided by the Extended WordNet. The basic unit of disambiguation of our algorithm is the entire sentence under consideration. It takes a global approach where all the words in the target sentence are simultaneously disambiguated. The context includes previous and next sentence. The system assigns the default WordNet first sense to a word when the algorithm fails to predict the sense of the word. The system produces a precision and recall of .402 on the SemEval-2007 English All-Words test data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In Senseval 1, most of the systems disambiguating English words, were outperformed by a Lesk variant serving as baseline (Kilgariff & Rosenzweig, 2000) . On the other hand, during Senseval 2 and Senseval 3, Lesk baselines were outperformed by most of the systems in the lexical sample track (Edmonds, 2002) .",
"cite_spans": [
{
"start": 121,
"end": 151,
"text": "(Kilgariff & Rosenzweig, 2000)",
"ref_id": null
},
{
"start": 291,
"end": 306,
"text": "(Edmonds, 2002)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we explore variants of the Lesk algorithm on the English All Words SemEval 2007 test data (465 instances), as well as on the first 10 Semcor 2.0 files (9642 instances). The proposed WSD algorithm is POS-sense-tagged gloss (from Extended WordNet) based and is a major modification of the original Lesk algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The eXtended WordNet (Harabagiu et al., 1999) project aims to transform the WordNet glosses into a format that allows the derivation of additional semantic and logic relations. It intends to syntactically parse the glosses, transform glosses into logical forms and tag semantically the nouns, verbs, adjectives and adverbs of the glosses automatically. The last release of the Extended WordNet is based on WordNet 2.0 and has three stages: POS tagging and parsing, logic form transformation, and semantic disambiguation. Banerjee and Pedersen (2002) reports an adaptation of Lesk's dictionary-based WSD algorithm which makes use of WordNet glosses and tests on English lexical sample from SENSEVAL-2. They define overlap as the longest sequence of one or more consecutive content words that occurs in both glosses. Each overlap contributes a score equal to the square of the number of words in the overlap.",
"cite_spans": [
{
"start": 21,
"end": 45,
"text": "(Harabagiu et al., 1999)",
"ref_id": "BIBREF7"
},
{
"start": 521,
"end": 549,
"text": "Banerjee and Pedersen (2002)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Extended WordNet",
"sec_num": "2"
},
{
"text": "A version of Lesk algorithm in combination with WordNet has been reported for achieving good results in (Ramakrishnan et al., 2004) . Vasilescu et al. (2004) carried on a series of experiments on the Lesk algorithm, adapted to WordNet, and on some variants. They studied the effect of varying the number of words in the contexts, centered around the target word.",
"cite_spans": [
{
"start": 104,
"end": 131,
"text": "(Ramakrishnan et al., 2004)",
"ref_id": "BIBREF2"
},
{
"start": 134,
"end": 157,
"text": "Vasilescu et al. (2004)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "3"
},
{
"text": "But till now no work has been reported which makes use of Extended WordNet for Lesk-like gloss-oriented approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "3"
},
{
"text": "The proposed sense disambiguation algorithm is a major modification of the Lesk algorithm (Lesk, 1986) . WordNet and Extended WordNet are the main resources.",
"cite_spans": [
{
"start": 90,
"end": 102,
"text": "(Lesk, 1986)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Sense Disambiguation Algorithm",
"sec_num": "4"
},
{
"text": "We modify the Lesk algorithm (Lesk, 1986) in several ways to create our baseline algorithm. The Lesk algorithm relies on glosses found in traditional dictionaries which often do not have enough words for the algorithm to work well. We choose the lexical database WordNet, to take advantage of the highly inter-connected set of relations among different words that WordNet offers, and Extended WordNet to capitalize on its (POS and sense) tagged glosses. The Lesk algorithm takes a local approach for sense disambiguation. The disambiguation of the various words in a sentence is a series of independent problems and has no effect on each other. We propose a global approach where all the words (we mean by word, an open-class lemma) in the context window are simultaneously disambiguated in a bid to get the best combination of senses for all the words in the window instead of only the target word. The process can be thought of as sense disambiguation of the whole context, instead of a word.",
"cite_spans": [
{
"start": 29,
"end": 41,
"text": "(Lesk, 1986)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Modifications to the Lesk Algorithm",
"sec_num": "4.1"
},
{
"text": "The Lesk algorithm disambiguates words in short phrases. But, the basic unit of disambiguation of our algorithm is the entire sentence under consideration. We later modify the context to include the previous and next sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modifications to the Lesk Algorithm",
"sec_num": "4.1"
},
{
"text": "Another major change is that the dictionary definition or gloss of each of its senses is compared to the glosses of every other word in the context by the Lesk algorithm. But in the present work, the words themselves are compared with the glosses of every other word in the context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modifications to the Lesk Algorithm",
"sec_num": "4.1"
},
{
"text": "While Lesk's algorithm restricts its comparisons to the dictionary meanings of the words being disambiguated, our choice of dictionary allows us to also compare the meanings (i.e., glosses) of the words, as well as the words that are related to them through various relationships defined in WordNet. For each POS we choose a relation if links of its kind form at least 5% of the total number of links for that part of speech, with two exceptions. We use the attribute relation although there are not many links of its kind. But this relation links adjectives, which are not well developed in WordNet, to nouns which have a lot of data about them. This potential to tap into the rich noun data prompted us to use this relation. Another exception is the antonymy relationship. Although there are sufficient antonymy links for adjectives and adverbs, we have not utilized these relations. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Choice of Which Glosses to Use",
"sec_num": "4.2"
},
{
"text": "The gloss bag is constructed for every sense of every word in the sentence. The gloss-bag is constructed from the POS and sense tagged glosses of synsets, obtained from the Extended WordNet. For any synset, the words forming the synset and the gloss definition contribute to the gloss-bag. The non-content words are left out. Example sentences do not contribute to the gloss bag since they are not (POS and sense) tagged. Each word along with its POS and sense-tag are stored in the gloss bag. For words with different POS, different relations are taken into account (according to Table 1) for building the corresponding gloss-bag. This gloss-bag creation process can be performed offline or online. It can be performed dynamically on a as-when-needed basis. Or, glossbags can be created for all WordNet entries only once and stored in a data file in prior. The issue is time versus space.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Algorithm",
"sec_num": "4.3"
},
{
"text": "Once, this gloss-bag creation process is over, the comparison process starts. Each word (say W i ) in the context is compared with each word in the gloss-bag for every sense (say S k ) of every other word (say W j ) in the context. If a match is found, they are checked further for part-of-speech match. If the words match in part-of-speech as well, a score is assigned to both the words: the word being matched (W i ) and the word whose gloss-bag contains the match (W j ). This matching event indicates mutual confidence towards each other, so both words are rewarded for this event. Two twodimensional (one for word index and the other for sense index) vectors are maintained: sense_vote for the word in context, and sense_score for the word in gloss-bag. Say, for example, the context word (W i # noun) matches with gloss word (W n # noun # m) (i.e., W i = W n ) in the gloss bag for k th sense of W j . Then, a score of 1/(gloss bag size of (W jk )) is assigned to both sense_vote [i] [m] and sense_score [j] [k] . Scores are normalized before assigning because of huge discrepancy in gloss-bag sizes. This process continues until each context word is matched against all gloss-bag words for each sense of every other context words.",
"cite_spans": [
{
"start": 986,
"end": 989,
"text": "[i]",
"ref_id": null
},
{
"start": 1010,
"end": 1013,
"text": "[j]",
"ref_id": null
},
{
"start": 1014,
"end": 1017,
"text": "[k]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Algorithm",
"sec_num": "4.3"
},
{
"text": "Once all the comparisons have been made, we add sense_vote value with the sense_score linearly value for each sense of every word to arrive at the combination score for this word-sense pair.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Algorithm",
"sec_num": "4.3"
},
{
"text": "The algorithm assigns a word the n th sense for which the corresponding sense_vote and sense_score produces the maximum sum, and it does not assign a word any sense when the corresponding sense_vote and sense_score values are 0, even if the word has only one sense. In the event of a tie, we choose the one that is more frequent, as specified by WordNet.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Algorithm",
"sec_num": "4.3"
},
{
"text": "Assuming that there are N words in the window of context (i.e. the sentence), and that, on an average there are S senses per word, and G number of gloss words in each gloss bag per sense, N * S gloss bags need to be constructed, giving rise to a total of N * S * G gloss words. Now these many gloss words are compared against each of the N context words. Thus, N 2 * S * G pairs of word comparisons need to be performed. Both, S and G vary heavily.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Algorithm",
"sec_num": "4.3"
},
{
"text": "The algorithm discussed thus far is our baseline algorithm. We made some changes, as described in the following two subsections, to investigate whether the performance of the algorithm can be improved.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variants of the Algorithm",
"sec_num": "5"
},
{
"text": "The poor performance of the algorithm perhaps suggests that sentential context is not enough for this algorithm to work. So we went for a larger context: a context window containing the current sentence under consideration (target sentence), its preceding sentence and the succeeding sentence. This increment in context size indeed performed better than the baseline algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Increasing the Context Size",
"sec_num": "5.1"
},
{
"text": "When constructing the gloss-bags for a word-sense pair, some words may appear in more than one gloss (by gloss we mean to say synonyms as well as gloss). So, we added another parameter with every (word#pos#sense) in a gloss bag: noc -the number of occurrence of this (word#pos#sense) combination in this gloss-bag.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Assigning Different Scores",
"sec_num": "5.2"
},
{
"text": "And, in case of a match of context word (say W i ) with a gloss-bag word (of say k th sense of word W j ), we scored the words in four ways to see if this phenomenon has any effect on the sense disambiguation process. Say, for example, the context word (W i # noun) matches with gloss word (W n # noun # m # noc) in the gloss bag for k th sense of W j (i.e., the particular word appears noc times in the said gloss-bag) and the gloss bag size is gbs. Then, we reward W i and W j for this event in four ways given below. The results of this four-way scoring proved that this indeed has influence on the disambiguation process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Assigning Different Scores",
"sec_num": "5.2"
},
{
"text": "The WSD system is based on Extended Word-Net version 2.0-1.1 (the latest release), which is in turn based on WordNet version 2.0. So, the system returns WordNet 2.0 sense indexes. These Word-Net sense indexes are then mapped to WordNet 2.1 sense indexes using sensemap 2.0 to 2.1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Assigning Different Scores",
"sec_num": "5.2"
},
{
"text": "The system has been evaluated on the SemEval-2007 English All-Words Tasks (465 test in-stances), as well as on the first 10 Semcor 2.0 files, which are manually disambiguated text corpora using WordNet senses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluations",
"sec_num": "6"
},
{
"text": "We compute F-Score as 2*P*R / (P+R). Table 2 shows the performance of the four variants of the system (with a context size of 3 sentences) on the first 10 Semcor 2.0 files. From table 2, it is clearly evident that model C produces the best result (precision -.621, recall -.533) When default WordNet first senses were assigned to the (40) words for which the algorithm failed to predict senses, both the precision and recall values went up to .402 (this result has been submitted in SemEval-2007). The WSD system stood 10 th in the SemEval-2007 English All-Words task.",
"cite_spans": [
{
"start": 247,
"end": 278,
"text": "(precision -.621, recall -.533)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluations",
"sec_num": "6"
},
{
"text": "We believe that this somewhat poor showing can be partially attributed to the brevity of definitions in WordNet in particular and dictionaries in general. The Lesk algorithm is crucially dependent on the lengths of glosses. However lexicographers aim to create short and precise definitions which, though a desirable quality in dictionaries, is disadvantageous to this algorithm. Nouns have the longest average glosses in WordNet, and indeed the highest recall obtained is on nouns. The characteristics of the gloss bags need to be further investigated. Again many of the sense tagged gloss words in Extended WordNet, which are determinant factors in this algorithm, are of \"silver\" or \"normal\" quality. And finally, since the system returns WordNet 2.0 sense indexes which are mapped to WordNet 2.1 indexes with certain amount of confidence using sensemap 2.0 to 2.1, there may be some loss of information during this mapping process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions",
"sec_num": "7"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Framework and Results for English SENSEVAL",
"authors": [
{
"first": "A",
"middle": [],
"last": "Kilgarriff",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Rosenzweig",
"suffix": ""
}
],
"year": 2000,
"venue": "Computers and the Humanities",
"volume": "34",
"issue": "",
"pages": "15--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Kilgarriff, and J. Rosenzweig. 2000. Framework and Results for English SENSEVAL. Computers and the Humanities, 34, 15-48.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Evaluating Variants of the Lesk Approach for Disambiguating Words. LREC",
"authors": [
{
"first": "Florentina",
"middle": [],
"last": "Vasilescu",
"suffix": ""
},
{
"first": "Philippe",
"middle": [],
"last": "Langlais",
"suffix": ""
},
{
"first": "Guy",
"middle": [],
"last": "Lapalme",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Florentina Vasilescu, Philippe Langlais, and Guy La- palme. 2004. Evaluating Variants of the Lesk Ap- proach for Disambiguating Words. LREC, Portugal.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A Gloss Centered Algorithm for Word Sense Disambiguation",
"authors": [
{
"first": "G",
"middle": [],
"last": "Ramakrishnan",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Prithviraj",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Bhattacharyya",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the ACL SEN-SEVAL",
"volume": "",
"issue": "",
"pages": "217--221",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Ramakrishnan, B. Prithviraj, and P. Bhattacharyya. 2004. A Gloss Centered Algorithm for Word Sense Disambiguation. Proceedings of the ACL SEN- SEVAL 2004, Barcelona, Spain, 217-221.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Automatic sense disambiguation using machine readable dictionaries",
"authors": [
{
"first": "M",
"middle": [],
"last": "Lesk",
"suffix": ""
}
],
"year": 1986,
"venue": "Proceedings of SIGDOC '86",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Lesk. 1986. Automatic sense disambiguation using machine readable dictionaries: How to tell a pine cone from a ice cream cone. Proceedings of SIGDOC '86.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "SENSEVAL : The Evaluation of Word Sense Disambiguation Systems",
"authors": [
{
"first": "P",
"middle": [],
"last": "Edmonds",
"suffix": ""
}
],
"year": 2002,
"venue": "ELRA Newsletter",
"volume": "7",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Edmonds. 2002. SENSEVAL : The Evaluation of Word Sense Disambiguation Systems, ELRA News- letter, Vol. 7, No. 3.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Adapting the Lesk Algorithm for Word Sense Disambiguation to WordNet",
"authors": [
{
"first": "S",
"middle": [],
"last": "Banerjee",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Banerjee. 2002. Adapting the Lesk Algorithm for Word Sense Disambiguation to WordNet. MS Thesis, University of Minnesota.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "An Adapted Lesk Algorithm for Word Sense Disambiguation Using WordNet",
"authors": [
{
"first": "S",
"middle": [],
"last": "Banerjee",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Pedersen",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Banerjee, and T. Pedersen. 2002. An Adapted Lesk Algorithm for Word Sense Disambiguation Using WordNet. CICLing, Mexico.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "WordNet2 -a morphologically and semantically enhanced resource",
"authors": [
{
"first": "S",
"middle": [],
"last": "Harabagiu",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Moldovan",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of SIGLEX-99",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Harabagiu, G. Miller, and D. Moldovan. 1999. WordNet2 -a morphologically and semantically en- hanced resource. Proceedings of SIGLEX-99, Univ of Mariland. 1-8.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"content": "<table><tr><td/><td>Verb</td><td>Adjective</td></tr><tr><td>Hypernym</td><td>Hyponym</td><td>Attribute</td></tr><tr><td>Hyponym</td><td>Troponym</td><td>Also see</td></tr><tr><td>Holonym</td><td>Also see</td><td>Similar to</td></tr><tr><td>Meronym</td><td/><td>Pertainym of</td></tr><tr><td>Attribute</td><td/><td/></tr></table>",
"type_str": "table",
"num": null,
"text": "WordNet relations chosen for the disambiguation algorithm",
"html": null
}
}
}
}