{
"paper_id": "S01-1018",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:35:37.918822Z"
},
"title": "The UNED systems at SENSEVAL-2",
"authors": [
{
"first": "David",
"middle": [],
"last": "Fermindez-Amor6s",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Julio",
"middle": [],
"last": "Gonzalo",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Felisa",
"middle": [],
"last": "Verdejo",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We have participated in the SENSEVAL-2 English tasks (all words and lexical sample) with an unsupervised system based on mutual information measured over a large corpus (277 million words) and some additional heuristics. A supervised extension of the system was also presented to the lexical sample task. Our system scored first among unsupervised systems in both tasks: 56.9% recall in all words, 40.2% in lexical sample. This is slightly worse than the first sense heuristic for all. words and 3.6% better for the lexical sample, a strong indication that unsupervised Word Sense Disambiguation remains being a strong challenge.",
"pdf_parse": {
"paper_id": "S01-1018",
"_pdf_hash": "",
"abstract": [
{
"text": "We have participated in the SENSEVAL-2 English tasks (all words and lexical sample) with an unsupervised system based on mutual information measured over a large corpus (277 million words) and some additional heuristics. A supervised extension of the system was also presented to the lexical sample task. Our system scored first among unsupervised systems in both tasks: 56.9% recall in all words, 40.2% in lexical sample. This is slightly worse than the first sense heuristic for all. words and 3.6% better for the lexical sample, a strong indication that unsupervised Word Sense Disambiguation remains being a strong challenge.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "We advocate researching unsupervised techniques for Word Sense Disambiguation (WSD). Supervised techniques offer better results in general but the setbacks, such as the problem of developing reliable training data, are very considerable. Also there's probably more to WSD than blind machine learning (a typical approach, although such systems produce interesting baselines).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Within the unsupervised paradigm, we are interested in performing in-depth measures of the disambiguation potential of different sources of information. We have previously investigated the informational value of semantic distance measures in ) . For SENSEVAL-2, we have turned to investigate pure coocurrence information as a source of disambiguation evidence. In essence, our system computes a matrix of mutual information for a fixed vocabulary and applies it to weight coocurrence counting between sense and context characteristic vectors.",
"cite_spans": [
{
"start": 242,
"end": 243,
"text": ")",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the next section we describe the process of constructing the relevance matrix. In section 3 we present the particular heuristics used for the competing systems. In section 4 we show the results by system and heuristic and some baselines for comparison. Finally in the last sections we draw some conclusions about the exercise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2.1 Corpus processing Before building our systems we have developed a resource we've called the relevance matrix. The raw data used to build the matrix comes from the Project Gutenberg (PG) 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Relevance matrix",
"sec_num": "2"
},
{
"text": "At the time of the creation of the matrix the PG consisted of more than 3000 books of diverse genres. We have adapted these books for our purpose : First, language identification was used to filter books written in English; Then we stripped off the disclaimers. The result is a collection of around 1.3Gb of plain text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Relevance matrix",
"sec_num": "2"
},
{
"text": "Finally we tokenize, lemmatize, strip punctuation and stop words and detect numbers and proper nouns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Relevance matrix",
"sec_num": "2"
},
{
"text": "We have built a vocabulary of the 20000 most frequent words (or labels, as we have changed all the proper nouns detected to the label PROPER_NOUN and all numbers detected to NUMBER) in the text and a symmetric coocurrence matrix between these words within a context of 61 words (we thought a broad context of radius 30 would be appropriate since we are trying to capture vague semantic relations).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coocurrence matrix",
"sec_num": "2.2"
},
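A minimal sketch (ours, not the authors' code) of the windowed counting described above, assuming the corpus has already been tokenized, lemmatized and label-substituted into a flat list of tokens:

```python
from collections import Counter

RADIUS = 30  # 61-word window: the word itself plus 30 words on each side

def build_cooccurrence(tokens, vocab_size=20000):
    """Vocabulary of the most frequent words/labels plus symmetric
    co-occurrence counts within the sliding window."""
    vocab = {w for w, _ in Counter(tokens).most_common(vocab_size)}
    cooc = Counter()
    for i, w in enumerate(tokens):
        if w not in vocab:
            continue
        # Look only ahead; frozenset keys make the counts symmetric.
        for v in tokens[i + 1:i + 1 + RADIUS]:
            if v in vocab and v != w:
                cooc[frozenset((w, v))] += 1
    return vocab, cooc
```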
{
"text": "In a second step, we have built another symmetric matrix, which we have called relevance matrix, using a mutual information measure between the words (or labels), so that for two words a and b, the entry for them would be ~i)~~l), where P(a) is the probability of finding the word a in a random context of a given size. P(a n b) is the probability of finding both a and b in a random context of the fixed size. We've introduced a threshold of 2 below which we set the entry to zero for practical purposes. We think that this is a valuable resource that could be of interest for many other applications other than WSD. Also, it can only grow in quality since at the time of making this report the data in the PG has almost doubled in size.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relevance matrix",
"sec_num": "2.3"
},
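A sketch of the weighting itself, under our reading of the ratio above; ctx_freq[a] (number of contexts containing a), joint (number of contexts containing both words) and n_contexts (number of contexts sampled) are hypothetical names for the counts collected from the corpus:

```python
def relevance(a, b, ctx_freq, joint, n_contexts, threshold=2.0):
    """P(a and b) / (P(a) P(b)), zeroed below the practical threshold."""
    if ctx_freq.get(a, 0) == 0 or ctx_freq.get(b, 0) == 0:
        return 0.0
    p_a = ctx_freq[a] / n_contexts
    p_b = ctx_freq[b] / n_contexts
    p_ab = joint.get(frozenset((a, b)), 0) / n_contexts
    score = p_ab / (p_a * p_b)
    return score if score >= threshold else 0.0
```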
{
"text": "We have developed a very simple language in order to systematize the experiments. This language allows the construction of WSD systems composed of different heuristics that are applied in cascade so that each word to be disambiguated is presented to the first heuristic, and if it fails to disambiguate, then the word is passed on to the second heuristic and so on. We can have several such systems running in parallel for efficiency reasons (the matrix has high memory requirements). Next we show the heuristics we have considered to build the systems \u2022 Monosemous expressions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cascade of heuristics",
"sec_num": "3"
},
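The control flow of the cascade, as a minimal Python sketch (the authors used their own small description language; the heuristic names in the comment are illustrative):

```python
def disambiguate(word, context, heuristics):
    """Try each heuristic in order; the first non-None answer wins."""
    for heuristic in heuristics:
        sense = heuristic(word, context)
        if sense is not None:
            return sense
    return None  # word is left undisambiguated

# e.g. system = [monosemous, statistical_filter, relevance_filter, first_sense]
```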
{
"text": "Monosemous expressions are simply unambiguous words in the case of the all words English task. In the case of the lexical sample English task, however, the annotations include multiword expressions. We have implemented a multiword term detector that considers the multiword terms from WordNet's index.sense file and detects them in the test file using a multilevel backtracking algorithm that takes account of the inflected and base forms of the components of a particular multiword in order to maximize multiword detection. We tested this algorithm against the PG and found millions of these multiword terms. We restricted ourselves to the multiwords already present in the training file since there are, apparently, multiword expressions that where overlooked during manual tagging (for instance the WordNet expression 'the_good_old_days' is not hand-tagged",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cascade of heuristics",
"sec_num": "3"
},
{
"text": "as such in the test files)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "76",
"sec_num": null
},
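A much simplified sketch of the detection step (greedy longest match over surface and base forms; the authors' multilevel backtracking algorithm is more thorough):

```python
def detect_multiwords(tokens, lemmas, multiwords, max_len=5):
    """Match known multiword terms (components joined by '_', as in
    WordNet's index.sense) against inflected or base forms."""
    i, found = 0, []
    while i < len(tokens):
        for n in range(min(max_len, len(tokens) - i), 1, -1):
            surface = "_".join(tokens[i:i + n]).lower()
            base = "_".join(lemmas[i:i + n]).lower()
            if base in multiwords or surface in multiwords:
                found.append((i, base if base in multiwords else surface))
                i += n  # consume the matched span
                break
        else:
            i += 1
    return found
```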
{
"text": "\u2022 Statistical filter WordNet comes with a file, cntlist, literally 'file listing number of times each tagged sense occurs in a semantic concordance' so we use this to compute the relative probability of a sense given a word ( approximate in the case of collections other than SemCor). Using this information, we eliminated the senses that had a probability under 10% and if only one sense remains we choose it. Otherwise we go on to the next heuristic. In other words, we didn't apply complex techniques with words which are highly skewed in meaning 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "76",
"sec_num": null
},
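A sketch of this filter, assuming sense_counts maps each sense of the target word to its cntlist count (a hypothetical data structure, not the file's actual format):

```python
def statistical_filter(sense_counts, cutoff=0.10):
    """Drop senses whose relative frequency is below the cutoff and
    answer only if exactly one sense survives; otherwise defer."""
    total = sum(sense_counts.values())
    if total == 0:
        return None
    surviving = [s for s, c in sense_counts.items() if c / total >= cutoff]
    return surviving[0] if len(surviving) == 1 else None
```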
{
"text": "\u2022 Relevance filter This heuristic makes use of the relevance matrix. In order to assign a score to a sense, we count the coocurrences of words in the context of the word to be disambiguated with the words in the definition of the senses (the WordNet gloss tokenized, lemmatized and stripped out of stop words and punctuation signs) weighting each coocurrence by the entry in the relevance matrix for the word to be disambiguated and the word whose coocurrences are being counted, i.e., if s is a sense of the word a whose definition is Sand C is the context in which a is to be disambiguated, then the score for s would be:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "76",
"sec_num": null
},
{
"text": "L Rwafreq(w, C)freq(w, S)idf(w, a)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "76",
"sec_num": null
},
{
"text": "wEC Where idf(w, a) = log !i.e, with N being the number of senses for word a and dw the number of sense glosses in which w appears. freq(w, C) is the frequency of word win the context C and freq ( w, S) is the frequency of w in the sense gloss S.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "76",
"sec_num": null
},
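The score in code, directly from the definitions above (a sketch; R is the relevance matrix as a nested dict, and glosses maps each sense of the word a to its preprocessed gloss tokens — both names are our own):

```python
import math
from collections import Counter

def idf(w, glosses):
    """log(N / d_w): N senses of the word, d_w glosses containing w."""
    d_w = sum(1 for gloss in glosses.values() if w in gloss)
    return math.log(len(glosses) / d_w) if d_w else 0.0

def relevance_score(sense, a, context, glosses, R):
    """Sum over context words of R[w][a] * freq(w,C) * freq(w,S) * idf(w,a)."""
    ctx_freq = Counter(context)
    gloss_freq = Counter(glosses[sense])
    return sum(R.get(w, {}).get(a, 0.0) * ctx_freq[w] * gloss_freq[w]
               * idf(w, glosses) for w in ctx_freq)
```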
{
"text": "The idea is to prime the occurrences of words that are relevant to the word being disambiguated and give low credit (possibly none) to the words that are incidentally used in the context. Also, in the all words task (where POS tags from the TreeBank are provided) we have considered only the context words that have a POS tag compatible with that of the word being disambiguated. By compatible we mean nouns and nouns, nouns and verbs, nouns and adjectives, verbs and verbs, verbs and adverbs and vice versa. Roughly speaking, words that can have an intra-phrase relation. We also filtered out senses with low values in the cntlist file, and in any case we only considered at most the first six senses of a word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "76",
"sec_num": null
},
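The compatibility test as a sketch (the coarse tag names are our own shorthand for the enumeration above, not the TreeBank tags):

```python
# noun-noun, noun-verb, noun-adjective, verb-verb, verb-adverb (symmetric)
COMPATIBLE = {("n", "n"), ("n", "v"), ("n", "a"), ("v", "v"), ("v", "r")}

def pos_compatible(pos1, pos2):
    """True if two coarse POS tags can stand in an intra-phrase relation."""
    return (pos1, pos2) in COMPATIBLE or (pos2, pos1) in COMPATIBLE
```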
{
"text": "\u2022 Enriching sense characteristic vectors The relevance filter provided very good results in our experiments with SemCor and SENSEVAL-1 data as far as precision is concerned, but the problem is that there is little overlapping between the definitions of the senses and the contexts in terms of coocurrence (after removing stop words and computing idf) which means that the previous heuristic didn't disambiguate many words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "76",
"sec_num": null
},
{
"text": "To overcome this problem, we enrich the senses characteristic vectors adding for each word in the vector the words related to it via the relevance matrix weights. This corresponds to the algebraic notion of multiplying the matrix and the characteristic vector. In other words, if R is the relevance matrix and v our characteristic vector we would finally use Rv",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "76",
"sec_num": null
},
{
"text": "+ v.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "76",
"sec_num": null
},
{
"text": "This should increase the number of words disambiguated provided we eliminate the idf factor (which would be zero in most cases because now the sense characteristics vectors are not as sparse as before). When we also discard senses with low relative frequency in SemCor we call this heuristic mixed filter.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "76",
"sec_num": null
},
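In matrix form the enrichment is a single multiply-and-add; a minimal numpy sketch (ours), with R the relevance matrix and v the characteristic vector of a sense:

```python
import numpy as np

def enrich(R, v):
    """Expand a sense characteristic vector with related words: Rv + v."""
    return R @ v + v

# Toy example over a 3-word vocabulary: the gloss contains only word 0,
# and the matrix relates word 0 to word 1 with weight 2.
R = np.array([[0., 2., 0.], [2., 0., 3.], [0., 3., 0.]])
v = np.array([1., 0., 0.])
print(enrich(R, v))  # -> [1. 2. 0.]
```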
{
"text": "\u2022 back off strategies For those cases that couldn't be covered by other heuristics we employed the first sense heuristic. In the case of the supervised system for the English lexical sample task we thought of using the most frequent sense but didn't implement it due to lack of time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "76",
"sec_num": null
},
{
"text": "\u2022 UNED-AW-U2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Systems and Results",
"sec_num": "4"
},
{
"text": "We won't delve into UNED-AW-U system as it is very similar to this one. This is an (arguably) unsupervised system for the English all words task. The heuristics we used and the results obtained for each of them are shown in Table 1 Table 2 : UNED-AW-U2 vs baselines In the lexical sample task, we weren't able to multiply by the relevance matrix due to time constraints, so in order to increase the coverage for the relevance filter heuristic we expanded the definitions of the senses with those of the first 5 levels of hyponyms. Also, we selected the radius of the context to be considered depending on the POS of the word being disambiguated. For nouns and verbs we used 25 words radius neighbourhood and for adjectives 5 words at each side.",
"cite_spans": [],
"ref_spans": [
{
"start": 224,
"end": 231,
"text": "Table 1",
"ref_id": null
},
{
"start": 232,
"end": 239,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Systems and Results",
"sec_num": "4"
},
{
"text": "\u2022 UNED-LS-U This is essentially the same system as UNED-AW-U2, in this case applied to the lexical sample task. The results are displayed in Table 3 . We've put a lot of effort into making the relevance matrix but its performance in the WSD task is striking. The matrix is interesting and its application in the relevance filter heuristic is slightly better than simple coocurrence counting, which proves that it doesn't discard relevant words. The problem seems to lie in the fact that irrelevant words (with respect to the word to be disambiguated) rarely occur both in the context of the word and in the definition of the senses (if they appeared in the definition they wouldn't be so irrelevant) so the direct impact of the information in the matrix is very weak. Likewise, relevant (via the matrix) words with respect to the word to be disambiguated occur often both in the context and in the definitions so the final result is very similar to simple coocurrence counting. This problem only showed up in the lexical sample task systems. In the all words systems we were to enrich the sense definitions to make a more advantageous use of the matrix.",
"cite_spans": [],
"ref_spans": [
{
"start": 141,
"end": 148,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Systems and Results",
"sec_num": "4"
},
{
"text": "We were very confident that the relevance filter would yield good results as we have al-ready evaluated it against the SENSEVAL-1 and SemCor data. We felt however that we could improve the coverage of the heuristic enriching the definitions multiplying by the matrix. A similar approach was used by Yarowsky (Yarowsky, 1992) and Schiitze (Schiitze and Pedersen, 1995) and it worked for them. This wasn't the case for us; still, we think the resource is well worth researching other ways of using it.",
"cite_spans": [
{
"start": 308,
"end": 324,
"text": "(Yarowsky, 1992)",
"ref_id": "BIBREF2"
},
{
"start": 338,
"end": 367,
"text": "(Schiitze and Pedersen, 1995)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Systems and Results",
"sec_num": "4"
},
{
"text": "As for the overall scores, the unsupervised lexical sample obtained the highest recall of the unsupervised systems, which proves that carefully implementing simple techniques still pays off. In the all words task the UNED-WS-U2 had also the highest recall among the unsupervised systems (as characterized in the SENSEVAL-2 web descriptions), and the fourth overall. We'll train it with the examples in Semcor 1.6 and see how much we can gain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Systems and Results",
"sec_num": "4"
},
{
"text": "Our system scored first among unsupervised systems in both tasks: 56.9% recall in all words, 40.2% in lexical sample. This is slightly worse than the first sense heuristic for all words and 3.6% better for the lexical sample, a strong indication that unsupervised Word Sense Disambiguation remains being a strong challenge.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "http://promo.net/pg",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Some people may argue that this is a supervised approach. In our opinion, the cntlist information does not make a system supervised per se, because a) It is standard information provided as part of the dictionary and b) We don't use the examples to feed or train any procedure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The role of conceptual relations in word sense disambiguation",
"authors": [
{
"first": "D",
"middle": [],
"last": "Fernandez-Amor6s",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Gonzalo",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Verdejo",
"suffix": ""
}
],
"year": null,
"venue": "Applications of Natural Language to Information Systems (NLDB)'Ol",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Fernandez-Amor6s, J. Gonzalo, and F. Verdejo. The role of conceptual relations in word sense disambiguation. In Applica- tions of Natural Language to Information Systems (NLDB)'Ol, Madrid.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Information retrieval based on word senses",
"authors": [
{
"first": "H",
"middle": [],
"last": "Schiitze",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Pedersen",
"suffix": ""
}
],
"year": 1995,
"venue": "Fourth Annual Symposium on Document Analysis and Information Retrieval",
"volume": "",
"issue": "",
"pages": "161--175",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Schiitze and J. Pedersen. 1995. Information retrieval based on word senses. In Fourth An- nual Symposium on Document Analysis and Information Retrieval, Las Vegas NV, pages 161-175.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Using statistical models of roget's categories trained on large corpora",
"authors": [
{
"first": "D",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 1992,
"venue": "COLING'92",
"volume": "",
"issue": "",
"pages": "454--460",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Yarowsky. 1992. Using statistical models of roget's categories trained on large corpora. In COLING'92, Nantes, pages 454-460.",
"links": null
}
},
"ref_entries": {
"TABREF2": {
"text": "",
"num": null,
"type_str": "table",
"content": "<table><tr><td colspan=\"5\">: Unsupervised heuristics for English</td></tr><tr><td colspan=\"2\">lexical sample task</td><td/><td/><td/></tr><tr><td colspan=\"2\">\u2022 UNED-LS-T</td><td/><td/><td/></tr><tr><td colspan=\"5\">This is a supervised variant of the previous</td></tr><tr><td colspan=\"5\">systems. We have added the training ex-</td></tr><tr><td colspan=\"5\">amples to the definitions of the senses giv-</td></tr><tr><td colspan=\"5\">ing the same weight to the definition and</td></tr><tr><td colspan=\"5\">to all the examples as a whole (i.e. defini-</td></tr><tr><td colspan=\"5\">tions are considered more interesting than</td></tr><tr><td>examples)</td><td/><td/><td/><td/></tr><tr><td>Heuristic</td><td>Att.</td><td>Score</td><td>Prec</td><td>Recall</td></tr><tr><td colspan=\"5\">Relevance filt 4116 206150 50.1% 47.6%</td></tr><tr><td>First sense</td><td>208</td><td>9300</td><td>44.7%</td><td>2.1%</td></tr><tr><td>Total</td><td colspan=\"4\">4324 215450 49.8% 49.8%</td></tr></table>",
"html": null
},
"TABREF3": {
"text": "",
"num": null,
"type_str": "table",
"content": "<table><tr><td>: Supervised heuristics for English lexi-</td></tr><tr><td>cal sample task</td></tr><tr><td>5 Discussion and conclusions</td></tr></table>",
"html": null
}
}
}
}