{
"paper_id": "S07-1006",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:22:45.589480Z"
},
"title": "SemEval-2007 Task 07: Coarse-Grained English All-Words Task",
"authors": [
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Kenneth",
"middle": [
"C"
],
"last": "Litkowski",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Orin",
"middle": [],
"last": "Hargraves",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper presents the coarse-grained English all-words task at SemEval-2007. We describe our experience in producing a coarse version of the WordNet sense inventory and preparing the sense-tagged corpus for the task. We present the results of participating systems and discuss future directions.",
"pdf_parse": {
"paper_id": "S07-1006",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper presents the coarse-grained English all-words task at SemEval-2007. We describe our experience in producing a coarse version of the WordNet sense inventory and preparing the sense-tagged corpus for the task. We present the results of participating systems and discuss future directions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "It is commonly thought that one of the major obstacles to high-performance Word Sense Disambiguation (WSD) is the fine granularity of sense inventories. State-of-the-art systems attained a disambiguation accuracy around 65% in the Senseval-3 all-words task (Snyder and Palmer, 2004) , where WordNet (Fellbaum, 1998) was adopted as a reference sense inventory. Unfortunately, WordNet is a fine-grained resource, encoding sense distinctions that are difficult to recognize even for human annotators (Edmonds and Kilgarriff, 2002) . Making WSD an enabling technique for end-to-end applications clearly depends on the ability to deal with reasonable sense distinctions. The aim of this task was to explicitly tackle the granularity issue and study the performance of WSD systems on an all-words basis when a coarser set of senses is provided for the target words. Given the need of the NLP community to work on freely available resources, the solution of adopting a different computational lexicon is not viable. On the other hand, the production of a coarse-grained sense inventory is not a simple task. The main issue is certainly the subjectivity of sense clusters. To overcome this problem, different strategies can be adopted. For instance, in the OntoNotes project (Hovy et al., 2006) senses are grouped until a 90% inter-annotator agreement is achieved. In contrast, as we describe in this paper, our approach is based on a mapping to a previously existing inventory which encodes sense distinctions at different levels of granularity, thus allowing to induce a sense clustering for the mapped senses.",
"cite_spans": [
{
"start": 257,
"end": 282,
"text": "(Snyder and Palmer, 2004)",
"ref_id": "BIBREF7"
},
{
"start": 299,
"end": 315,
"text": "(Fellbaum, 1998)",
"ref_id": null
},
{
"start": 497,
"end": 527,
"text": "(Edmonds and Kilgarriff, 2002)",
"ref_id": "BIBREF1"
},
{
"start": 1267,
"end": 1286,
"text": "(Hovy et al., 2006)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We would like to mention that another SemEval-2007 task dealt with the issue of sense granularity for WSD, namely Task 17 (subtask #1): Coarsegrained English Lexical Sample WSD. In this paper, we report our experience in organizing Task 07.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The task required participating systems to annotate open-class words (i.e. nouns, verbs, adjectives, and adverbs) in a test corpus with the most appropriate sense from a coarse-grained version of the WordNet sense inventory.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Setup",
"sec_num": "2"
},
{
"text": "The test data set consisted of 5,377 words of running text from five different articles: the first three (in common with Task 17) were obtained from the WSJ corpus, the fourth was the Wikipedia entry for computer programming 1 , the fifth was an excerpt of Amy Steedman's Knights of the Art, biographies of Italian painters 2 . We decided to add the last two article domain words annotated d001 JOURNALISM 951 368 d002 BOOK REVIEW 987 379 d003 TRAVEL 1311 500 d004 COMPUTER SCIENCE 1326 677 d005 BIOGRAPHY 802 345 total 5377 2269 Table 1 : Statistics about the five articles in the test data set.",
"cite_spans": [],
"ref_spans": [
{
"start": 359,
"end": 574,
"text": "article domain words annotated d001 JOURNALISM 951 368 d002 BOOK REVIEW 987 379 d003 TRAVEL 1311 500 d004 COMPUTER SCIENCE 1326 677 d005 BIOGRAPHY 802 345 total 5377 2269 Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Test Corpus",
"sec_num": "2.1"
},
{
"text": "texts to the initial dataset as we wanted the corpus to have a size comparable to that of previous editions of all-words tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Test Corpus",
"sec_num": "2.1"
},
{
"text": "In Table 1 we report the domain, number of running words, and number of annotated words for the five articles. We observe that articles d003 and d004 are the largest in the corpus (they constitute 51.87% of it).",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Test Corpus",
"sec_num": "2.1"
},
{
"text": "To tackle the granularity issue, we produced a coarser-grained version of the WordNet sense inventory 3 based on the procedure described by Navigli (2006) . The method consists of automatically mapping WordNet senses to top level, numbered entries in the Oxford Dictionary of English (ODE, (Soanes and Stevenson, 2003) ). The semantic mapping between WordNet and ODE entries was obtained in two steps: first, we disambiguated with the SSI algorithm (Navigli and Velardi, 2005 ) the definitions of the two dictionaries, together with additional information (hypernyms and domain labels); second, for each WordNet sense, we determined the best matching ODE coarse entry. As a result, WordNet senses mapped to the same ODE entry were assigned to the same sense cluster. WordNet senses with no match were associated with a singleton sense. In contrast to the automatic method above, the sense mappings for all the words in our test corpus were manually produced by the third author, an expert lexicographer, with the aid of a mapping interface. Not all the words in the corpus could be mapped directly for several reasons: lacking entries in ODE (e.g. adjectives underlying and shivering), 3 We adopted WordNet 2.1, available from: http://wordnet.princeton.edu different spellings (e.g. after-effect vs. aftereffect, halfhearted vs. half-hearted, etc.), derivatives (e.g. procedural, gambler, etc.). In most of the cases, we asked the lexicographer to map senses of the original word to senses of lexically-related words (e.g. WordNet senses of procedural were mapped to ODE senses of procedure, etc.). When this mapping was not straightforward, we just adopted the WordNet sense inventory for that word.",
"cite_spans": [
{
"start": 140,
"end": 154,
"text": "Navigli (2006)",
"ref_id": "BIBREF6"
},
{
"start": 284,
"end": 318,
"text": "(ODE, (Soanes and Stevenson, 2003)",
"ref_id": null
},
{
"start": 449,
"end": 475,
"text": "(Navigli and Velardi, 2005",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Creation of a Coarse-Grained Sense Inventory",
"sec_num": "2.2"
},
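{
"text": "A minimal illustrative sketch (added by the editor, not part of the original paper) of the clustering rule just described: WordNet senses mapped to the same ODE entry form one coarse sense, and senses with no ODE match become singletons. The data structures and identifiers below are hypothetical placeholders, not the authors' actual resources.\n\nfrom collections import defaultdict\n\ndef cluster_senses(wn_senses, wn_to_ode):\n    # wn_senses: list of WordNet sense ids for a single lemma\n    # wn_to_ode: dict mapping a WordNet sense id to an ODE entry id, or None if unmapped\n    clusters = defaultdict(list)\n    for sense in wn_senses:\n        ode_entry = wn_to_ode.get(sense)\n        if ode_entry is None:\n            # no ODE match: the sense stays in a singleton cluster\n            clusters[('singleton', sense)].append(sense)\n        else:\n            # senses mapped to the same ODE entry share one coarse sense\n            clusters[('ode', ode_entry)].append(sense)\n    return list(clusters.values())",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Creation of a Coarse-Grained Sense Inventory",
"sec_num": "2.2"
},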
{
"text": "We released the entire sense groupings (those induced from the manual mapping for words in the test set plus those automatically derived on the other words) and made them available to the participants.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Creation of a Coarse-Grained Sense Inventory",
"sec_num": "2.2"
},
{
"text": "All open-class words (i.e. nouns, verbs, adjectives, and adverbs) with an existing sense in the WordNet inventory were manually annotated by the third author. Multi-word expressions were explicitly identified in the test set and annotated as such (this was made to allow a fair comparison among systems independent of their ability to identify multi-word expressions).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sense Annotation",
"sec_num": "2.3"
},
{
"text": "We excluded auxiliary verbs, uncovered phrasal and idiomatic verbs, exclamatory uses, etc. The annotator was allowed to tag words with multiple coarse senses, but was asked to make a single sense assignment whenever possible.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sense Annotation",
"sec_num": "2.3"
},
{
"text": "The lexicographer annotated an overall number of 2,316 content words. 47 (2%) of them were excluded because no WordNet sense was deemed appropriate. The remaining 2,269 content words thus constituted the test data set. Only 8 of them were assigned more than one sense: specifically, two coarse senses were assigned to a single word instance 4 and two distinct fine-grained senses were assigned to 7 word instances. This was a clear hint that the sense clusters were not ambiguous for the vast majority of words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sense Annotation",
"sec_num": "2.3"
},
{
"text": "In Table 2 we report information about the polysemy of the word instances in the test set. Overall, 29.88% (678/2269) of the word instances were monosemous (according to our coarse sense inventory). The average polysemy of the test set with the coarse-grained sense inventory was 3.06 compared to an average polysemy with the WordNet inventory polysemy N V A R all monosemous 358 86 141 93 678 polysemous 750 505 221 115 1591 total 1108 591 362 208 2269 Table 2 : Statistics about the test set polysemy (N = nouns, V = verbs, A = adjectives, R = adverbs).",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 2",
"ref_id": null
},
{
"start": 353,
"end": 476,
"text": "N V A R all monosemous 358 86 141 93 678 polysemous 750 505 221 115 1591 total 1108 591 362 208 2269 Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Sense Annotation",
"sec_num": "2.3"
},
{
"text": "of 6.18.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sense Annotation",
"sec_num": "2.3"
},
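{
"text": "A small editor-added sketch (not from the original paper) of how the figures above can be reproduced from a sense inventory: the monosemous fraction is the share of instances with exactly one coarse sense, and the average polysemy is the mean number of candidate senses per annotated instance. The inventory dictionary below is a hypothetical placeholder.\n\ndef polysemy_stats(instances, senses_of):\n    # instances: list of annotated word instances\n    # senses_of: dict mapping an instance to its set of candidate senses\n    counts = [len(senses_of[w]) for w in instances]\n    monosemous_share = sum(1 for c in counts if c == 1) / len(counts)\n    average_polysemy = sum(counts) / len(counts)\n    return monosemous_share, average_polysemy\n\n# With the coarse inventory the paper reports 678/2269 = 0.2988 monosemous\n# instances and an average polysemy of 3.06 (6.18 with the plain WordNet inventory).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sense Annotation",
"sec_num": "2.3"
},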
{
"text": "Recent estimations of the inter-annotator agreement when using the WordNet inventory report figures of 72.5% agreement in the preparation of the English all-words test set at Senseval-3 (Snyder and Palmer, 2004) and 67.3% on the Open Mind Word Expert annotation exercise (Chklovski and Mihalcea, 2002) .",
"cite_spans": [
{
"start": 186,
"end": 211,
"text": "(Snyder and Palmer, 2004)",
"ref_id": "BIBREF7"
},
{
"start": 271,
"end": 301,
"text": "(Chklovski and Mihalcea, 2002)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Inter-Annotator Agreement",
"sec_num": "2.4"
},
{
"text": "As the inter-annotator agreement is often considered an upper bound for WSD systems, it was desirable to have a much higher number for our task, given its coarse-grained nature. To this end, beside the expert lexicographer, a second author independently performed part of the manual sense mapping (590 word senses) described in Section 2.2. The pairwise agreement was 86.44%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inter-Annotator Agreement",
"sec_num": "2.4"
},
{
"text": "We repeated the same agreement evaluation on the sense annotation task of the test corpus. A second author independently annotated part of the test set (710 word instances). The pairwise agreement between the two authors was 93.80%. This figure, compared to those in the literature for fine-grained human annotations, gives us a clear indication that the agreement of human annotators strictly depends on the granularity of the adopted sense inventory.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inter-Annotator Agreement",
"sec_num": "2.4"
},
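{
"text": "For clarity, a tiny editor-added sketch (not from the original paper) of the pairwise agreement figures reported above: agreement is simply the fraction of items on which the two annotators made the same choice. The variable names are illustrative only.\n\ndef pairwise_agreement(annotations_a, annotations_b):\n    # annotations_a, annotations_b: parallel lists of sense assignments\n    # made independently by two annotators over the same items\n    assert len(annotations_a) == len(annotations_b)\n    matches = sum(1 for a, b in zip(annotations_a, annotations_b) if a == b)\n    return matches / len(annotations_a)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inter-Annotator Agreement",
"sec_num": "2.4"
},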
{
"text": "We calculated two baselines for the test corpus: a random baseline, in which senses are chosen at random, and the most frequent baseline (MFS), in which we assign the first WordNet sense to each word in the dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "3"
},
{
"text": "Formally, the accuracy of the random baseline was calculated as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "3"
},
{
"text": "BL Rand = 1 |T | |T | i=1 1 |CoarseSenses(w i )|",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "3"
},
{
"text": "where T is our test corpus, w i is the i-th word instance in T , and CoarseSenses(w i ) is the set of coarse senses for w i according to the sense clustering we produced as described in Section 2.2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "3"
},
{
"text": "The accuracy of the MFS baseline was calculated as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "3"
},
{
"text": "BL M F S = 1 |T | |T | i=1 \u03b4(w i , 1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "3"
},
{
"text": "where \u03b4(w i , k) equals 1 when the k-th sense of word w i belongs to the cluster(s) manually associated by the lexicographer to word w i (0 otherwise). Notice that our calculation of the MFS is based on the frequencies in the SemCor corpus (Miller et al., 1993) , as we exploit WordNet sense rankings.",
"cite_spans": [
{
"start": 240,
"end": 261,
"text": "(Miller et al., 1993)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "3"
},
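{
"text": "An editor-added sketch (not part of the original paper) showing how the two baseline formulas above translate into code: the random baseline accumulates the expected accuracy 1/|CoarseSenses(w_i)| per instance, and the MFS baseline counts instances whose first WordNet sense falls inside one of the gold coarse clusters. All data structures (coarse_senses, first_sense, gold_clusters) are hypothetical placeholders, not the task's actual file formats.\n\ndef random_baseline(instances, coarse_senses):\n    # coarse_senses: dict mapping an instance to its set of coarse senses\n    # expected accuracy of picking one coarse sense uniformly at random\n    return sum(1.0 / len(coarse_senses[w]) for w in instances) / len(instances)\n\ndef mfs_baseline(instances, first_sense, gold_clusters):\n    # first_sense: dict mapping an instance to its first (most frequent) WordNet sense,\n    #              ranked by SemCor frequencies\n    # gold_clusters: dict mapping an instance to the set of WordNet senses contained\n    #                in the coarse cluster(s) chosen by the lexicographer\n    hits = sum(1 for w in instances if first_sense[w] in gold_clusters[w])\n    return hits / len(instances)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "3"
},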
{
"text": "12 teams submitted 14 systems overall (plus two systems from a 13 th withdrawn team that we will not report). According to the SemEval policy for task organizers, we remark that the system labelled as UOR-SSI was submitted by the first author (the system is based on the Structural Semantic Interconnections algorithm (Navigli and Velardi, 2005) with a lexical knowledge base composed by Word-Net and approximately 70,000 relatedness edges). Even though we did not specifically enrich the algorithm's knowledge base on the task at hand, we list the system separately from the overall ranking.",
"cite_spans": [
{
"start": 318,
"end": 345,
"text": "(Navigli and Velardi, 2005)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "The results are shown in Table 3 . We calculated a MFS baseline of 78.89% and a random baseline of 52.43%. In Table 4 we report the F1 measures for all systems where we used the MFS as a backoff strategy when no sense assignment was attempted (this possibly reranked 6 systems -marked in bold in the table -which did not assign a sense to all word instances in the test set). Compared to previous results on fine-grained evaluation exercises (Edmonds and Kilgarriff, 2002; Snyder and Palmer, 2004) , the systems' results are much higher. On the other hand, the difference in performance between the MFS baseline and state-of-the-art systems (around 5%) on coarse-grained disambiguation is comparable to that of the Senseval-3 all-words exercise. However, given the novelty of the task we believe that systems can achieve even better perfor- Table 3 : System scores sorted by F1 measure (A = attempted, P = precision, R = recall, F1 = F1 measure, \u2020 : system from one of the task organizers). mance by heavily exploiting the coarse nature of the sense inventory.",
"cite_spans": [
{
"start": 442,
"end": 472,
"text": "(Edmonds and Kilgarriff, 2002;",
"ref_id": "BIBREF1"
},
{
"start": 473,
"end": 497,
"text": "Snyder and Palmer, 2004)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 25,
"end": 32,
"text": "Table 3",
"ref_id": null
},
{
"start": 110,
"end": 117,
"text": "Table 4",
"ref_id": "TABREF2"
},
{
"start": 841,
"end": 848,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
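{
"text": "To make the backoff evaluation above concrete, here is an editor-added sketch (not the official scorer) of how precision, recall, and F1 are typically computed when unanswered instances fall back to the MFS. The dictionaries are hypothetical placeholders for system output, gold coarse senses, and most frequent senses.\n\ndef score_with_mfs_backoff(system_answers, gold, mfs):\n    # system_answers: dict instance -> predicted sense, or None if no attempt was made\n    # gold: dict instance -> set of acceptable senses (any sense in the gold coarse cluster)\n    # mfs: dict instance -> most frequent sense, used as the backoff answer\n    attempted = 0\n    correct = 0\n    for inst, acceptable in gold.items():\n        pred = system_answers.get(inst)\n        if pred is None:\n            pred = mfs[inst]  # backoff: every instance now receives an answer\n        attempted += 1\n        if pred in acceptable:\n            correct += 1\n    precision = correct / attempted\n    recall = correct / len(gold)\n    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0\n    return precision, recall, f1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},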
{
"text": "In Table 5 we report the results for each of the five articles. The interesting aspect of the table is that documents from some domains seem to have predominant senses different from those in Sem-Cor. Specifically, the MFS baseline performs more poorly on documents d004 and d005, from the COMPUTER SCIENCE and BIOGRAPHY domains respectively. We believe this is due to the fact that these documents have specific predominant senses, which correspond less often to the most frequent sense in SemCor than for the other three documents. It is also interesting to observe that different systems perform differently on the five documents (we highlight in bold the best performing systems on each article).",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "Finally, we calculated the systems' performance by part of speech. The results are shown in Table 6 . Again, we note that different systems show different performance depending on the part-of-speech tag. Another interesting aspect is that the performance of the MFS baseline is very close to state-ofthe-art systems for adjectives and adverbs, whereas it is more than 3 points below for verbs, and around 5 for nouns. Table 6 : System scores by part-of-speech tag (N = nouns, V = verbs, A = adjectives, R = adverbs) sorted by overall F1 measure (best scores are marked in bold, \u2020 : system from one of the task organizers). Table 5 : System scores by article (best scores are marked in bold, \u2020 : system from one of the task organizers).",
"cite_spans": [],
"ref_spans": [
{
"start": 92,
"end": 100,
"text": "Table 6",
"ref_id": null
},
{
"start": 419,
"end": 426,
"text": "Table 6",
"ref_id": null
},
{
"start": 624,
"end": 631,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "In order to allow for a critical and comparative inspection of the system results, we asked the participants to answer some questions about their systems. These included information about whether: 1. the system used semantically-annotated and unannotated resources; 2. the system used the MFS as a backoff strategy; 3. the system used the coarse senses provided by the organizers; 4. the system was trained on some corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Systems Description",
"sec_num": "5"
},
{
"text": "We believe that this gives interesting information to provide a deeper understanding of the results. We summarize the participants' answers to the questionnaires in Table 7 . We report about the use of semantic resources as well as semantically annotated corpora (SC = SemCor, DSO = Defence Science Organisation Corpus, SE = Senseval corpora, OMWE = Open Mind Word Expert, XWN = eXtended Word-Net, WN = WordNet glosses and/or relations, WND = WordNet Domains), as well as information about the use of unannotated corpora (UC), training (TR), MFS (based on the SemCor sense frequencies), and the coarse senses provided by the organizers (CS). As expected, several systems used lexico-semantic information from the WordNet semantic network and/or were trained on the SemCor semanticallyannotated corpus.",
"cite_spans": [],
"ref_spans": [
{
"start": 165,
"end": 172,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Systems Description",
"sec_num": "5"
},
{
"text": "Finally, we point out that all the systems performing better than the MFS baseline adopted it as a backoff strategy when they were not able to output a sense assignment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Systems Description",
"sec_num": "5"
},
{
"text": "It is commonly agreed that Word Sense Disambiguation needs emerge and show its usefulness in endto-end applications: after decades of research in the field it is still unclear whether WSD can provide a relevant contribution to real-world applications, such as Information Retrieval, Question Answering, etc. In previous Senseval evaluation exercises, stateof-the-art systems achieved performance far below 70% and even the agreement between human annotators was discouraging. As a result of the discussion at the Senseval-3 workshop in 2004, one of the aims of SemEval-2007 was to tackle the problems at the roots of WSD. In this task, we dealt with the granularity issue which is a major obstacle to both system and human annotators. In the hope of overcoming the current performance upper bounds, we",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Directions",
"sec_num": "6"
},
{
"text": "http://en.wikipedia.org/wiki/Computer programming 2 http://www.gutenberg.org/etext/529",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "d005.s004.t015",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was partially funded by the Interop NoE (508011), 6 th European Union FP. We would like to thank Martha Palmer for providing us the first three texts of the test corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": " Table 7 : Information about participating systems (SC = SemCor, DSO = Defence Science Organisation Corpus, SE = Senseval corpora, OMWE = Open Mind Word Expert, XWN = eXtended WordNet, WN = WordNet glosses and/or relations, WND = WordNet Domains, UC = use of unannotated corpora, TR = use of training, MFS = most frequent sense backoff strategy, CS = use of coarse senses from the organizers, \u2020 : system from one of the task organizers).proposed the adoption of a coarse-grained sense inventory. We found the results of participating systems interesting and stimulating. However, some questions arise. First, it is unclear whether, given the novelty of the task, systems really achieved the state of the art or can still improve their performance based on a heavier exploitation of coarse-and finegrained information from the adopted sense inventory. We observe that, on a technical domain such as computer science, most supervised systems performed worse due to the nature of their training set. Second, we still need to show that coarse senses can be useful in real applications. Third, a full coarse sense inventory is not yet available: this is a major obstacle to large-scale in vivo evaluations. We believe that these aspects deserve further investigation in the years to come.",
"cite_spans": [],
"ref_spans": [
{
"start": 1,
"end": 8,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "annex",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Building a sense tagged corpus with open mind word expert",
"authors": [
{
"first": "Tim",
"middle": [],
"last": "Chklovski",
"suffix": ""
},
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of ACL 2002 Workshop on WSD: Recent Successes and Future Directions",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tim Chklovski and Rada Mihalcea. 2002. Building a sense tagged corpus with open mind word expert. In Proc. of ACL 2002 Workshop on WSD: Recent Successes and Future Di- rections. Philadelphia, PA.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Introduction to the special issue on evaluating word sense disambiguation systems",
"authors": [
{
"first": "Philip",
"middle": [],
"last": "Edmonds",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Kilgarriff",
"suffix": ""
}
],
"year": 2002,
"venue": "Journal of Natural Language Engineering",
"volume": "8",
"issue": "4",
"pages": "279--291",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philip Edmonds and Adam Kilgarriff. 2002. Introduction to the special issue on evaluating word sense disambiguation sys- tems. Journal of Natural Language Engineering, 8(4):279- 291.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "WordNet: an Electronic Lexical Database",
"authors": [],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christiane Fellbaum, editor. 1998. WordNet: an Electronic Lexical Database. MIT Press.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Ontonotes: The 90% solution",
"authors": [
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Mitchell",
"middle": [],
"last": "Marcus",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Lance",
"middle": [],
"last": "Ramshaw",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Weischedel",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Human Language Technology Conference of the NAACL, Comp. Volume",
"volume": "",
"issue": "",
"pages": "57--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eduard Hovy, Mitchell Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel. 2006. Ontonotes: The 90% solution. In Proceedings of the Human Language Tech- nology Conference of the NAACL, Comp. Volume, pages 57- 60, New York City, USA.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A semantic concordance",
"authors": [
{
"first": "George",
"middle": [
"A"
],
"last": "Miller",
"suffix": ""
},
{
"first": "Claudia",
"middle": [],
"last": "Leacock",
"suffix": ""
},
{
"first": "Randee",
"middle": [],
"last": "Tengi",
"suffix": ""
},
{
"first": "Ross",
"middle": [
"T"
],
"last": "Bunker",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of the ARPA Workshop on Human Language Technology",
"volume": "",
"issue": "",
"pages": "303--308",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George A. Miller, Claudia Leacock, Randee Tengi, and Ross T. Bunker. 1993. A semantic concordance. In Proceedings of the ARPA Workshop on Human Language Technology, pages 303-308, Princeton, NJ, USA.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Structural semantic interconnections: a knowledge-based approach to word sense disambiguation",
"authors": [
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
},
{
"first": "Paola",
"middle": [],
"last": "Velardi",
"suffix": ""
}
],
"year": 2005,
"venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI)",
"volume": "27",
"issue": "",
"pages": "1063--1074",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roberto Navigli and Paola Velardi. 2005. Structural seman- tic interconnections: a knowledge-based approach to word sense disambiguation. IEEE Transactions on Pattern Analy- sis and Machine Intelligence (PAMI), 27(7):1063-1074.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Meaningful clustering of senses helps boost word sense disambiguation performance",
"authors": [
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. of the 44th Annual Meeting of the Association for Computational Linguistics joint with the 21st International Conference on Computational Linguistics (COLING-ACL 2006)",
"volume": "",
"issue": "",
"pages": "105--112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roberto Navigli. 2006. Meaningful clustering of senses helps boost word sense disambiguation performance. In Proc. of the 44th Annual Meeting of the Association for Computa- tional Linguistics joint with the 21st International Confer- ence on Computational Linguistics (COLING-ACL 2006), pages 105-112. Sydney, Australia.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "The english allwords task",
"authors": [
{
"first": "Benjamin",
"middle": [],
"last": "Snyder",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. of ACL 2004 SENSEVAL-3 Workshop",
"volume": "",
"issue": "",
"pages": "41--43",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benjamin Snyder and Martha Palmer. 2004. The english all- words task. In Proc. of ACL 2004 SENSEVAL-3 Workshop, pages 41-43. Barcelona, Spain.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Oxford Dictionary of English",
"authors": [],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Catherine Soanes and Angus Stevenson, editors. 2003. Oxford Dictionary of English. Oxford University Press.",
"links": null
}
},
"ref_entries": {
"TABREF2": {
"type_str": "table",
"text": "System scores sorted by F1 measure with MFS adopted as a backoff strategy when no sense assignment is attempted ( \u2020 : system from one of the task organizers). Systems affected are marked in bold.",
"num": null,
"html": null,
"content": "<table><tr><td>System</td><td>N</td><td>V</td><td>A</td><td>R</td></tr><tr><td>NUS-PT</td><td colspan=\"4\">82.31 78.51 85.64 89.42</td></tr><tr><td>NUS-ML</td><td colspan=\"4\">81.41 78.17 82.60 90.38</td></tr><tr><td>LCC-WSD</td><td colspan=\"4\">80.69 78.17 85.36 87.98</td></tr><tr><td>GPLSI</td><td colspan=\"4\">80.05 74.45 82.32 86.54</td></tr><tr><td>BLMF S</td><td colspan=\"4\">77.44 75.30 84.25 87.50</td></tr><tr><td>UPV-WSD</td><td colspan=\"4\">79.33 72.76 84.53 81.25</td></tr><tr><td>TKB-UO</td><td colspan=\"4\">70.76 62.61 78.73 74.04</td></tr><tr><td>PU-BCD</td><td colspan=\"4\">71.41 59.69 66.57 55.67</td></tr><tr><td colspan=\"5\">RACAI-SYNWSD 64.02 62.10 71.55 75.00</td></tr><tr><td>SUSSX-FR</td><td colspan=\"4\">68.09 51.02 57.38 49.38</td></tr><tr><td>USYD</td><td colspan=\"4\">56.06 60.43 58.00 54.31</td></tr><tr><td>UOFL</td><td colspan=\"4\">57.65 48.82 25.87 60.80</td></tr><tr><td>SUSSX-C-WD</td><td colspan=\"4\">52.18 35.64 42.95 46.30</td></tr><tr><td>SUSSX-CR</td><td colspan=\"4\">51.87 35.44 42.95 46.30</td></tr><tr><td>UOR-SSI \u2020</td><td colspan=\"4\">84.12 78.34 85.36 88.46</td></tr></table>"
},
"TABREF3": {
"type_str": "table",
"text": "88.32 88.32 88.13 88.13 83.40 83.40 76.07 76.07 81.45 81.45 NUS-ML 86.14 86.14 88.39 88.39 81.40 81.40 76.66 76.66 79.13 79.13 LCC-WSD 87.50 87.50 87.60 87.60 81.40 81.40 75.48 75.48 80.00 80.00 GPLSI 83.42 83.42 86.54 86.54 80.40 80.40 73.71 73.71 77.97 77.97 SYNWSD 71.47 71.47 72.82 72.82 66.80 66.80 60.86 60.86 59.71 59.71 SUSSX-FR 79.10 57.61 73.72 53.30 74.86 52.40 67.97 48.89 65.20 51.59 USYD 62.53 61.69 59.78 57.26 60.97 57.80 60.57 56.28 47.15 45.51 UOFL 61.41 59.24 55.93 52.24 48.00 45.60 53.42 47.27 44.38 41.16 SUSSX-C-WD 66.42 48.37 61.31 44.33 55.14 38.60 50.72 36.48 42.13 33.33 SUSSX-CR 66.05 48.10 60.58 43.80 59.14 41.40 48.67 35.01 40.29 31.88 UOR-SSI \u2020 86.14 86.14 85.49 85.49 79.60 79.60 86.85 86.85 75.65 75.65",
"num": null,
"html": null,
"content": "<table><tr><td/><td>d001</td><td/><td>d002</td><td/><td>d003</td><td/><td>d004</td><td/><td>d005</td><td/></tr><tr><td>System</td><td>P</td><td>R</td><td>P</td><td>R</td><td>P</td><td>R</td><td>P</td><td>R</td><td>P</td><td>R</td></tr><tr><td>NUS-PT</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>BL M F S</td><td colspan=\"10\">85.60 85.60 84.70 84.70 77.80 77.80 75.19 75.19 74.20 74.20</td></tr><tr><td>UPV-WSD</td><td colspan=\"10\">84.24 84.24 80.74 80.74 76.00 76.00 77.11 77.11 77.10 77.10</td></tr><tr><td>TKB-UO</td><td colspan=\"10\">78.80 78.80 72.56 72.56 69.40 69.40 70.75 70.75 58.55 58.55</td></tr><tr><td>PU-BCD</td><td colspan=\"10\">77.16 67.94 75.52 67.55 64.96 58.20 68.86 61.74 64.42 60.87</td></tr><tr><td>RACAI-</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr></table>"
}
}
}
}