|
{ |
|
"paper_id": "S01-1001", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:35:32.485456Z" |
|
}, |
|
"title": "SENSEV AL-2: Overview", |
|
"authors": [ |
|
{ |
|
"first": "Philip", |
|
"middle": [], |
|
"last": "Edmonds", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
},

{

"first": "Scott",

"middle": [],

"last": "Cotton",

"suffix": "",

"affiliation": {},

"email": "cotton@linc.cis.upenn.edu"

}
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "SENSEV AL-2: The Second International Workshop on Evaluating Word Sense Disambiguation Systems was held on July 5-6, 2001. This paper gives an overview of SENSEV AL-2, discussing the evaluation exercise, the tasks, the scoring system, and the results. It ends with some recommendations for future evaluation exercises.", |
|
"pdf_parse": { |
|
"paper_id": "S01-1001", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "SENSEV AL-2: The Second International Workshop on Evaluating Word Sense Disambiguation Systems was held on July 5-6, 2001. This paper gives an overview of SENSEV AL-2, discussing the evaluation exercise, the tasks, the scoring system, and the results. It ends with some recommendations for future evaluation exercises.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Word sense disambiguation (WSD) is the problem of automatically deciding which sense a word has in any particular context. The success of any project in WSD is clearly tied to the evaluation of WSD systems. SENSEV AL was started in 1997, under the auspices of ACL-SIGLEX, to bring together researchers to discuss and solve the WSD-evaluation problem. Its aim is to evaluate the strengths and weaknesses of WSD algorithms and systems with respect to different words, different varieties of language, and different languages.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "SENSEV AL is independent from other evaluation programs in the language technology community, such as TREC and MUC. Unlike these programs, SENSEV AL is a 'freelance' program is run entirely by volunteers. We'd like to remind everyone that while SENSEV AL takes the guise of a competition, its main function is not to determine a winner but to explore the scientific aspects of word sense disambiguation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "SENSEV AL held its first evaluation exercise in the summer of 1998, culminating in a workshop at Herstmonceux Castle, England on September 2-4 (Kilgarriff and Palmer 2000). Following the success of the first workshop, SENSEV AL-2, supported by EURALEX, ACL-2001 on July 5-6, 2001 in Toulouse.", |
|
"cite_spans": [ |
|
{ |
|
"start": 253, |
|
"end": 261, |
|
"text": "ACL-2001", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 262, |
|
"end": 279, |
|
"text": "on July 5-6, 2001", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "This paper gives an overview of SENSEV AL-2, discussing the evaluation exercise, the tasks, the scoring system, and the results. It ends with some recommendations for future evaluation exercises.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "A main goal of SENSEV AL-2 was to encourage new languages to participate. We were successful: SENSEV AL-2 evaluated WSD systems on three types of task on 12 languages as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tasks and participants", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "All-words Czech, Dutch, English, Estonian Lexical", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tasks and participants", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Basque, English, Italian, sample Japanese, Korean, Spanish, Swedish Translation Japanese", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tasks and participants", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In the all-words task, systems must tag almost all of the content words in a sample of running text. In the lexical sample task, we first carefully select a sample of words from the lexicon; systems must then tag several instances of the sample words in short extracts of text. The translation task (Japanese only) is a lexical sample task in which word sense is defined according to translation distinction. Task design is discussed in section 3 below. 93 systems were submitted from 34 different research teams. Table 1 gives a breakdown of the number of submissions and teams who participated in each task. Note that some teams submitted multiple systems to the same task, and some submitted systems to multiple tasks.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 514, |
|
"end": 521, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Tasks and participants", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Several tasks had no submissions: the Chinese and Danish tasks could not find enough time to complete the data in time for the exercise, and the available Dutch data was misplaced in the process of making it public. The Dutch data is available, and the Chinese and Danish data will be prepared in due course.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tasks and participants", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "No A task in SENSEV AL consists of three types of data: 1) A lexicon of word-to-sense mappings, with possibly extra information to explain, define, or distinguish the senses (e.g., WordNet); 2) A corpus of manually tagged text or samples of text that acts as the Gold Standard, and that is split into an optional training corpus and test corpus; and 3) An optional sense hierarchy or sense grouping to allow for fine or coarse grained sense distinctions to be used in scoring. Regardless of the type of task, each system is required to tag the words specified in the test corpus with one or more tags in the lexicon. Supervised systems can train on the training corpus, if one is available.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Language Task", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The SENSEV AL committee issued general guidelines for designing a task (Edmonds 2000). But it was up to the individual task organisers, to design their own tasks since each had different constraints on resource availability (both human and data). Everyone, however, used a common XML data encoding format developed for SENSEV AL-2.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Language Task", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Specific issues in choosing and designing the resources for each task are described in the papers in this proceedings, and, more generally, by Kilgarriff and Rosenzweig (2000) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 143, |
|
"end": 175, |
|
"text": "Kilgarriff and Rosenzweig (2000)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Each task organiser chose the lexicon for their task. Notably, WordNet was used for the first time in SENSEV AL.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lexicon and lexical samples", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Version 1.7 for the English tasks, and versions of Euro WordNet for Spanish, Italian, and Estonian.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lexicon and lexical samples", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "For the lexical sample tasks, the guidelines suggests that words be chosen from different parts of speech, different frequencies in the corpus, and different polysemies (i.e., number of senses). The number of words depended on the available resources. The sample words were kept secret from the wider community until the training data was released; however, the organisers consulted each other so that translations of some of the sample words could be used across tasks.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lexicon and lexical samples", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "For the all-words tasks, the guidelines suggest that at least 5000 words of running text be selected, and that all content words be tagged.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tagged corpora", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "For the lexical sample tasks, it was suggested that for each sample word, at least 75+15n corpus instances be chosen, where n is the number of senses of the word. Again, lack of resources might have precluded this much tagged data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tagged corpora", |
|
"sec_num": "3.2" |
|
}, |
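The following is a minimal illustrative sketch (not part of the original paper) of the 75+15n sizing rule described above; the sample words and sense counts are hypothetical.

```python
# Sketch of the suggested lexical-sample sizing rule: at least 75 + 15n
# tagged instances per sample word, where n is the word's number of senses.
# The words and sense counts below are hypothetical, for illustration only.

def suggested_instances(num_senses: int) -> int:
    """Minimum number of corpus instances suggested for a word with num_senses senses."""
    return 75 + 15 * num_senses

sample_words = {"bar": 13, "art": 4, "channel": 7}  # hypothetical sense counts

for word, n in sample_words.items():
    print(f"{word}: {n} senses -> at least {suggested_instances(n)} instances")
```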
|
{ |
|
"text": "The Gold Standard corpus must be replicable; the goal is to have human taggers agree at least 90% of the time. Thus, at least two human taggers were required to tag every instance of a word. Taggers are allowed to tag with multiple tags and to use special tags for proper names, and unassignable senses. See the papers in this proceedings for more details.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tagged corpora", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "For the evaluation, the corpus had to be divided into a training set and a test set. The training set is a random subset of the Gold Standard corpus, which allows supervised systems to train. Not all tasks supplied training data, so only 'unsupervised' systems could participate (e.g., in the English all-words taskalthough many systems trained on other corpora such as Semcor). The test set is the rest of the corpus, with tags removed, on which the systems would be evaluated. It was suggested that a 2:1 ratio of training to test data be used. Although somewhat different from what is normally used in machine learning, the committee felt that having more te~t data would give a more realistic indication of a system's performance (since more varied contexts per word would be tested), and, moreover, unsupervised systems would be less 'short-changed'.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tagged corpora", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "All data sets are now in the public domain (on the SENSEV AL website).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tagged corpora", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Since some sense inventories are two finegrained for plausible sense disambiguation, the scoring program can take into account sense hierarchies or sense groupings. Optionally, a task could provide such a grouping of senses, so that choosing any sense within the group or higher in the hierarchy would count towards a system's overall score.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sense groupings", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "For example, the WordNet hierarchy was used for English nouns, whereas a separate 'grouping' was specially constructed for the English verbs (since the verbs do not have a useful hierarchy in WordNet for scoring purposes). See the paper on the English tasks for more detail.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sense groupings", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "All tasks used a specially defined common data format for encoding the tagged and untagged corpus examples. Specifically, it accommodated the multi-lingual nature of the data by using an XML document type definition which allowed for a flexible mapping from lexical items to their textual instances. Using XML also allowed for arbitrary character encodings in the corpora. The structure was designed so that individual instances of lexical items could be associated with multiple sense tags, and allowed for discontinuous phrasal lexical items. It did not, however allow for multiple phrasal items with overlapping portions in the surface string.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Common data fonnat", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "Another requirement was simplicity. This quality would not only facilitate the logistics of designing a task, but would also ease any hand annotation that may have been necessary. As a result, a standoff annotation system was not feasible. This restricted the format in such a way as to limit the feasibility of embedding extant annotation of the corpora and to require that participants use standoff annotation in submitting their answers for reasons of space efficiency.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Common data fonnat", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "The use of the common data format simplified many system's participation in multiple tasks, consequently furthering research into the comparison of WSD in different languages.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "3", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Evaluation procedure", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "4", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The evaluation was run centrally from a single website at the University of Pennsylvania and followed the same procedure as used in the first SENSEV AL. For each task, data was released in three stages:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "4", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 Trial data: A small set of data so that participants can design their systems to use the data formats. No 'real' data was released. \u2022 Training data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "4", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 Test data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "4", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Each team would register their system, and then download the data sets according to the schedule. After running their system on the test data, each team submitted their answers to the website for automatic scoring. Each team's results were returned to the team before the workshop, but the overall results were unveiled at the workshop.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "4", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "A schedule was set up for task organisers to prepare and submit their data to the central website, while participants followed a separate, more rigid (and in the end very tight), schedule for downloads and submissions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Schedule", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Task organisers started preparing their data as far back as September 2000, but the real push occurred in the three months proceeding the competition period.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Schedule", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The competition period ran April 17-June 18.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Schedule", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Within this period, each task had a critical window defined to be the period from when the training data was first made available to the last day for answer submissions to that task. The critical window had to be a minimum of 21 days.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Schedule", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Participants could download and submit answers at any time during the critical window of a particular task, subject to the following constraints. A submission of answers must:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Schedule", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "\u2022 not have occurred more than 7 days after downloading the test data,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Schedule", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "\u2022 not have occurred more than 21 days after downloading the training data, and \u2022 have occurred before the end of the critical window for the particular task This set up allowed participants to have sufficient time to participate in several tasks over the whole competition period, while ensuring that on any particular task, a participant had a maximum of one week to run their system (and 3 weeks to train their system), which we hope did not give any time for tailoring systems to the specific words or the corpora of the competition.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Schedule", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Data for the tasks was distributed via a website at University of Pennsylvania Participants were required to register for tasks in order to download the trial, training, and test data for the tasks, and to upload their answers. Each of these operations required authentication via a password chosen at the time of registration. Additionally, timestamps were recorded for each of these operations in order to enforce the timing constraints on a per-participant basis. The system was not secure, as a participant could register multiple times under different names and use the data from the first registration to perform the task at hand. However, there were no signs of security problems in the use of the website.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data distribution", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Use of the distribution center was recommended, not required, of the task organizers. All the tasks with the exception of the Japanese tasks used the distribution center. A nice by-product of this process in concert with the common data format was the development an overarching organization of all the SENSEV AL data, which is evident in the data available to the public domain.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data distribution", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "The same answer format and scoring program was used for SENSEV AL-2 as was used in the first SENSEV AL.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Scoring and evaluation", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Systems were allowed to tag a word with as many senses as appropriate, giving probabilities, if desired. If the task had a sense hierarchy or grouping, then fine-and coarse-grained scoring was done. In fine-grained scoring, a system had to give at least one of the Gold Standard senses.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Scoring and evaluation", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "In coarse-grained scoring, all senses in the answer key and in system output are collapsed to their highest parent or group identifier. For sense hierarchies, mixed-grained scoring was also done: a system is given partial credit for choosing a sense that is a parent of the required sense according to Melamed and Resnik's (1997) scheme.", |
|
"cite_spans": [ |
|
{ |
|
"start": 302, |
|
"end": 329, |
|
"text": "Melamed and Resnik's (1997)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Scoring and evaluation", |
|
"sec_num": "4.3" |
|
}, |
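Below is a minimal illustrative sketch (not part of the original paper) of the coarse-grained collapse described above: key and answer senses are mapped to their group identifiers before comparison. The sense labels and grouping are hypothetical, and the actual scorer also handles hierarchies and the partial-credit mixed-grained scheme.

```python
# Sketch of coarse-grained scoring: collapse Gold Standard and system senses
# to group identifiers, and count an instance as correct if any collapsed
# answer sense matches a collapsed key sense. The grouping is hypothetical.

sense_group = {"art%1": "G1", "art%2": "G1", "art%3": "G2"}  # sense -> group id

def coarse_correct(key_senses, answer_senses, groups):
    """True if any system sense falls in the same group as any key sense."""
    key = {groups.get(s, s) for s in key_senses}
    ans = {groups.get(s, s) for s in answer_senses}
    return bool(key & ans)

print(coarse_correct({"art%1"}, {"art%2"}, sense_group))  # True: both collapse to G1
print(coarse_correct({"art%1"}, {"art%3"}, sense_group))  # False: G1 vs G2
```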
|
{ |
|
"text": "Systems were not required to tag all instances of a word, or even all words, thus, as in SENSEV AL-l, we used precision and recall to score the systems, although the metrics are not completely analogous to IR evaluation. Recall (percentage of right answers on all instances in the test set) is the basic measurement of accuracy in this task, because it shows how many correct disambiguations the system achieved overall. Precision (percentage of right answers in the set of answered instances) favours systems that are very accurate if only on a small subset of cases that the system chose to give answers to; the cases might be particularly easy to disambiguate, but this can be determined by comparing the answers to the baseline on the same subset (a type of analysis that has yet to be done). Coverage, the percentage of instances that a system gives any answer to, is also reported. Where available, baseline and intertagger agreement numbers are given.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Scoring and evaluation", |
|
"sec_num": "4.3" |
|
}, |
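A minimal illustrative sketch (not from the paper) of the three reported figures as defined above, treating each instance as simply right or wrong; the counts are hypothetical, and the actual scorer also handles multiple weighted answers per instance.

```python
# Sketch of the reported measures as defined in the text:
#   recall    = correct answers / all instances in the test set
#   precision = correct answers / instances the system answered
#   coverage  = instances the system answered / all instances in the test set
# The counts below are hypothetical.

def score(correct: int, attempted: int, total: int):
    """Return (precision, recall, coverage) for simple right/wrong answers."""
    precision = correct / attempted if attempted else 0.0
    recall = correct / total
    coverage = attempted / total
    return precision, recall, coverage

p, r, c = score(correct=600, attempted=800, total=1000)
print(f"precision={p:.2f} recall={r:.2f} coverage={c:.2f}")
```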
|
{ |
|
"text": "No further data analysis was done. Thus, the question of who 'won' depends on your perspective, but, in fact, that is not the relevant question. The important thing is to examine how each system achieved the performance that it shows. Some of this analysis is given in the papers of this proceedings. (Note that in the results, where appropriate, we distinguished between supervised and unsupervised systems.)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Scoring and evaluation", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "When the results were unveiled at the workshop, it soon became apparent that bugs in the scoring software had potentially affected the results. It was decided by everyone present (on the first day) that all systems should be rescored. Also, owing to the tight schedule, some teams had made serious inadvertent errors in formatting their answers. Thus, it was also agreed that any team could resubmit their (corrected) answers before 31 July 2001. In so doing, the team would have to include an explanation about the modifications and only reasons of 'egregious' bugs would be allowed.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Scoring and evaluation", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "The official results list all original submissions scored with the debugged scorer, and all of the resubmissions, clearly identified. This compromise maintains the professionalism of SENSEV AL, as it does not devalue any team that met the original deadline, while encouraging the scientific purpose of the exercise.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Scoring and evaluation", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Because the results were released so close.to the workshop, there had been no time for detatled analysis. Thus, the workshop was structured around a series of panels about WSD and evaluation. Panels were held on domain-specific disambiguation, task design for new languages to SENSEV AL, sense distinctions, applications of WSD, and standardizing WordNets. Ideally, the majority of the workshop content should have been about the analysis of WSD algorithms, so the major recommendation for future exercises is to allow at least one month for analysis before the workshop. Part of this recommendation is to have a proceedings at the workshop, rather than post-workshop as this one. A related recommendation is to gather information about systems (e.g., supervised I unsupervised, knowledge source, etc.) as they are registered.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recommendations", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Second, the use of different granularities and groupings for the lexicons in question yielded some unnecessary inconsistency across tasks. For example, the English tasks used a grouping which invalidated the mixed-grained scores, whereas the Swedish task used a hierarchy which yielded vacuous coarse-grained scores. This is actually a central issue in WSD, which should be addressed before the next SENSEV AL exercise.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recommendations", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The data from SENSEV AL-2 should be invaluable in this research.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recommendations", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Finally, it was felt by some that the SENSEV AL organization up to now has been somewhat autocratic, which is true. This might have been suitable in the past, but we would all like SENSEV AL to become as open and scientifically professional an activity as possible, without sacrificing its grassroots quality. Notably, it's the only 'freelance' evaluation activity in the computational linguistics community, and so we recommend that a more democratic organization should be sought,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recommendations", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "which should include an official executive committee to oversee the future of SENSEV AL.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "5", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Many people contributed to SENSEV AL-2. The preface to this volume acknowledges everyone's contributions. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": "4" |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": {}, |
|
"ref_entries": { |
|
"TABREF0": { |
|
"html": null, |
|
"content": "<table><tr><td/><td/><td colspan=\"4\">The Second International</td></tr><tr><td>Workshop</td><td colspan=\"5\">on Evaluating Word Sense</td></tr><tr><td colspan=\"2\">Disambiguation</td><td>Systems</td><td>was</td><td>held</td><td>in</td></tr><tr><td colspan=\"2\">conjunction with</td><td/><td/><td/></tr></table>", |
|
"num": null, |
|
"type_str": "table", |
|
"text": "Scott Cotton Computer and Information ScienceUniversity of Pennsylvania Philadelphia, PA 19104, USA cotton@ linc.cis.upenn.edu ELSNET, EPSRC, and ELRA, was organized in 2000-2001." |
|
}, |
|
"TABREF2": { |
|
"html": null, |
|
"content": "<table><tr><td colspan=\"2\">5 References</td><td/><td/></tr><tr><td colspan=\"4\">Phil Edmonds (2000). Designing a task for</td></tr><tr><td colspan=\"2\">SENSEVAL-2.</td><td colspan=\"2\">Technical Note. Senseval-2</td></tr><tr><td>website.</td><td/><td/><td/></tr><tr><td colspan=\"4\">Adam Kilgarriff and Martha Palmer (2000) Guest</td></tr><tr><td colspan=\"4\">editors. Special Issue on SENSEV AL: Evaluating</td></tr><tr><td>Word</td><td>Sense</td><td>Disambiguation</td><td>Programs.</td></tr><tr><td colspan=\"3\">Computers and the Humanities 34( 1-2).</td><td/></tr><tr><td colspan=\"4\">Adam Kilgarriff and Joseph Rosenzweig (2000)</td></tr><tr><td colspan=\"4\">Framework and results for English SENSEV AL.</td></tr><tr><td colspan=\"4\">Computers and the Humanities 34( 1-2):15-48.</td></tr><tr><td colspan=\"4\">Dan Melamed and Phil Resnik (2000) Tagger</td></tr><tr><td colspan=\"4\">evaluation given hierarchical tag sets. Computers</td></tr><tr><td colspan=\"3\">and the Humanities 34( 1-2).</td><td/></tr><tr><td colspan=\"2\">SENSEV AL Website:</td><td/><td/></tr><tr><td>http:/</td><td/><td/><td/></tr></table>", |
|
"num": null, |
|
"type_str": "table", |
|
"text": "/www.itri.bton.ac.uk/events/senseval SENSEV AL-2 Website: www.sle.sharp.co.uk/senseval2" |
|
} |
|
} |
|
} |
|
} |