|
{ |
|
"paper_id": "S07-1007", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:23:06.455823Z" |
|
}, |
|
"title": "SemEval-2007 Task 08: Metonymy Resolution at SemEval-2007", |
|
"authors": [ |
|
{ |
|
"first": "Katja", |
|
"middle": [], |
|
"last": "Markert", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Leeds", |
|
"location": { |
|
"country": "UK" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Malvina", |
|
"middle": [], |
|
"last": "Nissim", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Bologna", |
|
"location": { |
|
"country": "Italy" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We provide an overview of the metonymy resolution shared task organised within SemEval-2007. We describe the problem, the data provided to participants, and the evaluation measures we used to assess performance. We also give an overview of the systems that have taken part in the task, and discuss possible directions for future work.", |
|
"pdf_parse": { |
|
"paper_id": "S07-1007", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We provide an overview of the metonymy resolution shared task organised within SemEval-2007. We describe the problem, the data provided to participants, and the evaluation measures we used to assess performance. We also give an overview of the systems that have taken part in the task, and discuss possible directions for future work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Both word sense disambiguation and named entity recognition have benefited enormously from shared task evaluations, for example in the Senseval, MUC and CoNLL frameworks. Similar campaigns have not been developed for the resolution of figurative language, such as metaphor, metonymy, idioms and irony. However, resolution of figurative language is an important complement to and extension of word sense disambiguation as it often deals with word senses that are not listed in the lexicon. For example, the meaning of stopover in the sentence He saw teaching as a stopover on his way to bigger things is a metaphorical sense of the sense \"stopping place in a physical journey\", with the literal sense listed in WordNet 2.0 but the metaphorical one not being listed. 1 The same holds for the metonymic reading of rattlesnake (for the animal's meat) in Roast rattlesnake tastes like chicken. 2 Again, the meat read-ing of rattlesnake is not listed in WordNet whereas the meat reading for chicken is.", |
|
"cite_spans": [ |
|
{ |
|
"start": 765, |
|
"end": 766, |
|
"text": "1", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "As there is no common framework or corpus for figurative language resolution, previous computational works (Fass, 1997; Hobbs et al., 1993; Barnden et al., 2003, among others) carry out only smallscale evaluations. In recent years, there has been growing interest in metaphor and metonymy resolution that is either corpus-based or evaluated on larger datasets (Martin, 1994; Nissim and Markert, 2003; Mason, 2004; Peirsman, 2006; Birke and Sarkaar, 2006; Krishnakamuran and Zhu, 2007) . Still, apart from (Nissim and Markert, 2003; Peirsman, 2006) who evaluate their work on the same dataset, results are hardly comparable as they all operate within different frameworks.", |
|
"cite_spans": [ |
|
{ |
|
"start": 107, |
|
"end": 119, |
|
"text": "(Fass, 1997;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 120, |
|
"end": 139, |
|
"text": "Hobbs et al., 1993;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 140, |
|
"end": 175, |
|
"text": "Barnden et al., 2003, among others)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 360, |
|
"end": 374, |
|
"text": "(Martin, 1994;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 375, |
|
"end": 400, |
|
"text": "Nissim and Markert, 2003;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 401, |
|
"end": 413, |
|
"text": "Mason, 2004;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 414, |
|
"end": 429, |
|
"text": "Peirsman, 2006;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 430, |
|
"end": 454, |
|
"text": "Birke and Sarkaar, 2006;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 455, |
|
"end": 484, |
|
"text": "Krishnakamuran and Zhu, 2007)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 505, |
|
"end": 531, |
|
"text": "(Nissim and Markert, 2003;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 532, |
|
"end": 547, |
|
"text": "Peirsman, 2006)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "This situation motivated us to organise the first shared task for figurative language, concentrating on metonymy. In metonymy one expression is used to refer to the referent of a related one, like the use of an animal name for its meat. Similarly, in Ex. 1, Vietnam, the name of a location, refers to an event (a war) that happened there.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "(1) Sex, drugs, and Vietnam have haunted Bill Clinton's campaign.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In Ex. 2 and 3, BMW, the name of a company, stands for its index on the stock market, or a vehicle manufactured by BMW, respectively.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "(2) BMW slipped 4p to 31p", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "(3) His BMW went on to race at Le Mans", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The importance of resolving metonymies has been shown for a variety of NLP tasks, such as ma-chine translation (Kamei and Wakao, 1992) , question answering (Stallard, 1993) , anaphora resolution (Harabagiu, 1998; Markert and Hahn, 2002) and geographical information retrieval (Leveling and Hartrumpf, 2006) . Although metonymic readings are, like all figurative readings, potentially open ended and can be innovative, the regularity of usage for word groups helps in establishing a common evaluation framework. Many other location names, for instance, can be used in the same fashion as Vietnam in Ex. 1. Thus, given a semantic class (e.g. location), one can specify several regular metonymic patterns (e.g. place-for-event) that instances of the class are likely to undergo. In addition to literal readings, regular metonymic patterns and innovative metonymic readings, there can also be so-called mixed readings, similar to zeugma, where both a literal and a metonymic reading are evoked (Nunberg, 1995) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 111, |
|
"end": 134, |
|
"text": "(Kamei and Wakao, 1992)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 156, |
|
"end": 172, |
|
"text": "(Stallard, 1993)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 195, |
|
"end": 212, |
|
"text": "(Harabagiu, 1998;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 213, |
|
"end": 236, |
|
"text": "Markert and Hahn, 2002)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 276, |
|
"end": 306, |
|
"text": "(Leveling and Hartrumpf, 2006)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 990, |
|
"end": 1005, |
|
"text": "(Nunberg, 1995)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The metonymy task is a lexical sample task for English, consisting of two subtasks, one concentrating on the semantic class location, exemplified by country names, and another one concentrating on organisation, exemplified by company names. Participants had to automatically classify preselected country/company names as having a literal or non-literal meaning, given a four-sentence context. Additionally, participants could attempt finer-grained interpretations, further specifying readings into prespecified metonymic patterns (such as place-for-event) and recognising innovative readings.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We distinguish between literal, metonymic, and mixed readings for locations and organisations. In the case of a metonymic reading, we also specify the actual patterns. The annotation categories were motivated by prior linguistic research by ourselves (Markert and Nissim, 2006) , and others (Fass, 1997; Lakoff and Johnson, 1980) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 251, |
|
"end": 277, |
|
"text": "(Markert and Nissim, 2006)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 291, |
|
"end": 303, |
|
"text": "(Fass, 1997;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 304, |
|
"end": 329, |
|
"text": "Lakoff and Johnson, 1980)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotation Categories", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Literal readings for locations comprise locative (Ex. 4) and political entity interpretations (Ex. 5). 4coral coast of Papua New Guinea.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Locations", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "(5) Britain's current account deficit.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Locations", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "-place-for-people a place stands for any persons/organisations associated with it. These can be governments (Ex. 6), affiliated organisations, incl. sports teams (Ex. 7), or the whole population (Ex. 8).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Metonymic readings encompass four types:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Often, the referent is underspecified (Ex. 9).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Metonymic readings encompass four types:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(6) America did once try to ban alcohol. 7England lost in the semi-final. 8[. . . ] the incarnation was to fulfil the promise to Israel and to reconcile the world with God. 9The G-24 group expressed readiness to provide Albania with food aid.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Metonymic readings encompass four types:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "-place-for-event a location name stands for an event that happened in the location (see Ex. 1).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Metonymic readings encompass four types:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "-place-for-product a place stands for a product manufactured in the place, as Bordeaux in Ex. 10.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Metonymic readings encompass four types:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(10) a smooth Bordeaux that was gutsy enough to cope with our food -othermet a metonymy that does not fall into any of the prespecified patterns, as in Ex. 11, where New Jersey refers to typical local tunes.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Metonymic readings encompass four types:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The thing about the record is the influences of the music. The bottom end is very New York/New Jersey and the top is very melodic.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Metonymic readings encompass four types:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "When two predicates are involved, triggering a different reading each (Nunberg, 1995) , the annotation category is mixed. In Ex. 12, both a literal and a place-for-people reading are involved.", |
|
"cite_spans": [ |
|
{ |
|
"start": 70, |
|
"end": 85, |
|
"text": "(Nunberg, 1995)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Metonymic readings encompass four types:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(12) they arrived in Nigeria, hitherto a leading critic of [. . . ]", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Metonymic readings encompass four types:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The literal reading for organisation names describes references to the organisation in general, where an organisation is seen as a legal entity, which consists of organisation members that speak with a collective voice, and which has a charter, statute or defined aims. Examples of literal readings include (among others) descriptions of the structure of an organisation (see Ex. 13), associations between organisations (see Ex. 14) or relations between organisations and products/services they offer (see Ex. 15). -org-for-product the name of a commercial organisation can refer to its products, as in Ex. 3.", |
|
"cite_spans": [ |
|
{ |
|
"start": 425, |
|
"end": 432, |
|
"text": "Ex. 14)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Organisations", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "-org-for-facility organisations can also stand for the facility that houses the organisation or one of its branches, as in the following example.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Organisations", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "The opening of a McDonald's is a major event -org-for-index an organisation name can be used for an index that indicates its value (see Ex. 2).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Organisations", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "-othermet a metonymy that does not fall into any of the prespecified patterns, as in Ex. 20, where Barclays Bank stands for an account at the bank. Mixed readings exist for organisations as well. In Ex. 21, both an org-for-index and an org-formembers pattern are invoked.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Organisations", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Barclays slipped 4p to 351p after confirming 3,000 more job losses.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "(21)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Apart from class-specific metonymic readings, some patterns seem to apply across classes to all names. In the SemEval dataset, we annotated two of them.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Class-independent categories", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "object-for-name all names can be used as mere signifiers, instead of referring to an object or set of objects. In Ex. 22, both Chevrolet and Ford are used as strings, rather than referring to the companies.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Class-independent categories", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Chevrolet is feminine because of its sound (it's a longer word than Ford, has an open vowel at the end, connotes Frenchness).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "(22)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "object-for-representation a name can refer to a representation (such as a photo or painting) of the referent of its literal reading. In Ex. 23, Malta refers to a drawing of the island when pointing to a map.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "(22)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We used the CIA Factbook 3 and the Fortune 500 list as sampling frames for country and company names respectively. All occurrences (including plural forms) of all names in the sampling frames were extracted in context from all texts of the BNC, Version 1.0. All samples extracted are coded in XML and contain up to four sentences: the sentence in which the country/company name occurs, two before, and one after. If the name occurs at the beginning or end of a text the samples may contain less than four sentences. For both the location and the organisation subtask, two random subsets of the extracted samples were selected as training and test set, respectively. Before metonymy annotation, samples that were not understood by the annotators because of insufficient context were removed from the datsets. In addition, a sample was also removed if the name extracted was a homonym not in the desired semantic class (for example Mr. Greenland when annotating locations). 4 For those names that do have the semantic class location or organisation, metonymy annotation was performed, using the categories described in Section 2. All training set annotation was carried out independently by both organisers. Annotation was highly reliable with a kappa (Carletta, 1996) of .88/.89 for locations/organisations. 5 As agreement was established, annotation of the test set was carried out by the first organiser. All cases which were not entirely straightforward were then independently checked by the second organiser. Samples whose readings could not be agreed on (after a reconciliation phase) were excluded from both training and test set. The reading distributions of training and test sets for both subtasks are shown in Tables 1 and 2 . In addition to a simple text format including only the metonymy annotation, we provided participants with several linguistic annotations of both training and testset. This included the original BNC tokenisation and part-of-speech tags as well as manually annotated dependency relations for each annotated name (e.g. BMW subj-of-slip for Ex. 2).", |
|
"cite_spans": [ |
|
{ |
|
"start": 972, |
|
"end": 973, |
|
"text": "4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1250, |
|
"end": 1266, |
|
"text": "(Carletta, 1996)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 1307, |
|
"end": 1308, |
|
"text": "5", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1720, |
|
"end": 1734, |
|
"text": "Tables 1 and 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "(23) This is Malta 3 Data Collection and Annotation", |
|
"sec_num": null |
|
}, |
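The kappa statistic above corrects raw agreement for chance. As a minimal sketch of the computation (the function and any toy label lists fed to it are our own illustration, not part of the released data or scoring tools):

```python
from collections import Counter

def cohen_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two annotators (Carletta, 1996)."""
    n = len(labels_a)
    # Observed agreement: fraction of samples both annotators label identically.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement: probability of coinciding by chance, estimated from
    # each annotator's marginal distribution over reading categories.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in freq_a)
    return (p_o - p_e) / (1 - p_e)
```

On the actual training sets this statistic was .88 for locations and .89 for organisations, as reported above.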
|
{ |
|
"text": "Teams were allowed to participate in the location or organisation task or both. We encouraged supervised, semi-supervised or unsupervised approaches.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Submission and Evaluation", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Systems could be tailored to recognise metonymies at three different levels of granu-larity: coarse, medium, or fine, with an increasing number and specification of target classification categories, and thus difficulty. At the coarse level, only a distinction between literal and non-literal was asked for; medium asked for a distinction between literal, metonymic and mixed readings; fine needed a classification into literal readings, mixed readings, any of the class-dependent and class-independent metonymic patterns (Section 2) or an innovative metonymic reading (category othermet).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Submission and Evaluation", |
|
"sec_num": "4" |
|
}, |
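Since the three levels nest, a system's fine-grained output fully determines its medium and coarse answers. A minimal sketch of that projection, assuming the label strings of Section 2 (the helper names are ours):

```python
def to_medium(fine_label):
    # Medium granularity distinguishes literal, metonymic, and mixed readings.
    if fine_label in ("literal", "mixed"):
        return fine_label
    # Any regular, class-independent, or innovative (othermet) pattern.
    return "metonymic"

def to_coarse(fine_label):
    # Coarse granularity only separates literal from non-literal readings.
    return "literal" if fine_label == "literal" else "non-literal"
```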
|
{ |
|
"text": "Systems were evaluated via accuracy (acc) and coverage (cov), allowing for partial submissions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Submission and Evaluation", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "acc = # correct predictions # predictions cov = # predictions # samples", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Submission and Evaluation", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "For each target category c we also measured:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Submission and Evaluation", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "precision c = # correct assignments of c # assignments of c recall c = # correct assignments of c # dataset instances of c f score c = 2precisioncrecallc precisionc+recallc", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Submission and Evaluation", |
|
"sec_num": "4" |
|
}, |
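A minimal sketch of these measures for a possibly partial submission; representing an unattempted sample as None is our assumption, not part of the official scorer:

```python
def evaluate(gold, predicted):
    """Accuracy and coverage over a partial submission, plus per-class P/R/F."""
    answered = [(g, p) for g, p in zip(gold, predicted) if p is not None]
    cov = len(answered) / len(gold)
    acc = sum(g == p for g, p in answered) / len(answered)
    per_class = {}
    for c in set(gold):
        assigned = sum(p == c for _, p in answered)
        correct = sum(g == c and p == c for g, p in answered)
        instances = sum(g == c for g in gold)
        prec = correct / assigned if assigned else None  # no assignment attempted
        rec = correct / instances if instances else None
        if prec is None or rec is None:
            f = None        # mirrors the "undef" convention of Table 5
        elif prec + rec == 0:
            f = 0.0
        else:
            f = 2 * prec * rec / (prec + rec)
        per_class[c] = (prec, rec, f)
    return acc, cov, per_class
```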
|
{ |
|
"text": "A baseline, consisting of the assignment of the most frequent category (always literal), was used for each task and granularity level.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Submission and Evaluation", |
|
"sec_num": "4" |
|
}, |
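As a sanity check, the baseline figures reported in Tables 3 and 4 follow directly from the literal counts in the test columns of Tables 1 and 2:

$$acc^{loc}_{baseline} = \frac{721}{908} \approx 0.794 \qquad acc^{org}_{baseline} = \frac{520}{842} \approx 0.618$$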
|
{ |
|
"text": "We received five submissions (FUH, GYDER, up13, UTD-HLT-CG, XRCE-M). All tackled the location task; three (GYDER, UTD-HLT-CG, XRCE-M) also participated in the organisation task. All systems were full submissions (coverage of 1) and participated at all granularity levels.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Systems and Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Out of five teams, four (FUH, GYDER, up13, UTD-HLT-CG) used supervised machine learning, including single (FUH,GYDER, up13) as well as multiple classifiers (UTD-HLT-CG). A range of learning paradigms was represented (including instance-based learning, maximum entropy, decision trees, etc.). One participant (XRCE-M) built a hybrid system, combining a symbolic, supervised approach based on deep parsing with an unsupervised distributional approach exploiting lexical information obtained from large corpora.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methods and Features", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Systems up13 and FUH used mostly shallow features extracted directly from the training data (including parts-of-speech, co-occurrences and collo-cations). The other systems made also use of syntactic/grammatical features (syntactic roles, determination, morphology etc.). Two of them (GYDER and UTD-HLT-CG) exploited the manually annotated grammatical roles provided by the organisers.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methods and Features", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "All systems apart from up13 made use of external knowledge resources such as lexical databases for feature generalisation (WordNet, FrameNet, VerbNet, Levin verb classes) as well as other corpora (the Mascara corpus for additional training material, the BNC, and the Web).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methods and Features", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Tables 3 and 4 report accuracy for all systems. 6 Table 5 provides a summary of the results with lowest, highest, and average accuracy and f-scores for each subtask and granularity level. 7 The task seemed extremely difficult, with 2 of the 5 systems (up13,FUH) participating in the location task not beating the baseline. These two systems relied mainly on shallow features with limited or no use of external resources, thus suggesting that these features might only be of limited use for identifying metonymic shifts. The organisers themselves have come to similar conclusions in their own experiments (Markert and Nissim, 2002) . The systems using syntactic/grammatical features (GYDER, UTD-HLT-CG, XRCE-M) could improve over the baseline whether using manual annotation or parsing. These systems also made heavy use of feature generalisation. Classification granularity had only a small effect on system performance.", |
|
"cite_spans": [ |
|
{ |
|
"start": 188, |
|
"end": 189, |
|
"text": "7", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 604, |
|
"end": 630, |
|
"text": "(Markert and Nissim, 2002)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Performance", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Only few of the fine-grained categories could be distinguished with reasonable success (see the fscores in Table 5 ). These include literal readings, and place-for-people, org-for-members, and org-forproduct metonymies, which are the most frequent categories (see Tables 1 and 2 ). Rarer metonymic targets were either not assigned by the systems at all (\"undef\" in Table 5 ) or assigned wrongly ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 107, |
|
"end": 114, |
|
"text": "Table 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 264, |
|
"end": 278, |
|
"text": "Tables 1 and 2", |
|
"ref_id": "TABREF2" |
|
}, |
|
{ |
|
"start": 365, |
|
"end": 372, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Performance", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "There is a wide range of opportunities for future figurative language resolution tasks. In the SemEval corpus the reading distribution mirrored the actual distribution in the original corpus (BNC). Although realistic, this led to little training data for several phenomena. A future option, geared entirely towards system improvement, would be to use a stratified corpus, built with different acquisition strategies like active learning or specialised search procedures. There are also several options for expanding the scope of the task, for example to a wider range of semantic classes, from proper names to common nouns, and from lexical samples to an allwords task. In addition, our task currently covers only metonymies and could be extended to other kinds of figurative language.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Concluding Remarks", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "This example was taken from the Berkely Master Metaphor list(Lakoff and Johnson, 1980) .2 From now on, all examples in this paper are taken from the British National Corpus (BNC)(Burnard, 1995), but Ex. 23.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://www.cia.gov/cia/publications/ factbook/index.html 4 Given that the task is not about standard Named Entity Recognition, we assume that the general semantic class of the name is already known.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The training sets are part of the already available Mascara corpus for metonymy(Markert and Nissim, 2006). The test sets were newly created for SemEval.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Due to space limitations we do not report precision, recall, and f-score per class and refer the reader to each system description provided within this volume.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We are very grateful to the BNC Consortium for letting us use and distribute samples from the British National Corpus, version 1.0.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": " 7 The value \"undef\" is used for cases where the system did not attempt any assignment for a given class, whereas the value \"0\" signals that assignments were done, but were not correct.8 Please note that results for the FUH system are slightly different than those presented in the FUH system description paper. This is due to a preprocessing problem in the FUH system that was fixed only after the run submission deadline. 0.000 undef 0.000 org-for-product-f 0.400 0.500 0.458 org-for-facility-f 0.000 0.222 0.141 org-for-index-f 0.000 undef 0.000 obj-for-name-f 0.250 0.800 0.592 obj-for-rep-f undef undef undef othermet-f 0.000 undef 0.000 mixed-f 0.000 0.343 0.135 (low f-scores). An exception is the object-forname pattern, which XRCE-M and UTD-HLT-CG could distinguish with good success. Mixed readings also proved problematic since more than one pattern is involved, thus limiting the possibilities of learning from a single training instance. Only GYDER succeeded in correctly identifiying a variety of mixed readings in the organisation subtask. No systems could identify unconventional metonymies correctly. Such poor performance is due to the nonregularity of the reading by definition, so that approaches based on learning from similar examples alone cannot work too well.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1, |
|
"end": 2, |
|
"text": "7", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "annex", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Domain-transcending mappings in a system for metaphorical reasoning", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Barnden", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Glasbey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"G" |
|
], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proc. of EACL-2003", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "57--61", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J.A. Barnden, S.R. Glasbey, M.G. Lee, and A.M. Walling- ton. 2003. Domain-transcending mappings in a system for metaphorical reasoning. In Proc. of EACL-2003, 57-61.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "A clustering approach for the nearly unsupervised recognition of nonliteral language", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Birke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Sarkaar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proc. of EACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. Birke and A Sarkaar. 2006. A clustering approach for the nearly unsupervised recognition of nonliteral language. In Proc. of EACL-2006.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Users' Reference Guide", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Burnard", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "British National Corpus. BNC Consortium", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "L. Burnard, 1995. Users' Reference Guide, British National Corpus. BNC Consortium, Oxford, England.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Assessing agreement on classification tasks: The kappa statistic", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Carletta", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Computational Linguistics", |
|
"volume": "22", |
|
"issue": "", |
|
"pages": "249--254", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. Carletta. 1996. Assessing agreement on classification tasks: The kappa statistic. Computational Linguistics, 22:249-254.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Deriving metonymic coercions from WordNet", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Harabagiu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Workshop on the Usage of WordNet in Natural Language Processing Systems, COLING-ACL '98", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "142--148", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S. Harabagiu. 1998. Deriving metonymic coercions from WordNet. In Workshop on the Usage of WordNet in Natural Language Processing Systems, COLING-ACL '98, 142-148, Montreal, Canada.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Interpretation as abduction", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Hobbs", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Stickel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Appelt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Martin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Artificial Intelligence", |
|
"volume": "63", |
|
"issue": "", |
|
"pages": "69--142", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J.R. Hobbs, M.E. Stickel, D.E. Appelt, and P. Martin. 1993. Interpretation as abduction. Artificial Intelligence, 63:69- 142.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Metonymy: Reassessment, survey of acceptability and its treatment in machine translation systems", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Kamei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Wakao", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Proc. of ACL-92", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "309--311", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S. Kamei and T. Wakao. 1992. Metonymy: Reassessment, sur- vey of acceptability and its treatment in machine translation systems. In Proc. of ACL-92, 309-311.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Hunting elusive metaphors using lexical resources", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Krishnakamuran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "X", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "NAACL 2007 Workshop on Computational Approaches to Figurative Language", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S. Krishnakamuran and X. Zhu. 2007. Hunting elusive metaphors using lexical resources. In NAACL 2007 Work- shop on Computational Approaches to Figurative Language.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Metaphors We Live By", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Lakoff", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1980, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "G. Lakoff and M. Johnson. 1980. Metaphors We Live By. Chicago University Press, Chicago, Ill.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "On metonymy recognition for gir", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Leveling", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Hartrumpf", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of GIR-2006: 3rd Workshop on Geographical Information Retrieval", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. Leveling and S. Hartrumpf. 2006. On metonymy recogni- tion for gir. In Proceedings of GIR-2006: 3rd Workshop on Geographical Information Retrieval.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Understanding metonymies in discourse", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Markert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "U", |
|
"middle": [], |
|
"last": "Hahn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Artificial Intelligence", |
|
"volume": "135", |
|
"issue": "1", |
|
"pages": "145--198", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "K. Markert and U. Hahn. 2002. Understanding metonymies in discourse. Artificial Intelligence, 135(1/2):145-198.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Metonymy resolution as a classification task", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Markert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Nissim", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proc. of EMNLP-2002", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "204--213", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "K. Markert and M. Nissim. 2002. Metonymy resolution as a classification task. In Proc. of EMNLP-2002, 204-213.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Metonymic proper names: A corpus-based account", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Markert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Nissim", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Corpora in Cognitive Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "K. Markert and M. Nissim. 2006. Metonymic proper names: A corpus-based account. In A. Stefanowitsch, editor, Corpora in Cognitive Linguistics. Vol. 1: Metaphor and Metonymy. Mouton de Gruyter, 2006.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Metabank: a knowledge base of metaphoric language conventions", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Martin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "Computational Intelligence", |
|
"volume": "10", |
|
"issue": "2", |
|
"pages": "134--149", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. Martin. 1994. Metabank: a knowledge base of metaphoric language conventions. Computational Intelli- gence, 10(2):134-149.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Cormet: A computational corpus-based conventional metaphor extraction system", |
|
"authors": [ |
|
{ |
|
"first": "Z", |
|
"middle": [], |
|
"last": "Mason", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Computational Linguistics", |
|
"volume": "30", |
|
"issue": "1", |
|
"pages": "23--44", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Z. Mason. 2004. Cormet: A computational corpus-based con- ventional metaphor extraction system. Computational Lin- guistics, 30(1):23-44.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Syntactic features and word similarity for supervised metonymy resolution", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Nissim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Markert", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proc. of ACL-2003", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "56--63", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Nissim and K. Markert. 2003. Syntactic features and word similarity for supervised metonymy resolution. In Proc. of ACL-2003, 56-63.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Transfers of meaning", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Nunberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Journal of Semantics", |
|
"volume": "12", |
|
"issue": "", |
|
"pages": "109--132", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "G. Nunberg. 1995. Transfers of meaning. Journal of Seman- tics, 12:109-132.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Example-based metonymy recognition for proper nouns", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Peirsman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Student Session of EACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Y Peirsman. 2006. Example-based metonymy recognition for proper nouns. In Student Session of EACL 2006.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Two kinds of metonymy", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Stallard", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Proc. of ACL-93", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "87--94", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D. Stallard. 1993. Two kinds of metonymy. In Proc. of ACL- 93, 87-94.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF2": { |
|
"text": "Reading distribution for locations", |
|
"type_str": "table", |
|
"content": "<table><tr><td>reading</td><td colspan=\"2\">train test</td></tr><tr><td>literal</td><td colspan=\"2\">737 721</td></tr><tr><td>mixed</td><td>15</td><td>20</td></tr><tr><td>othermet</td><td>9</td><td>11</td></tr><tr><td>obj-for-name</td><td>0</td><td>4</td></tr><tr><td>obj-for-representation</td><td>0</td><td>0</td></tr><tr><td>place-for-people</td><td colspan=\"2\">161 141</td></tr><tr><td>place-for-event</td><td>3</td><td>10</td></tr><tr><td>place-for-product</td><td>0</td><td>1</td></tr><tr><td>total</td><td colspan=\"2\">925 908</td></tr></table>", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF3": { |
|
"text": "", |
|
"type_str": "table", |
|
"content": "<table><tr><td colspan=\"3\">: Reading distribution for organisations</td></tr><tr><td>reading</td><td colspan=\"2\">train test</td></tr><tr><td>literal</td><td colspan=\"2\">690 520</td></tr><tr><td>mixed</td><td>59</td><td>60</td></tr><tr><td>othermet</td><td>14</td><td>8</td></tr><tr><td>obj-for-name</td><td>8</td><td>6</td></tr><tr><td>obj-for-representation</td><td>1</td><td>0</td></tr><tr><td>org-for-members</td><td colspan=\"2\">220 161</td></tr><tr><td>org-for-event</td><td>2</td><td>1</td></tr><tr><td>org-for-product</td><td>74</td><td>67</td></tr><tr><td>org-for-facility</td><td>15</td><td>16</td></tr><tr><td>org-for-index</td><td>7</td><td>3</td></tr><tr><td>total</td><td colspan=\"2\">1090 842</td></tr></table>", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF4": { |
|
"text": "Accuracy scores for all systems for all the location tasks. 8 task \u2193 / system \u2192 baseline FUH UTD-HLT-CG XRCE-M GYDER up13", |
|
"type_str": "table", |
|
"content": "<table><tr><td>LOCATION-coarse</td><td>0.794</td><td>0.778</td><td>0.841</td><td>0.851</td><td>0.852</td><td>0.754</td></tr><tr><td>LOCATION-medium</td><td>0.794</td><td>0.772</td><td>0.840</td><td>0.848</td><td>0.848</td><td>0.750</td></tr><tr><td>LOCATION-fine</td><td>0.794</td><td>0.759</td><td>0.822</td><td>0.841</td><td>0.844</td><td>0.741</td></tr></table>", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF5": { |
|
"text": "Accuracy scores for all systems for all the organisation tasks", |
|
"type_str": "table", |
|
"content": "<table><tr><td>task \u2193 / system \u2192</td><td colspan=\"4\">baseline UTD-HLT-CG XRCE-M GYDER</td></tr><tr><td>ORGANISATION-coarse</td><td>0.618</td><td>0.739</td><td>0.732</td><td>0.767</td></tr><tr><td>ORGANISATION-medium</td><td>0.618</td><td>0.711</td><td>0.711</td><td>0.733</td></tr><tr><td>ORGANISATION-fine</td><td>0.618</td><td>0.711</td><td>0.700</td><td>0.728</td></tr></table>", |
|
"num": null, |
|
"html": null |
|
} |
|
} |
|
} |
|
} |