|
{ |
|
"paper_id": "S07-1031", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:23:11.082843Z" |
|
}, |
|
"title": "FUH (FernUniversit\u00e4t in Hagen): Metonymy Recognition Using Different Kinds of Context for a Memory-Based Learner", |
|
"authors": [ |
|
{ |
|
"first": "Johannes", |
|
"middle": [], |
|
"last": "Leveling", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "Intelligent Information and Communication Systems (IICS) FernUniversit\u00e4t", |
|
"institution": "University of Hagen", |
|
"location": { |
|
"settlement": "Hagen" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "For the metonymy resolution task at SemEval-2007, the use of a memory-based learner to train classifiers for the identification of metonymic location names is investigated. Metonymy is resolved on different levels of granularity, differentiating between literal and non-literal readings on the coarse level; literal, metonymic, and mixed readings on the medium level; and a number of classes covering regular cases of metonymy on a fine level. Different kinds of context are employed to obtain different features: 1) a sequence of n 1 synset IDs representing subordination information for nouns and for verbs, 2) n 2 prepositions, articles, modal, and main verbs in the same sentence, and 3) properties of n 3 tokens in a context window to the left and to the right of the location name. Different classifiers were trained on the Mascara data set to determine which values for the context sizes n 1 , n 2 , and n 3 yield the highest accuracy (n 1 = 4, n 2 = 3, and n 3 = 7, determined with the leave-oneout method). Results from these classifiers served as features for a combined classifier. In the training phase, the combined classifier achieved a considerably higher precision for the Mascara data. In the SemEval submission, an accuracy of 79.8% on the coarse, 79.5% on the medium, and 78.5% on the fine level is achieved (the baseline accuracy is 79.4%).", |
|
"pdf_parse": { |
|
"paper_id": "S07-1031", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "For the metonymy resolution task at SemEval-2007, the use of a memory-based learner to train classifiers for the identification of metonymic location names is investigated. Metonymy is resolved on different levels of granularity, differentiating between literal and non-literal readings on the coarse level; literal, metonymic, and mixed readings on the medium level; and a number of classes covering regular cases of metonymy on a fine level. Different kinds of context are employed to obtain different features: 1) a sequence of n 1 synset IDs representing subordination information for nouns and for verbs, 2) n 2 prepositions, articles, modal, and main verbs in the same sentence, and 3) properties of n 3 tokens in a context window to the left and to the right of the location name. Different classifiers were trained on the Mascara data set to determine which values for the context sizes n 1 , n 2 , and n 3 yield the highest accuracy (n 1 = 4, n 2 = 3, and n 3 = 7, determined with the leave-oneout method). Results from these classifiers served as features for a combined classifier. In the training phase, the combined classifier achieved a considerably higher precision for the Mascara data. In the SemEval submission, an accuracy of 79.8% on the coarse, 79.5% on the medium, and 78.5% on the fine level is achieved (the baseline accuracy is 79.4%).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Metonymy is typically defined as a figure of speech in which a speaker uses one entity to refer to another that is related to it (Lakoff and Johnson, 1980) . The identification of metonymy becomes important for NLP tasks such as question answering (Stallard, 1993) or geographic information retrieval (Leveling and Hartrumpf, 2006) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 129, |
|
"end": 155, |
|
"text": "(Lakoff and Johnson, 1980)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 248, |
|
"end": 264, |
|
"text": "(Stallard, 1993)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 301, |
|
"end": 331, |
|
"text": "(Leveling and Hartrumpf, 2006)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "For regular cases of metonymy for locations and organizations, Markert and Nissim have proposed a set of metonymy classes. Annotating a subset of the BNC (British National Corpus), they extracted a set of metonymic proper nouns from two categories: country names (Markert and Nissim, 2002) and organization names .", |
|
"cite_spans": [ |
|
{ |
|
"start": 263, |
|
"end": 289, |
|
"text": "(Markert and Nissim, 2002)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In the metonymy resolution task at SemEval-2007, the goal was to identify metonymic names in a subset of the BNC. The task consists of two subtasks for company and country names, which are further divided into classification on a coarse level (recognizing literal and non-literal readings), on a medium level (differentiating non-literal readings into mixed and metonymic readings), and on a fine level (identifying classes of regular metonymy, such as a name referring to the population, place-for-people). The task is described in more detail by Markert and Nissim (2007) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 548, |
|
"end": 573, |
|
"text": "Markert and Nissim (2007)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The following tools and resources are used for the metonymy classification:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tools and Resources", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "\u2022 TiMBL 5.1 (Daelemans et al., 2004) , a memory-based learner for classification is em-ployed for training the classifiers (supervised learning). 1", |
|
"cite_spans": [ |
|
{ |
|
"start": 12, |
|
"end": 36, |
|
"text": "(Daelemans et al., 2004)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tools and Resources", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "\u2022 Mascara 2.0 -Metonymy Annotation Scheme And Robust Analysis Markert and Nissim, 2002) contains annotated data for metonymic names from a subset of the the BNC.", |
|
"cite_spans": [ |
|
{ |
|
"start": 62, |
|
"end": 87, |
|
"text": "Markert and Nissim, 2002)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tools and Resources", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "\u2022 WordNet 2.0 (Fellbaum, 1998) serves as a linguistic resource for assigning synset IDs and for looking up subordination information and frequency of readings.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tools and Resources", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "\u2022 The TreeTagger (Schmid, 1994) is utilized for sentence boundary detection, lemmatization, and part-of-speech tagging. The English tagger was trained on the PENN treebank and uses the English morphological database from the XTAG project (Karp et al., 1992) . The parameter files were obtained from the web site. 2", |
|
"cite_spans": [ |
|
{ |
|
"start": 17, |
|
"end": 31, |
|
"text": "(Schmid, 1994)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 238, |
|
"end": 257, |
|
"text": "(Karp et al., 1992)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tools and Resources", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Following the assumption that metonymic location names can be identified from the context, there are different kinds of context to consider. At most, the context comprises a single sentence in this setup. Three kinds of context were employed to extract features for the memory-based learner TiMBL:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Different Kinds of Context", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "\u2022 C 1 : Subordination (hyponymy) information for nouns and verbs from the left and right context of the possibly metonymic name.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Different Kinds of Context", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "\u2022 C 2 : The sentence context for modal verbs, main verbs, prepositions, and articles.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Different Kinds of Context", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "\u2022 C 3 : A context window of tokens left and right of the location name.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Different Kinds of Context", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "The trial data provided (a subset of the Mascara data) contained 188 non-literal location names (of 925 samples total). For a supervised learning approach, this is too few data. Therefore, the full Mascara data was converted to form training data consisting of feature values for context C 1 , C 2 , and C 3 . The training data contained 509 metonymic annotations (of 2797 samples total). Some cases in the Mascara corpus are filtered during processing, including cases annotated as homonyms and cases whose metonymy class could not be agreed upon. The test data had a majority baseline of 82.8% accuracy for country names.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Different Kinds of Context", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "The Mascara data was processed to extract the following features (no hand-annotated data from Mascara was employed for feature values, i.e. no grammatical roles):", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "\u2022 For C 1 (WordNet context): From a context of n 1 verbs and nouns in the same sentence, their distance to the location name is calculated. A sequence of eight feature values of WordNet synset IDs is obtained by iteratively looking up the most frequent reading for a lemma in Word-Net and determining its synset ID. Subordination information between synsets is used to find a parent synset. This process is repeated until a top-level parent synset is reached. No actual word sense disambiguation is employed.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "2.3" |
|
}, |
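
{

"text": "To illustrate the C1 lookup described above (this sketch is an editorial addition, not part of the original system), the following Python fragment approximates the hypernym-chain features with NLTK's WordNet interface; the paper used WordNet 2.0 directly, so the library, the NIL padding value, and the example lemma are assumptions:\n\n# Sketch (not from the paper): approximate C1 with NLTK's WordNet interface.\nfrom nltk.corpus import wordnet as wn\n\ndef hypernym_chain_ids(lemma, pos, max_len=8):\n    # Most frequent reading = first synset listed by WordNet.\n    synsets = wn.synsets(lemma, pos=pos)\n    if not synsets:\n        return ['NIL'] * max_len\n    chain, synset = [], synsets[0]\n    while synset is not None and len(chain) < max_len:\n        chain.append(str(synset.offset()))  # synset ID used as a feature value\n        parents = synset.hypernyms()        # subordination information\n        synset = parents[0] if parents else None\n    return chain + ['NIL'] * (max_len - len(chain))\n\n# e.g. C1 feature values for a noun occurring near the location name\nprint(hypernym_chain_ids('government', wn.NOUN))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Features",

"sec_num": "2.3"

},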
|
{ |
|
"text": "\u2022 For C 2 (sentence context): Sentence boundaries, part-of-speech tags, and lemmatization are determined from the TreeTagger output. From a context window of n 2 tokens, lemma and distance are encoded as feature values for prepositions, articles, modal, and main verbs", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "2.3" |
|
}, |
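
{

"text": "A minimal sketch of the C2 encoding (an editorial addition; the Penn-style tag prefixes, the NIL padding, and the toy sentence are assumptions, not taken from the paper): from the tagged sentence, the lemma and distance of the n2 closest prepositions, articles, modal and main verbs are collected as feature values.\n\n# Sketch (not from the paper): C2 features from Penn-style tags (assumed mapping).\ndef c2_features(tagged_sentence, loc_index, n2=3):\n    # tagged_sentence: list of (token, pos_tag, lemma) triples from the tagger\n    wanted = ('IN', 'DT', 'MD', 'VB')  # prepositions, articles, modal/main verbs\n    hits = []\n    for i, (tok, tag, lemma) in enumerate(tagged_sentence):\n        if i != loc_index and tag.startswith(wanted):\n            hits.append((abs(i - loc_index), lemma))\n    hits.sort()            # closest context words first\n    hits = hits[:n2]\n    while len(hits) < n2:  # pad to a fixed-length feature vector\n        hits.append((0, 'NIL'))\n    return [value for pair in hits for value in pair]\n\nsent = [('The', 'DT', 'the'), ('talks', 'NNS', 'talk'), ('with', 'IN', 'with'),\n        ('Hungary', 'NP', 'Hungary'), ('continued', 'VBD', 'continue')]\nprint(c2_features(sent, loc_index=3))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Features",

"sec_num": "2.3"

},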
|
{ |
|
"text": "\u2022 For C 3 (word context): From a context of n 3 tokens to the left and to the right, the distance between token and location name, three prefix characters, three suffix characters, part-ofspeech tag, case information (U=upper case, L=lower case, N=numeric, O=other), and word length are used as feature values. Table 1 and Table 2 show results for memory based learners trained with TiMBL. Performance measures were obtained with the leave-oneout method. The classifiers were trained on features for different context sizes (n i ranging from 2 to 7) to determine the setting for which the highest accuracy is achieved (e.g. 1 c , 2 c , and 3 c ). In the next step, classifiers with a combined context were trained, selecting the setting with the highest accuracy for a single context for the combination (e.g. 4 c , 5 c , 6 c , and 7 c ). As an additional experiment, a classifier was trained on classification results of the classifiers described above (combination of 1-7, e.g. 8 c ). It was expected that the combination of features from different kinds of context would increase performance, and that the combination of classifier results would increase performance. Table 3 shows results for the official submission. Compared to results from the training phase on the Mascara data (tested with the leave-one-out method), performance is considerably lower. For this data, the combined classifier achieved a considerably higher precision (63.9% for non-literal readings; 57.3% for the fine class place-for-people and even 83.3% for the rare class place-for-event). Performance may be affected by several reasons: A number of problems were encountered while processing the data. The TreeTagger automatically tokenizes its input and applies sentence boundary detection. In some cases, the sentence boundary detection did not work well, returning sentences of more than 170 words. Furthermore, the tagger output had to be aligned with the test data again, as multi-word names (e.g. New York) were split into different tokens. In addition, the tag set of the tagger differs somewhat from the official PENN tag set and includes additional tags for verbs.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 311, |
|
"end": 330, |
|
"text": "Table 1 and Table 2", |
|
"ref_id": "TABREF0" |
|
}, |
|
{ |
|
"start": 1171, |
|
"end": 1178, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "2.3" |
|
}, |
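
{

"text": "For readers unfamiliar with TiMBL, the following sketch (an editorial addition) shows the general shape of the workflow: instances are written as comma-separated feature values with the class label as the last field, and leave-one-out testing is requested on the training file. The file name, the toy feature values, and the exact command-line flags are assumptions based on the TiMBL documentation, not taken from the paper. The combined classifier (8c) is obtained in the same way, with the predictions of classifiers 1c-7c serving as its feature values.\n\nimport csv, subprocess\n\ndef write_instances(path, rows):\n    # each row: [feature_1, ..., feature_n, class_label]\n    with open(path, 'w', newline='') as out:\n        csv.writer(out).writerows(rows)\n\n# toy instances with hypothetical feature values\nrows = [\n    ['with', '1', 'continue', '2', 'NIL', '0', 'literal'],\n    ['ask', '1', 'NIL', '0', 'NIL', '0', 'non-literal'],\n]\nwrite_instances('coarse.train', rows)\n\n# leave-one-out evaluation, as used here to select n1, n2, and n3;\n# '-t leave_one_out' follows the TiMBL manual (check the installed version)\nsubprocess.run(['timbl', '-f', 'coarse.train', '-t', 'leave_one_out'], check=True)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Features",

"sec_num": "2.3"

},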
|
{ |
|
"text": "In earlier experiments on metonymy classification on a German corpus (Leveling and Hartrumpf, 2006) , the data was nearly evenly distributed between literal and metonymic readings. This seems to make a classification task easier because there is no hidden bias in the classifier (i.e. the baseline of always selecting the literal readings is about 50%).", |
|
"cite_spans": [ |
|
{ |
|
"start": 69, |
|
"end": 99, |
|
"text": "(Leveling and Hartrumpf, 2006)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Results", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Features are obtained by shallow NLP methods only, not making use of a parser or chunker. Thus, important syntactic or semantic information to decide on metonymy might be missing in the features. However, semantic features are more difficult to determine, because reliable automatic tools for semantic annotation are still missing. This is also indicated by the fact that the grammatical roles (comprising syntactic features) in Mascara data are handannotated.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Results", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "However, some linguistic phenomena are already implicitly represented by shallower features from Table 3 : Results for the coarse (908 samples: 721 literal, 187 non-literal), medium (721 literal, 167 metonymic, 20 mixed), and fine classification (721 literal, 141 place-for-people, 10 place-for-event, 1 place-for-product, 4 object-for-name, 11 othermet, 20 mixed) of location names. the surface level (given enough training instances). For instance, active/passive voice may be encoded by a combination of features for main verb/modal verbs. If only a small training corpus is available, overall performance will be higher when utilizing explicit syntactic or semantic features. Finally, the data may be too sparse for a supervised memory-based learning approach. The identification of rare classes of metonymy (e.g. placefor-event) would greatly benefit from a larger corpus covering these classes.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 97, |
|
"end": 104, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation Results", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Evaluation results on the training data were very promising, indicating a boost of precision by combining classification results. In the training phase, an accuracy of 83.7% was achieved on the coarse level, compared to the majority baseline accuracy of 81.8%. For the submission for the metonymy resolution task at SemEval-2007, accuracy is close to the majority baseline (79.4%) on the coarse (79.8%), medium (79.5%), and fine (78.5%) level.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "In summary, using different context sizes for different kinds of context and combining results of different classifiers for metonymy resolution increases performance. The general approach would profit from combining results of more diverse classifiers, i.e. classifiers employing features extracted from the surface, syntactic, and semantic context of a location name.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Peirsman (2006) also employs TiMBL for metonymy resolution, but trains a single classifier.2 http://www.ims.uni-stuttgart.de/projekte/corplex/TreeTagger/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "The research described was in part funded by the DFG (Deutsche Forschungsgemeinschaft) in the project IRSAW (Intelligent Information Retrieval on the Basis of a Semantically Annotated Web).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Ko van der Sloot, and Antal van den Bosch", |
|
"authors": [ |
|
{ |
|
"first": "Walter", |
|
"middle": [], |
|
"last": "Daelemans", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jakub", |
|
"middle": [], |
|
"last": "Zavrel", |
|
"suffix": "" |
|
},

{

"first": "Ko",

"middle": [],

"last": "van der Sloot",

"suffix": ""

},

{

"first": "Antal",

"middle": [],

"last": "van den Bosch",

"suffix": ""

}

],
|
"year": 2004, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Walter Daelemans, Jakub Zavrel, Ko van der Sloot, and Antal van den Bosch. 2004. TiMBL: Tilburg memory based learner, version 5.1. TR 04-02, ILK.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Wordnet. An Electronic Lexical Database", |
|
"authors": [], |
|
"year": 1998, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christiane Fellbaum, editor. 1998. Wordnet. An Elec- tronic Lexical Database. MIT Press, Cambridge, Mas- sachusetts.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "A freely available wide coverage morphological analyzer for English", |
|
"authors": [ |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Karp", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yves", |
|
"middle": [], |
|
"last": "Schabes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martin", |
|
"middle": [], |
|
"last": "Zaidel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dania", |
|
"middle": [], |
|
"last": "Egedi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Proc. of COLING-92", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "950--955", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daniel Karp, Yves Schabes, Martin Zaidel, and Dania Egedi. 1992. A freely available wide coverage mor- phological analyzer for English. In Proc. of COLING- 92, pages 950-955, Morristown, NJ.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Metaphors We Live By", |
|
"authors": [ |
|
{ |
|
"first": "George", |
|
"middle": [], |
|
"last": "Lakoff", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1980, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "George Lakoff and Mark Johnson. 1980. Metaphors We Live By. Chicago University Press.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "On metonymy recognition for GIR", |
|
"authors": [ |
|
{ |
|
"first": "Johannes", |
|
"middle": [], |
|
"last": "Leveling", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sven", |
|
"middle": [], |
|
"last": "Hartrumpf", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proc. of GIR-2006, the 3rd Workshop on Geographical Information Retrieval (held at SIGIR 2006)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Johannes Leveling and Sven Hartrumpf. 2006. On metonymy recognition for GIR. In Proc. of GIR-2006, the 3rd Workshop on Geographical Information Re- trieval (held at SIGIR 2006), Seattle, Washington.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Towards a corpus for annotated metonymies: The case of location names", |
|
"authors": [ |
|
{ |
|
"first": "Katja", |
|
"middle": [], |
|
"last": "Markert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Malvina", |
|
"middle": [], |
|
"last": "Nissim", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proc. of LREC", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Katja Markert and Malvina Nissim. 2002. Towards a corpus for annotated metonymies: The case of location names. In Proc. of LREC 2002, Las Palmas, Spain.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Corpus-based metonymy analysis. Metaphor and symbol", |
|
"authors": [ |
|
{ |
|
"first": "Katja", |
|
"middle": [], |
|
"last": "Markert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Malvina", |
|
"middle": [], |
|
"last": "Nissim", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "", |
|
"volume": "18", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Katja Markert and Malvina Nissim. 2003. Corpus-based metonymy analysis. Metaphor and symbol, 18(3).", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Task 08: Metonymy resolution at SemEval-07", |
|
"authors": [ |
|
{ |
|
"first": "Katja", |
|
"middle": [], |
|
"last": "Markert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Malvina", |
|
"middle": [], |
|
"last": "Nissim", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proc. of Sem-Eval", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Katja Markert and Malvina Nissim. 2007. Task 08: Metonymy resolution at SemEval-07. In Proc. of Sem- Eval 2007.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Syntactic features and word similarity for supervised metonymy resolution", |
|
"authors": [ |
|
{ |
|
"first": "Malvina", |
|
"middle": [], |
|
"last": "Nissim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Katja", |
|
"middle": [], |
|
"last": "Markert", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proc. of ACL-2003", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Malvina Nissim and Katja Markert. 2003. Syntactic features and word similarity for supervised metonymy resolution. In Proc. of ACL-2003, Sapporo, Japan.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Example-based metonymy recognition for proper nouns", |
|
"authors": [ |
|
{ |
|
"first": "Yves", |
|
"middle": [], |
|
"last": "Peirsman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proc. of the Student Research Workshop of EACL-2006", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "71--78", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yves Peirsman. 2006. Example-based metonymy recog- nition for proper nouns. In Proc. of the Student Re- search Workshop of EACL-2006, pages 71-78, Trento, Italy.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Probabilistic part-of-speech tagging using decision trees", |
|
"authors": [ |
|
{ |
|
"first": "Helmut", |
|
"middle": [], |
|
"last": "Schmid", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "International Conference on New Methods in Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Helmut Schmid. 1994. Probabilistic part-of-speech tag- ging using decision trees. In International Conference on New Methods in Language Processing, Manchester, UK.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Two kinds of metonymy", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Stallard", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Proc. of ACL-93", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "87--94", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Stallard. 1993. Two kinds of metonymy. In Proc. of ACL-93, pages 87-94, Columbus, Ohio.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF0": { |
|
"num": null, |
|
"text": "Results for training the classifiers on the coarse location name classes (2797 instances, 509 non-literal, leave-one-out) for the Mascara data (P = precision, R = recall, F = F-score).", |
|
"html": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>ID</td><td>n1,n2,n3</td><td>coarse class</td><td>P</td><td>R</td><td>F</td></tr><tr><td>1c</td><td>4,0,0</td><td colspan=\"4\">literal 0.850 0.893 0.871</td></tr><tr><td>1c</td><td>4,0,0</td><td colspan=\"4\">non-literal 0.377 0.289 0.327</td></tr><tr><td>2c</td><td>0,3,0</td><td colspan=\"4\">literal 0.848 0.874 0.860</td></tr><tr><td>2c</td><td>0,3,0</td><td colspan=\"4\">non-literal 0.342 0.295 0.317</td></tr><tr><td>3c</td><td>0,0,7</td><td colspan=\"4\">literal 0.880 0.889 0.885</td></tr><tr><td>3c</td><td>0,0,7</td><td colspan=\"4\">non-literal 0.478 0.455 0.467</td></tr><tr><td>4c</td><td>4,3,0</td><td colspan=\"4\">literal 0.848 0.892 0.896</td></tr><tr><td>4c</td><td>4,3,0</td><td colspan=\"4\">non-literal 0.368 0.282 0.320</td></tr><tr><td>5c</td><td>4,0,7</td><td colspan=\"4\">literal 0.860 0.913 0.885</td></tr><tr><td>5c</td><td>4,0,7</td><td colspan=\"4\">non-literal 0.459 0.332 0.385</td></tr><tr><td>6c</td><td>0,3,7</td><td colspan=\"4\">literal 0.875 0.905 0.889</td></tr><tr><td>6c</td><td>0,3,7</td><td colspan=\"4\">non-literal 0.496 0.420 0.455</td></tr><tr><td>7c</td><td>4,3,7</td><td colspan=\"4\">literal 0.860 0.918 0.888</td></tr><tr><td>7c</td><td>4,3,7</td><td colspan=\"4\">non-literal 0.473 0.332 0.390</td></tr><tr><td colspan=\"2\">8c res. of 1c-7c</td><td colspan=\"4\">literal 0.852 0.968 0.907</td></tr><tr><td colspan=\"2\">8c res. of 1c-7c</td><td colspan=\"4\">non-literal 0.639 0.248 0.357</td></tr></table>" |
|
}, |
|
"TABREF1": { |
|
"num": null, |
|
"text": "Excerpt from results for training the classifiers on the fine location name classes (2797 instances, leave-one-out) for the Mascara data.", |
|
"html": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>ID</td><td>n1,n2,n3</td><td>fine class</td><td>P</td><td>R</td><td>F</td></tr><tr><td>1 f</td><td>4,0,0</td><td colspan=\"4\">literal 0.851 0.895 0.873</td></tr><tr><td>1 f</td><td>4,0,0</td><td colspan=\"4\">pl.-for-p. 0.366 0.280 0.318</td></tr><tr><td>1 f</td><td>4,0,0</td><td colspan=\"4\">pl.-for-e. 0.370 0.270 0.312</td></tr><tr><td>2 f</td><td>0,3,0</td><td colspan=\"4\">literal 0.848 0.876 0.862</td></tr><tr><td>2 f</td><td>0,3,0</td><td colspan=\"4\">pl.-for-p. 0.332 0.276 0.301</td></tr><tr><td>2 f</td><td>0,3,0</td><td colspan=\"4\">pl.-for-e. 0.222 0.270 0.244</td></tr><tr><td>3 f</td><td>0,0,7</td><td colspan=\"4\">literal 0.878 0.892 0.885</td></tr><tr><td>3 f</td><td>0,0,7</td><td colspan=\"4\">pl.-for-p. 0.463 0.424 0.442</td></tr><tr><td>3 f</td><td>0,0,7</td><td colspan=\"4\">pl.-for-e. 0.279 0.324 0.300</td></tr><tr><td>4 f</td><td>4,3,0</td><td colspan=\"4\">literal 0.851 0.899 0.875</td></tr><tr><td>4 f</td><td>4,3,0</td><td colspan=\"4\">pl.-for-p. 0.358 0.269 0.307</td></tr><tr><td>4 f</td><td>4,3,0</td><td colspan=\"4\">pl.-for-e. 0.435 0.270 0.333</td></tr><tr><td>5 f</td><td>4,0,7</td><td colspan=\"4\">literal 0.861 0.914 0.887</td></tr><tr><td>5 f</td><td>4,0,7</td><td colspan=\"4\">pl.-for-p. 0.452 0.322 0.377</td></tr><tr><td>5 f</td><td>4,0,7</td><td colspan=\"4\">pl.-for-e. 0.550 0.297 0.386</td></tr><tr><td>6 f</td><td>0,3,7</td><td colspan=\"4\">literal 0.871 0.906 0.888</td></tr><tr><td>6 f</td><td>0,3,7</td><td colspan=\"4\">pl.-for-p. 0.468 0.383 0.422</td></tr><tr><td>6 f</td><td>0,3,7</td><td colspan=\"4\">pl.-for-e. 0.400 0.324 0.358</td></tr><tr><td>7 f</td><td>4,3,7</td><td colspan=\"4\">literal 0.861 0.918 0.889</td></tr><tr><td>7 f</td><td>4,3,7</td><td colspan=\"4\">pl.-for-p. 0.459 0.323 0.378</td></tr><tr><td>7 f</td><td>4,3,7</td><td colspan=\"4\">pl.-for-e. 0.500 0.297 0.373</td></tr><tr><td>8 f</td><td>res. of 1 f -7 f</td><td colspan=\"4\">literal 0.854 0.963 0.905</td></tr><tr><td>8 f</td><td>res. of 1 f -7 f</td><td colspan=\"4\">pl.-for-p. 0.573 0.262 0.360</td></tr><tr><td>8 f</td><td>res. of 1 f -7 f</td><td colspan=\"4\">pl.-for-e. 0.833 0.270 0.408</td></tr></table>" |
|
} |
|
} |
|
} |
|
} |