{
"paper_id": "S07-1033",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:23:23.724298Z"
},
"title": "GYDER: maxent metonymy resolution",
"authors": [
{
"first": "Rich\u00e1rd",
"middle": [],
"last": "Farkas",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Szeged",
"location": {
"postCode": "H-6720",
"settlement": "Szeged",
"country": "\u00c1rp\u00e1d t\u00e9r"
}
},
"email": "[email protected]"
},
{
"first": "Eszter",
"middle": [],
"last": "Simon",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Gy\u00f6rgy",
"middle": [],
"last": "Szarvas",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Szeged",
"location": {
"addrLine": "\u00c1rp\u00e1d t\u00e9r 2",
"postCode": "6720",
"settlement": "Szeged"
}
},
"email": "[email protected]"
},
{
"first": "D\u00e1niel",
"middle": [],
"last": "Varga",
"suffix": "",
"affiliation": {
"laboratory": "Budapest U. of Technology MOKK Media Research H-1111",
"institution": "",
"location": {
"addrLine": "Stoczek u 2",
"settlement": "Budapest"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Though the GYDER system has achieved the highest accuracy scores for the metonymy resolution shared task at SemEval-2007 in all six subtasks, we don't consider the results (72.80% accuracy for org, 84.36% for loc) particularly impressive, and argue that metonymy resolution needs more features.",
"pdf_parse": {
"paper_id": "S07-1033",
"_pdf_hash": "",
"abstract": [
{
"text": "Though the GYDER system has achieved the highest accuracy scores for the metonymy resolution shared task at SemEval-2007 in all six subtasks, we don't consider the results (72.80% accuracy for org, 84.36% for loc) particularly impressive, and argue that metonymy resolution needs more features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In linguistics metonymy means using one term, or one specific sense of a term, to refer to another, related term or sense. For example, in 'the pen is mightier than the sword' pen refers to writing, the force of ideas, while sword refers to military force. Named Entity Recognition (NER) is of key importance in numerous natural language processing applications ranging from information extraction to machine translation. Metonymic usage of named entities is frequent in natural language. On the basic NER categories person, place, organisation state-of-the-art systems generally perform in the mid to the high nineties. These systems typically do not distinguish between literal or metonymic usage of entity names, even though this would be helpful for most applications. Resolving metonymic usage of proper names would therefore directly benefit NER and indirectly all NLP tasks (such as anaphor resolution) that require NER. Markert and Nissim (2002) outlined a corpusbased approach to proper name metonymy as a semantic classification problem that forms the basis of the 2007 SemEval metonymy resolution task. Instances like 'He was shocked by Vietnam' or 'Schengen boosted tourism' were assigned to broad categories like place-for-event, sometimes ignoring narrower distinctions, such as the fact that it wasn't the signing of the treaty at Schengen but rather its actual implementation (which didn't take place at Schengen) that boosted tourism. But the corpus makes clear that even with these (sometimes coarse) class distinctions, several metonymy types seem to appear extremely rarely in actual texts. The shared task focused on two broad named entity classes as metonymic sources, location and org, each having several target classes. For more details on the data sets, see the task description paper Markert and Nissim (2007) .",
"cite_spans": [
{
"start": 928,
"end": 953,
"text": "Markert and Nissim (2002)",
"ref_id": "BIBREF2"
},
{
"start": 1811,
"end": 1836,
"text": "Markert and Nissim (2007)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Several categories (e.g. place-for-event, organisation-for-index) did not contain a sufficient number of examples for machine learning, and we decided early on to accept the fact that these categories will not be learned and to concentrate on those classes where learning seemed feasible. The shared task itself consisted of 3 subtasks of different granularity for both organisation and location names. The fine-grained evaluation aimed at distinguishing between all categories, while the medium-grained evaluation grouped different types of metonymic usage together and addressed literal / mixed / metonymic usage. The coarse-grained subtask was in fact a literal / nonliteral two-class classification task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Though GYDER has obtained the highest accuracy for the metonymy shared task at SemEval-2007 in all six subtasks, we don't consider the results (72.80% accuracy for org, 84.36% for loc) particularly impressive. In Section 3 we describe the feature engineering lessons learned from working on the task. In Section 5 we offer some speculative remarks on what it would take to improve the results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "GYDER (the acronym was formed from the initials of the author' first names) is a maximum entropy learner. It uses Zhang Le's 1 maximum entropy toolkit, setting the Gaussian prior to 1. We used random 5-fold cross-validation to determine the usefulness of a particular feature. Due to the small number of instances and features, the learning algorithm always converged before 30 iterations, so the crossvalidation process took only seconds.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "2"
},
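The learning setup just described can be illustrated with off-the-shelf tools. The sketch below is ours, not the original implementation: scikit-learn's LogisticRegression (whose L2 penalty plays the role of the Gaussian prior) stands in for Zhang Le's maxent toolkit, and the feature dictionaries and labels are invented placeholders rather than the shared-task data.

```python
# Minimal sketch of the learning setup: LogisticRegression with an L2 penalty
# approximates a maxent model with a Gaussian prior (prior strength ~ 1/C).
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Hypothetical PMW instances: one feature dict per possibly metonymic word.
# Real experiments would load the shared-task training data here instead.
instances = [
    {"subj_of": "announce", "determiner": "def", "number": "singular"},
    {"obj_of": "visit", "determiner": "no_det", "number": "singular"},
] * 10
labels = ["org-for-members", "literal"] * 10

model = make_pipeline(
    DictVectorizer(sparse=True),
    LogisticRegression(C=1.0, max_iter=200),  # C controls the prior strength
)

# Random 5-fold cross-validation, as used to judge each candidate feature.
scores = cross_val_score(model, instances, labels, cv=5)
print("mean accuracy: %.4f" % scores.mean())
```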
{
"text": "We also tested the classic C4.5 decision tree learning algorithm Quinlan (1993) , but our early experiments showed that the maximum entropy learner was consistently superior to the decision tree classifier for this task, yielding about 2-5% higher accuracy scores on average on both tasks (on the training set, using cross-validation).",
"cite_spans": [
{
"start": 65,
"end": 79,
"text": "Quinlan (1993)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "2"
},
{
"text": "We tested several features describing orthographic, syntactic, or semantic characteristics of the Possibly Metonymic Words (PMWs). Here we follow Nissim and Markert (2005) , who reported three classes of features to be the most relevant for metonymy resolution: the grammatical annotations provided for the corpus examples by the task organizers, the determiner, and the grammatical number of the PMW. We also report on some features that didn't work.",
"cite_spans": [
{
"start": 146,
"end": 171,
"text": "Nissim and Markert (2005)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Engineering",
"sec_num": "3"
},
{
"text": "We used the grammatical annotations provided for each PMW in several ways. First, we used as a feature the type of the grammatical relation and the word form of the related word. (If there was more than one related word, each became a feature.) To overcome data sparseness, it is useful to generalize from individual headwords Markert and Nissim (2003) . We used three different methods to achieve this:",
"cite_spans": [
{
"start": 327,
"end": 352,
"text": "Markert and Nissim (2003)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Grammatical annotations",
"sec_num": "3.1"
},
{
"text": "1 http://homepages.inf.ed.ac.uk/s0450736/maxent toolkit.html First, we used Levin's (1993) verb classification index to generalize the headwords of the most relevant grammatical relations (subject and object). The added feature was simply the class assigned to the verb by Levin. We also used WordNet (Fellbaum 1998) to generalize headwords. First we gathered the hypernym path from WordNet for each headword's sense#1 in the train corpus. Based on these paths we collected synsets whose tree frequently indicated metonymic sense. We indicated with a feature if the headword in question was in one of such collected subtrees.",
"cite_spans": [
{
"start": 76,
"end": 90,
"text": "Levin's (1993)",
"ref_id": "BIBREF1"
},
{
"start": 273,
"end": 279,
"text": "Levin.",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Grammatical annotations",
"sec_num": "3.1"
},
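To illustrate the WordNet generalization step, the sketch below collects the hypernym path of a headword's first sense with NLTK's WordNet interface; NLTK is our stand-in for the WordNet data used in the paper, and the METONYMY_SUBTREES set is a made-up example rather than the synsets actually collected from the training corpus.

```python
# Illustrative sketch: hypernym path for a headword's sense #1 via NLTK.
# Requires the WordNet data: nltk.download('wordnet')
from nltk.corpus import wordnet as wn

def hypernym_path(headword, pos=wn.VERB):
    """Return the synset names on a root-to-sense hypernym path of sense #1."""
    senses = wn.synsets(headword, pos=pos)
    if not senses:
        return []
    # hypernym_paths() returns one or more root-to-sense paths; take the first.
    path = senses[0].hypernym_paths()[0]
    return [s.name() for s in path]

# A headword is flagged if its path enters a subtree that was found to
# indicate metonymic usage in the training data (entries here are invented).
METONYMY_SUBTREES = {"communicate.v.02"}

def in_metonymic_subtree(headword):
    return any(s in METONYMY_SUBTREES for s in hypernym_path(headword))

print(hypernym_path("announce"))
print(in_metonymic_subtree("announce"))
```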
{
"text": "Third, we have manually built a very small verb classification 'Trigger' table for specific cases. E.g. announce, say, declare all trigger the same feature. This table is the only resource in our final system that was manually built by us, so we note that on the test corpus, disabling this 'Trigger' feature does not alter org accuracy, and decreases loc accuracy by 0.44%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammatical annotations",
"sec_num": "3.1"
},
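The 'Trigger' table amounts to a tiny hand-written mapping from verbs to shared feature values. The toy version below only echoes the three verbs named above; the feature name itself is our invention.

```python
# Toy hand-built trigger table: several verbs fire the same feature value.
# Only announce/say/declare come from the paper; the value name is made up.
TRIGGER_TABLE = {
    "announce": "TRIGGER_SPEECH",
    "say": "TRIGGER_SPEECH",
    "declare": "TRIGGER_SPEECH",
}

def trigger_feature(headword):
    # Returns the shared feature value, or None if the verb is not listed.
    return TRIGGER_TABLE.get(headword.lower())

print(trigger_feature("Announce"))  # -> 'TRIGGER_SPEECH'
```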
{
"text": "Following Nissim and Markert (2005) , we distinguished between definite, indefinite, demonstrative, possessive, wh and other determiners. We also marked if the PMW was sentence-initial, and thus necessarily determinerless. This feature was useful for the resolution of organisation PMWs so we used it only for the org tasks. It was not straightforward, however, to assign determiners to the PMWs without proper syntactic analysis. After some experiments, we linked the nearest determiner and the PMW together if we found only adjectives (or nothing) between them.",
"cite_spans": [
{
"start": 10,
"end": 35,
"text": "Nissim and Markert (2005)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Determiners",
"sec_num": "3.2"
},
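A sketch of the determiner-linking heuristic: scan left from the PMW and attach the nearest determiner if only adjectives (or nothing) intervene. The token/POS representation and the Penn-style tag sets used here are our assumptions, not part of the original system.

```python
# Heuristic sketch: find the determiner governing a PMW by scanning left and
# skipping adjectives only. Tokens are (word, POS) pairs with Penn-style tags.
ADJ_TAGS = {"JJ", "JJR", "JJS"}
DET_TAGS = {"DT", "PRP$", "WDT"}   # definite/indefinite, possessive, wh, ...

def determiner_feature(tokens, pmw_index):
    """Return the determiner linked to the PMW, or a special feature value."""
    if pmw_index == 0:
        return "SENTENCE_INITIAL"           # necessarily determinerless
    i = pmw_index - 1
    while i >= 0 and tokens[i][1] in ADJ_TAGS:
        i -= 1                              # adjectives may intervene
    if i >= 0 and tokens[i][1] in DET_TAGS:
        return tokens[i][0].lower()         # e.g. 'the', 'a', 'this', 'its'
    return "NO_DET"

sent = [("The", "DT"), ("troubled", "JJ"), ("BMW", "NNP"), ("said", "VBD")]
print(determiner_feature(sent, 2))          # -> 'the'
```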
{
"text": "This feature was particularly useful to separate metonymies of the org-for-product class. We assumed that only PMWs ending with letter s might be in plural form, and for them we compared the web search result numbers obtained by the Google API. We ran two queries for each PMWs, one for the full name, and one for the name without its last character. If we observed a significant increase in the number of hits returned by Google for the shorter phrase, we set this feature for plural.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Number",
"sec_num": "3.3"
},
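The plural-detection heuristic can be sketched as follows. The hit_count callable abstracts the Google API lookups used at the time (now defunct), and the ratio threshold and toy counts are purely illustrative, not values from the paper.

```python
# Sketch of the number heuristic: a PMW ending in 's' is treated as plural if
# the form without its final character is much more frequent on the web.
def number_feature(pmw, hit_count, ratio=10.0):
    if not pmw.endswith("s"):
        return "singular"
    full = hit_count('"%s"' % pmw)
    stripped = hit_count('"%s"' % pmw[:-1])
    # A large jump in hits for the shorter form suggests the PMW is a plural.
    return "plural" if stripped > ratio * full else "singular"

# Toy frequency table standing in for web hit counts (values are invented).
toy_counts = {'"Rovers"': 1200, '"Rover"': 95000, '"BMW"': 500000}
print(number_feature("Rovers", lambda q: toy_counts.get(q, 0)))  # -> 'plural'
print(number_feature("BMW", lambda q: toy_counts.get(q, 0)))     # -> 'singular'
```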
{
"text": "We included the surface form of the PMW as a feature, but only for the org domain. Cross-validation on the training corpus showed that the use of this feature causes an 1.5% accuracy improvement for organisations, and a slight degradation for locations. The improvement perfectly generalized to the test corpora. Some company names are indeed more likely to be used in a metonymic way, so we believe that this feature does more than just exploiting some specificity of the shared task corpora. We note that the ranking of our system would have been unaffected even if we didn't use this feature.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PMW word form",
"sec_num": "3.4"
},
{
"text": "Here we discuss those features where crossvalidation didn't show improvements (and thus were not included in the submitted system).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsuccessful features",
"sec_num": "3.5"
},
{
"text": "Trigger words were automatically collected lists of word forms and phrases that more frequently appeared near metonymic PMWs. Expert triggers were similar trigger words or phrases, but suggested by a linguist expert to be potentially indicative for metonymic usage. We experimented with sample-level, sentencelevel and vicinity trigger phrases. Named entity labels given by a state-of-the-art named entity recognizer (Szarvas et al. 2006) . POS tags around PMWs. Ortographical features such as capitalisation and and other surface characteristics for the PMW and nearby words. Individual tokens of the potentially metonymic phrase. Main category of Levin's hierarchical classification. Inflectional category of the verb nearest to the PMW in the sentence.",
"cite_spans": [
{
"start": 417,
"end": 438,
"text": "(Szarvas et al. 2006)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unsuccessful features",
"sec_num": "3.5"
},
{
"text": "4 Results Table 1 . shows the accuracy scores of our submitted system on fine classification granularity. As a baseline, we also evalute the system without the Word-Net, Levin, Trigger and PMW word form features. This baseline system is quite similar to the one described by Nissim and Markert (2005) . We also publish the majority baseline scores. We could not exploit the hierarchical structure of the fine-grained tag set, and ended up treating it as totally unstructured even for the mixed class, unlike Nissim and Markert, who apply complicated heuristics to exploit the special semantics of this class.",
"cite_spans": [
{
"start": 275,
"end": 300,
"text": "Nissim and Markert (2005)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 10,
"end": 17,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Unsuccessful features",
"sec_num": "3.5"
},
{
"text": "For the coarse and medium subtasks of the loc domain, we simply coarsened the fine-grained results. For the coarse and medium subtasks of the org domain, we coarsened the train corpus to medium coarseness before training. This idea was based on observations on training data, but was proven to be unjustified: it slightly decreased the system's accuracy on the medium subtask. coarse medium fine location 85.24 84.80 84.36 organisation 76.72 73.28 72.80 Table 2 : Accuracy of the GYDER system for each domain / granularity",
"cite_spans": [],
"ref_spans": [
{
"start": 454,
"end": 461,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Unsuccessful features",
"sec_num": "3.5"
},
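Coarsening the fine-grained loc predictions is just a label mapping. The sketch below assumes the class inventory of the shared task as named in this paper; the exact grouping of mixed and othermet should be checked against the task definition.

```python
# Sketch: collapse fine-grained loc labels to medium and coarse granularity.
# Class inventory assumed from the shared-task definition; adjust as needed.
FINE_TO_MEDIUM = {
    "literal": "literal",
    "mixed": "mixed",
    "place-for-people": "metonymic",
    "place-for-event": "metonymic",
    "place-for-product": "metonymic",
    "object-for-name": "metonymic",
    "othermet": "metonymic",
}
MEDIUM_TO_COARSE = {
    "literal": "literal",
    "mixed": "non-literal",
    "metonymic": "non-literal",
}

def coarsen(fine_label):
    """Map a fine-grained label to its (medium, coarse) counterparts."""
    medium = FINE_TO_MEDIUM[fine_label]
    return medium, MEDIUM_TO_COARSE[medium]

print(coarsen("place-for-people"))   # -> ('metonymic', 'non-literal')
```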
{
"text": "In general, the coarser grained evaluation did not show a significantly higher accuracy (see Table 2 .), proving that the main difficulty is to distinguish between literal and metonymic usage, rather than separating metonymy classes from each other (since different classes represent significantly different usage / context). Because of this, data sparseness remained a problem for coarse-grained classification as well.",
"cite_spans": [],
"ref_spans": [
{
"start": 93,
"end": 100,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Unsuccessful features",
"sec_num": "3.5"
},
{
"text": "Per-class results of the submitted system for both domains are shown on Table 3 . Note that our system never predicted loc values from the four small classes place-for-event and product, object-for-name and other as these had only 26 instances altogether. Since we never had significant results for the mixed category, in effect the loc task ended up a binary classification task between literal and place-for-people. Table 3 : Per-class accuracies for both domains While in the org set the system also ignores the smallest categories othermet, org-for-index and event (a total of 11 instances), the six major categories literal, org-for-members, org-for-product, org-for-facility, object-for-name, mixed all receive meaningful hypotheses.",
"cite_spans": [],
"ref_spans": [
{
"start": 72,
"end": 79,
"text": "Table 3",
"ref_id": null
},
{
"start": 418,
"end": 425,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Unsuccessful features",
"sec_num": "3.5"
},
{
"text": "The features we eventually selected performed well enough to actually achieve the best scores in all six subtasks of the shared task, and we think they are useful in general. But it is worth emphasizing that many of these features are based on the grammatical annotation provided by the task organizers, and as such, would require a better dependency parser than we currently have at our disposal to create a fully automatic system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions, Further Directions",
"sec_num": "5"
},
{
"text": "That said, there is clearly a great deal of merit to provide this level of annotation, and we would like to speculate what would happen if even more detailed annotation, not just grammatical, but also semantical, were provided manually. We hypothesize that the metonymy task would break down into the task of identifying several journalistic cliches such as \"location for sports team\", \"capital city for government\", and so on, which are not yet always distinguished by the depth of the annotation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions, Further Directions",
"sec_num": "5"
},
{
"text": "It would be a true challenge to create a data set of non-cliche metonymy cases, or a corpus large enough to represent rare metonymy types and challenging non-cliche metonymies better.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions, Further Directions",
"sec_num": "5"
},
{
"text": "We feel that at least regarding the corpus used for the shared task, the potential of the grammatical annotation for PMWs was more or less well exploited. Future systems should exploit more semantic knowledge, or the power of a larger data set, or preferably both.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions, Further Directions",
"sec_num": "5"
}
],
"back_matter": [
{
"text": "We wish to thank Andr\u00e1s Kornai for help and encouragement, and the anonymous reviewers for valuable comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "WordNet: An Electronic Lexical Database",
"authors": [
{
"first": "Christiane",
"middle": [],
"last": "Fellbaum Ed",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christiane Fellbaum ed. 1998. WordNet: An Electronic Lexical Database. MIT Press.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "English Verb Classes and Alternations. A Preliminary Investigation",
"authors": [
{
"first": "Beth",
"middle": [],
"last": "Levin",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Beth Levin. 1993. English Verb Classes and Alterna- tions. A Preliminary Investigation. The University of Chicago Press.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Metonymy resolution as a classification task",
"authors": [
{
"first": "Katja",
"middle": [],
"last": "Markert",
"suffix": ""
},
{
"first": "Malvina",
"middle": [],
"last": "Nissim",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katja Markert and Malvina Nissim. 2002. Metonymy resolution as a classification task. Proceedings of the Conference on Empirical Methods in Natural Lan- guage Processing (EMNLP 2002). Philadelphia, USA.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Syntactic Features and Word Similarity for Supervised Metonymy Resolution",
"authors": [
{
"first": "Katja",
"middle": [],
"last": "Markert",
"suffix": ""
},
{
"first": "Malvina",
"middle": [],
"last": "Nissim",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics (ACL2003)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katja Markert and Malvina Nissim. 2003. Syntactic Fea- tures and Word Similarity for Supervised Metonymy Resolution. Proceedings of the 41st Annual Meet- ing of the Association for Computational Linguistics (ACL2003). Sapporo, Japan.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Learning to buy a Renault and talk to BMW: A supervised approach to conventional metonymy",
"authors": [
{
"first": "Malvina",
"middle": [],
"last": "Nissim",
"suffix": ""
},
{
"first": "Katja",
"middle": [],
"last": "Markert",
"suffix": ""
}
],
"year": 2005,
"venue": "International Workshop on Computational Semantics (IWCS2005)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Malvina Nissim and Katja Markert. 2005. Learning to buy a Renault and talk to BMW: A supervised approach to conventional metonymy. International Workshop on Computational Semantics (IWCS2005).",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "SemEval-2007 Task 08: Metonymy Resolution at SemEval-2007",
"authors": [
{
"first": "Katja",
"middle": [],
"last": "Markert",
"suffix": ""
},
{
"first": "Malvina",
"middle": [],
"last": "Nissim",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of SemEval-2007",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katja Markert and Malvina Nissim. 2007. SemEval- 2007 Task 08: Metonymy Resolution at SemEval- 2007. In Proceedings of SemEval-2007.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "C4.5: Programs for machine learning",
"authors": [
{
"first": "Ross",
"middle": [],
"last": "Quinlan",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ross Quinlan. 1993. C4.5: Programs for machine learn- ing. Morgan Kaufmann.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Multilingual Named Entity Recognition System Using Boosting and C4.5 Decision Tree Learning Algorithms",
"authors": [
{
"first": "Gy\u00f6rgy",
"middle": [],
"last": "Szarvas",
"suffix": ""
},
{
"first": "Rich\u00e1rd",
"middle": [],
"last": "Farkas",
"suffix": ""
},
{
"first": "Andr\u00e1s",
"middle": [],
"last": "Kocsor",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of Discovery Science 2006, DS2006, LNAI 4265 pp",
"volume": "",
"issue": "",
"pages": "267--278",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gy\u00f6rgy Szarvas, Rich\u00e1rd Farkas and Andr\u00e1s Kocsor. 2006. Multilingual Named Entity Recognition Sys- tem Using Boosting and C4.5 Decision Tree Learning Algorithms. Proceedings of Discovery Science 2006, DS2006, LNAI 4265 pp. 267-278. Springer-Verlag.",
"links": null
}
},
"ref_entries": {}
}
}