{
"paper_id": "S07-1005",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:22:55.458424Z"
},
"title": "SemEval-2007 Task 06: Word-Sense Disambiguation of Prepositions",
"authors": [
{
"first": "Ken",
"middle": [],
"last": "Litkowski",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "CL Research",
"location": {
"addrLine": "9208 Gue Road Damascus",
"postCode": "20872",
"region": "MD"
}
},
"email": ""
},
{
"first": "Orin",
"middle": [],
"last": "Hargraves",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The SemEval-2007 task to disambiguate prepositions was designed as a lexical sample task. A set of over 25,000 instances was developed, covering 34 of the most frequent English prepositions, with two-thirds of the instances for training and one-third as the test set. Each instance identified a preposition to be tagged in a full sentence taken from the FrameNet corpus (mostly from the British National Corpus). Definitions from the Oxford Dictionary of English formed the sense inventories. Three teams participated, with all achieving supervised results significantly better than baselines, with a high fine-grained precision of 0.693. This level is somewhat similar to results on lexical sample tasks with open class words, indicating that significant progress has been made. The data generated in the task provides ample opportunitites for further investigations of preposition behavior.",
"pdf_parse": {
"paper_id": "S07-1005",
"_pdf_hash": "",
"abstract": [
{
"text": "The SemEval-2007 task to disambiguate prepositions was designed as a lexical sample task. A set of over 25,000 instances was developed, covering 34 of the most frequent English prepositions, with two-thirds of the instances for training and one-third as the test set. Each instance identified a preposition to be tagged in a full sentence taken from the FrameNet corpus (mostly from the British National Corpus). Definitions from the Oxford Dictionary of English formed the sense inventories. Three teams participated, with all achieving supervised results significantly better than baselines, with a high fine-grained precision of 0.693. This level is somewhat similar to results on lexical sample tasks with open class words, indicating that significant progress has been made. The data generated in the task provides ample opportunitites for further investigations of preposition behavior.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The SemEval-2007 task to disambiguate prepositions was designed as a lexical sample task to investigate the extent to which an important closed class of words could be disambiguated. In addition, because they are a closed class, with stable senses, the requisite datasets for this task are enduring and can be used as long as the problem of preposition disambiguation remains. The data used in this task was developed in The Preposition Project (TPP, Litkowski & Hargraves (2005) and Litkowski & Hargraves (2006) ), 1 with further refinements to fit the requirements of a SemEval task.",
"cite_spans": [
{
"start": 445,
"end": 479,
"text": "(TPP, Litkowski & Hargraves (2005)",
"ref_id": null
},
{
"start": 484,
"end": 512,
"text": "Litkowski & Hargraves (2006)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the following sections, we first describe the motivations for a preposition disambiguation task. Next, we describe the development of the datasets used for the task, i.e., the instance sets and the sense inventories. We describe how the task was performed and how it was evaluated (essentially using the same scoring methods as previous Senseval lexical sample tasks). We present the results obtained from the participating teams and provide an initial analysis of these results. Finally, we identify several further types of analyses that will provide further insights into the characterization of preposition behavior.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Prepositions are a closed class, meaning that the number of prepositions remains relatively constant and that their meanings are relatively stable. Despite this, their treatment in computational linguistics has been somewhat limited. In the Penn Treebank, only two types of prepositions are recognized (IN (locative, temporal, and manner) and TO (direction)) (O'Hara, 2005) . Prepositions are viewed as function words that occur with high frequency and therefore carry little meaning. A task to disambiguate prepositions would, in the first place, allow this limited treatment to be confronted more fully.",
"cite_spans": [
{
"start": 359,
"end": 373,
"text": "(O'Hara, 2005)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "2"
},
{
"text": "Preposition behavior has been the subject of much research, too voluminous to cite here. Three recent workshops on prepositions have been sponsored by the ACL-SIGSEM: Toulouse in 2003 , Colchester in 2005 , and Trento in 2006 . For the most part, these workshops have focused on individual prepositions, with various investigations of more generalized behavior. The SemEval preposition disambiguation task provides a vehicle to examine whether these behaviors are substantiated with a well-defined set of corpus instances.",
"cite_spans": [
{
"start": 179,
"end": 183,
"text": "2003",
"ref_id": null
},
{
"start": 184,
"end": 204,
"text": ", Colchester in 2005",
"ref_id": null
},
{
"start": 205,
"end": 225,
"text": ", and Trento in 2006",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "2"
},
{
"text": "Prepositions assume more importance when they are considered in relation to verbs. While linguistic theory focuses on subjects and objects as important verb arguments, quite frequently there is an additional oblique argument realized in a prepositional phrase. But with the focus on the verbs, the prepositional phrases do not emerge as having more than incidental importance. However, within frame semantics (Fillmore, 1976) , prepositions rise to a greater prominence; frequently, two or three prepositional phrases are identified as constituting frame elements. In addition, frame semantic analyses indicate the possibility of a greater number of prepositional phrases acting as adjuncts (particularly identifying time and location frame elements). While linguistic theories may identify only one or two prepositions associated with an argument of a verb, frame semantic analyses bring in the possibility of a greater variety of prepositions introducing the same type of frame element. The preposition disambiguation task provides an opportunity to examine this type of variation. The question of prepositional phrase attachment is another important issue. Merlo & Esteve Ferrer (2006) suggest that this problem is a four-way disambiguation task, depending on the properties of nouns and verbs and whether the prepositional phrases are arguments or adjuncts. Their analysis relied on Penn Treebank data. Further insights may be available from the finer-grained data available in the preposition disambiguation task.",
"cite_spans": [
{
"start": 409,
"end": 425,
"text": "(Fillmore, 1976)",
"ref_id": "BIBREF2"
},
{
"start": 1160,
"end": 1188,
"text": "Merlo & Esteve Ferrer (2006)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "2"
},
{
"text": "Another important thread of investigation concerning preposition behavior is the task of semantic role (and perhaps semantic relation) labeling (Gildea & Jurafsky, 2002) . This task has been the subject of a previous Senseval task (Automatic Semantic Role Labeling, Litkowski (2004)) and two shared tasks on semantic role labeling in the Conference on Natural Language Learning (Carreras & Marquez (2004) and Carreras & Marquez (2005) ). In addition, three other tasks in SemEval-2007 (semantic relations between nominals, task 4; temporal relation labeling, task 15; and frame semantic structure extraction, task 19) address issues of semantic role labeling. Since a great proportion of these semantic roles are realized in prepositional phrases, this gives greater urgency to understanding preposition behavior.",
"cite_spans": [
{
"start": 144,
"end": 169,
"text": "(Gildea & Jurafsky, 2002)",
"ref_id": "BIBREF3"
},
{
"start": 378,
"end": 404,
"text": "(Carreras & Marquez (2004)",
"ref_id": "BIBREF0"
},
{
"start": 409,
"end": 434,
"text": "Carreras & Marquez (2005)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "2"
},
{
"text": "Despite the predominant view of prepositions as function words carrying little meaning, this view is not borne out in dictionary treatment of their definitions. To all appearances, prepositions exhibit definitional behavior similar to that of open class words. There is a reasonably large number of distinct prepositions and they show a range of polysemous senses. Thus, with a suitable set of instances, they may be amenable to the same types of analyses as open class words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "2"
},
{
"text": "The development of the datasets for the preposition disambiguation task grew directly out of TPP. This project essentially articulates the corpus selection, the lexicon choice, and the production of the gold standard. The primary objective of TPP is to characterize each of 847 preposition senses for 373 prepositions (including 220 phrasal prepositions with 309 senses) 2 with a semantic role name and the syntactic and semantic properties of its complement and attachment point. The preposition sense inventory is taken from the Oxford Dictionary of English (ODE, 2004 ). 3",
"cite_spans": [
{
"start": 560,
"end": 570,
"text": "(ODE, 2004",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preparation of Datasets",
"sec_num": "3"
},
{
"text": "For a particular preposition, a set of instances is extracted from the FrameNet database. 4 FrameNet was chosen since it provides well-studied sentences drawn from the British National Corpus (as well as a limited set of sentences from other sources). Since the sentences to be selected for frame analysis were generally chosen for some open class verb or noun, these sentences would be expected to provide no bias with respect to prepositions. In addition, the use of this resource makes available considerable information for each sentence in its identification of frame elements, their phrase type, and their grammatical function. The FrameNet data was also made accessible in a form (FrameNet Explorer) 5 to facilitate a lexicographer's examination of preposition instances.",
"cite_spans": [
{
"start": 90,
"end": 91,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Development",
"sec_num": "3.1"
},
{
"text": "Each sentence in the FrameNet data is labeled with a subcorpus name. This name is generally intended only to capture some property of a set of instances. In particular, many of these subcorpus names include a string ppprep and this identification was used for the selection of instances. Thus, searching the FrameNet corpus for subcorpora labeled ppof or ppafter would yield sentences containing a prepositional phrase with a desired preposition. This technique was used for many common prepositions, yielding 300 to 4500 instances. The technique was modified for prepositions with fewer instances. Instead, all sentences having a phrase beginning with a desired preposition were selected.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Development",
"sec_num": "3.1"
},
{
"text": "The number of sentences eventually used in the SemEval task is shown in Table 1 . More than 25,000 instances for 34 prepositions were tagged in TPP and used for the SemEval-2007 task.",
"cite_spans": [],
"ref_spans": [
{
"start": 72,
"end": 79,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Corpus Development",
"sec_num": "3.1"
},
{
"text": "As mentioned above, ODE (and its predecessor, the New Oxford Dictionary of English (NODE, 1997)) was used as the sense inventory for the prepositions. ODE is a corpus-based, lexicographically-drawn sense inventory, with a two-level hierarchy, consisting of a set of core senses and a set of subsenses (if any) that are semantically related to the core sense. The full set of information, both printed and in electronic form, containing additional lexicographic information, was made publicly available for TPP, and hence, the SemEval disambiguation task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexicon Development",
"sec_num": "3.2"
},
{
"text": "The sense inventory was not used as absolute and further information was added during TPP. The lexicographer (Hargraves) was free to add senses, particularly as the corpus evidence provided by the FrameNet data suggested. The process of refining the sense inventory was performed as the lexicographer assigned a sense to each instance. While engaged in this sense assignment, the lexicographer accumulated an understanding of the behavior of the preposition, assigning a name to each sense (characterizing its semantic type), and characterizing the syntactic and semantic properties of the preposition complement and its point of attachment or head. Each sense was also characterized by its syntactic function and its meaning, identifying the relevant paragraph(s) where it is discussed in Quirk et al (1985) .",
"cite_spans": [
{
"start": 802,
"end": 808,
"text": "(1985)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lexicon Development",
"sec_num": "3.2"
},
{
"text": "After sense assignments were completed, the set of instances for each preposition was analyzed against the FrameNet database. In particular, the FrameNet frames and frame elements associated with each sense was identified. The set of sentences was provided in SemEval format in an XML file with the preposition tagged as <head>, along with an answer key (also identifying the FrameNet frame and frame element). Finally, using the FrameNet frame and frame element of the tagged instances, syntactic alternation patterns (other syntactic forms in which the semantic role may be realized) are provided for each FrameNet target word for each sense.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexicon Development",
"sec_num": "3.2"
},
{
"text": "All of the above information was combined into a preposition database. 6 For SemEval-2007, entries for the target prepositions were combined into an XML file as the \"Definitions\" to be used as the sense inventory, where each sense was given a unique identifier. All prepositions for which a set of instances had been analyzed in TPP were included. These 34 prepositions are shown in Table 1 (below, beyond, and near were used in the trial set).",
"cite_spans": [],
"ref_spans": [
{
"start": 383,
"end": 390,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Lexicon Development",
"sec_num": "3.2"
},
{
"text": "Unlike previous Senseval lexical sample tasks, tagging was not performed as a separate step. Rather, sense tagging was completed as an integral part of TPP. Funding was unavailable to perform additional tagging with other lexicographers and the appropriate interannotator agreement studies have not yet been completed. At this time, only qualitative assessments of the tagging can be given.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gold Standard Production",
"sec_num": "3.3"
},
{
"text": "As indicated, the sense inventory for each preposition evolved as the lexicographer examined the set of FrameNet instances. Multiple sources (such as Quirk et al.) and lexicographic experience were important components of the sense tagging. The tagging was performed without any deadlines and with full adherence to standard lexicographic principles. Importantly, the availability of the FrameNet corpora facilitated the sense assignment, since many similar instances were frequently contiguous in the instance set (e.g., associated with the same target word and frame).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gold Standard Production",
"sec_num": "3.3"
},
{
"text": "Another important factor suggesting higher quality in the sense assignment is the quality of the sense inventory. Unlike previous Senseval lexical sample tasks, the sense inventory was developed using lexicographic principles and was quite stable. In arriving at the sense inventory, the lexicographer was able to compare ODE with its predecessor NODE, noting in most cases that the senses had not changed or had changed in only minor ways.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gold Standard Production",
"sec_num": "3.3"
},
{
"text": "Finally, the lexicographer had little difficulty in making sense assignments. The sense distinctions were well enough drawn that there was relatively little ambiguity given a sentence context. The lexicographer was not constrained to selecting one sense, but could tag a preposition with multiple senses as deemed necessary. Out of 25,000 instances, only 350 instances received multiple senses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gold Standard Production",
"sec_num": "3.3"
},
{
"text": "The organization followed standard SemEval (Senseval) procedures. The data were prepared in XML, using Senseval DTDs. That is, each instance was labeled with an instance identifier as an XML attribute. Within the <instance> tag, the FrameNet sentence was labeled as the <context> and included one item, the target preposition, in the <head> tag. The FrameNet sentence identifier was used as the instance identifier, enabling participants to make use of other FrameNet data. Unlike lexical sample tasks for open class words, only one sentence was provided as the context. Although no examination of whether this is sufficient context for prepositions, it seems likely that all information necessary for preposition disambiguation is contained in the local context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Organization and Evaluation",
"sec_num": "4"
},
{
"text": "A trial set of three prepositions was provided (the three smallest instance sets that had been developed). For each of the remaining 34 prepositions, the data was split in a ratio of two to one between training and test data. The training data included the sense identifier. Table 1 shows the total number of instances for each preposition, along with the number in the training and the test sets.",
"cite_spans": [],
"ref_spans": [
{
"start": 275,
"end": 282,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Task Organization and Evaluation",
"sec_num": "4"
},
{
"text": "Answers were submitted in the standard Senseval format, consisting of the lexical item name, the instance identifier, the system sense assignments, and optional comments. Although participants were not restricted to selecting only one sense, all did so and did not provide either multiple senses or weighting of different senses. Because of this, a simple Perl script was used to score the results, giving precision, recall, and F-score. 7 The answers were also scored using the standard Senseval scoring program, which records a result for \"attempted\" rather than F-score, with precision interpreted as percent of attempted instances that are correct and recall as percent of total instances that are correct. 8 Table 1 reports the standard SemEval recall, while Tables 2 and 3 use the standard notions of precision and recall.",
"cite_spans": [
{
"start": 438,
"end": 439,
"text": "7",
"ref_id": null
}
],
"ref_spans": [
{
"start": 713,
"end": 779,
"text": "Table 1 reports the standard SemEval recall, while Tables 2 and 3",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Task Organization and Evaluation",
"sec_num": "4"
},
{
"text": "Tables 2 and 3 present the overall fine-grained and coarse-grained results, respectively, for the three participating teams (University of Melbourne, Ko\u00e7 University, and Instituto Trentino di Cultura, IRST). The tables show the team designator, and the results over all prepositions, giving the precision, the recall, and the F-score. The table also shows the results for two baselines. The FirstSense baseline selects the first sense of each preposition as the answer (under the assumption that the senses are organized somewhat according to prominence). The FreqSense baseline selects the most frequent sense from the training set. Table 1 shows the fine-grained recall scores for each team for each preposition. Table 1 also shows the entropy and perplexity for each preposition, based on the data from the training sets. As can be seen, all participating teams performed significantly better than the baselines. Additional improvements occurred at the coarse grain, although the differences are not dramatically higher.",
"cite_spans": [],
"ref_spans": [
{
"start": 634,
"end": 641,
"text": "Table 1",
"ref_id": null
},
{
"start": 715,
"end": 722,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "All participating teams used supervised systems, using the training data for their submissions. The University of Melbourne used a maximum entropy system using a wide variety of syntactic and semantic features. Ko\u00e7 University used a statistical language model (based on Google ngram data) to measure the likelihood of various substitutes for various senses. IRST-BP used Chain Clarifying Relationships, in which contextual lexical and syntactic features of representative contexts are used for learning sense discriminative patterns. Further details on their methods are available in their respective papers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "Examination of the detailed results by preposition in Table 1 shows that performance is inversely related to polysemy. The greater number of senses leads to reduced performance. The first sense heuristic has a correlation of -0.64; the most frequent sense heuristic has a correlation of -0.67. the correlations for MELB, KU, and IRST are -0.40, -0.70, and -0.56, respectively. The scores are also negatively correlated with the number of test instances. The correlations are -0.34 and -0.44 for the first sense and the most frequent sense heuristics. For the systems, the scores are -0.17, -0.48, and -0.39 for Melb, KU, and IRST.",
"cite_spans": [],
"ref_spans": [
{
"start": 54,
"end": 61,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "The scores for each preposition are strongly negatively correlated with entropy and perplexity, as frequently observed in lexical sample disambiguation. For MELB-YB and IRST-BP, the correlation with entropy is about -0.67, while for KU, the correlation is -0.885. For perplexity, the correlation is -0.55 for MELB-YB, -0.62 for IRST-ESP , and -0.82 for KU.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "More detailed analysis is required to examine the performance for each preposition, particularly for the most frequent prepositions (of, in, from, with, to, for, on, at, into, and by) . Performance on these prepositions ranged from fairly good to mediocre to relatively poor. In addition, a comparison of the various attributes of the TPP sense information with the different performances might be fruitful. Little of this information was used by the various systems.",
"cite_spans": [
{
"start": 132,
"end": 183,
"text": "(of, in, from, with, to, for, on, at, into, and by)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "The SemEval-2007 preposition disambiguation task can be considered successful, with results that can be exploited in general NLP tasks. In addition, the task has generated considerable information for further examination of preposition behavior.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "http://www.clres.com/prepositions.html.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The number of prepositions and the number of senses is not fixed, but has changed during the course of the project, as will become clear.3 TPP does not include particle senses of such words as in or over (or any other particles) used with verbs to make phrasal verbs. In this context, phrasal verbs are to be distinguished from verbs that select a preposition (such as on in rely on), which may be characterized as a collocation.4 http://framenet.icsi.berkeley.edu/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Available for the Windows operating system at http://www.clres.com for those with access to the FrameNet data.6 The full database is viewable in the Online TPP (http://www.clres.com/cgi-bin/onlineTPP/find_prep.cgi ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Precision is the percent of total correct instances and recall is the percent of instances attempted, so that an F-score can be computed.8 The standard SemEval (Senseval) scoring program, scorer2, does not work to compute a coarse-grained score for the preposition instances, since senses are numbers such as \"4(2a)\" and not alphabetic.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "and Applications, University of Essex -Colchester, United Kingdom. 171-179. Kenneth C. Litkowski.& Orin Hargraves. 2006 ",
"cite_spans": [
{
"start": 76,
"end": 119,
"text": "Kenneth C. Litkowski.& Orin Hargraves. 2006",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "annex",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Introduction to the CoNLL-2004 Shared Task: Semantic Role Labeling",
"authors": [
{
"first": "Xavier",
"middle": [],
"last": "Carreras",
"suffix": ""
},
{
"first": "Lluis",
"middle": [],
"last": "Marquez",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of CoNLL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xavier Carreras and Lluis Marquez. 2004. Introduction to the CoNLL-2004 Shared Task: Semantic Role Labeling. In: Proceedings of CoNLL-2004.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Introduction to the CoNLL-2005 Shared Task: Semantic Role Labeling",
"authors": [
{
"first": "Xavier",
"middle": [],
"last": "Carreras",
"suffix": ""
},
{
"first": "Lluis",
"middle": [],
"last": "Marquez",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of CoNLL-2005",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xavier Carreras and Lluis Marquez. 2005. Introduction to the CoNLL-2005 Shared Task: Semantic Role Labeling. In: Proceedings of CoNLL-2005.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Frame Semantics and the Nature of Language",
"authors": [
{
"first": "Charles",
"middle": [],
"last": "Fillmore",
"suffix": ""
}
],
"year": 1976,
"venue": "Annals of the New York Academy of Sciences",
"volume": "280",
"issue": "",
"pages": "20--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Charles Fillmore. 1976. Frame Semantics and the Nature of Language. Annals of the New York Academy of Sciences, 280: 20-32.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Automatic Labeling of Semantic Roles",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2002,
"venue": "Computational Linguistics",
"volume": "28",
"issue": "3",
"pages": "245--288",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Gildea and Daniel Jurafsky. 2002. Automatic Labeling of Semantic Roles. Computational Linguistics, 28 (3), 245-288.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Senseval-3 Task: Automatic Labeling of Semantic Roles",
"authors": [
{
"first": "Kenneth",
"middle": [
"C."
],
"last": "Litkowski",
"suffix": ""
}
],
"year": 2004,
"venue": "Senseval-3: Third International Workshop on the Evaluation of Systems for the Semantic Analysis of Text. ACL",
"volume": "",
"issue": "",
"pages": "9--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenneth C. Litkowski. 2004. Senseval-3 Task: Automatic Labeling of Semantic Roles. In Senseval-3: Third International Workshop on the Evaluation of Systems for the Semantic Analysis of Text. ACL. 9-12.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The Preposition Project",
"authors": [
{
"first": "Kenneth",
"middle": [
"C."
],
"last": "Litkowski",
"suffix": ""
},
{
"first": "Orin",
"middle": [],
"last": "Hargraves",
"suffix": ""
}
],
"year": 2005,
"venue": "ACL-SIGSEM Workshop on the Linguistic Dimensions of Prepositions and their Use in Computational Linguistic Formalisms",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenneth C. Litkowski & Orin Hargraves. 2005. The Preposition Project. In: ACL-SIGSEM Workshop on the Linguistic Dimensions of Prepositions and their Use in Computational Linguistic Formalisms",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"type_str": "table",
"html": null,
"text": "",
"num": null,
"content": "<table><tr><td/><td>Prec Rec</td><td>F</td></tr><tr><td>MELB-YB</td><td colspan=\"2\">0.693 1.000 0.818</td></tr><tr><td>KU</td><td colspan=\"2\">0.547 1.000 0.707</td></tr><tr><td>IRST-BP</td><td colspan=\"2\">0.496 0.864 0.630</td></tr><tr><td>FirstSense</td><td colspan=\"2\">0.289 1.000 0.449</td></tr><tr><td>FreqSense</td><td colspan=\"2\">0.396 1.000 0.568</td></tr><tr><td colspan=\"2\">Table 3. Coarse-Grained Scores</td><td/></tr><tr><td colspan=\"2\">(All Prepositions -8096 Instances)</td><td/></tr><tr><td>Team</td><td>Prec Rec</td><td>F</td></tr><tr><td>MELB-YB</td><td colspan=\"2\">0.755 1.000 0.861</td></tr><tr><td>KU</td><td colspan=\"2\">0.642 1.000 0.782</td></tr><tr><td>IRST-BP</td><td colspan=\"2\">0.610 0.864 0.715</td></tr><tr><td>FirstSense</td><td colspan=\"2\">0.441 1.000 0.612</td></tr><tr><td>FreqSense</td><td colspan=\"2\">0.480 1.000 0.649</td></tr></table>"
}
}
}
}