{
"paper_id": "C08-1025",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:25:14.473722Z"
},
"title": "Re-estimation of Lexical Parameters for Treebank PCFGs",
"authors": [
{
"first": "Tejaswini",
"middle": [],
"last": "Deoskar",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Cornell University",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present procedures which pool lexical information estimated from unlabeled data via the Inside-Outside algorithm, with lexical information from a treebank PCFG. The procedures produce substantial improvements (up to 31.6% error reduction) on the task of determining subcategorization frames of novel verbs, relative to a smoothed Penn Treebank-trained PCFG. Even with relatively small quantities of unlabeled training data, the re-estimated models show promising improvements in labeled bracketing f-scores on Wall Street Journal parsing, and substantial benefit in acquiring the subcategorization preferences of low-frequency verbs.",
"pdf_parse": {
"paper_id": "C08-1025",
"_pdf_hash": "",
"abstract": [
{
"text": "We present procedures which pool lexical information estimated from unlabeled data via the Inside-Outside algorithm, with lexical information from a treebank PCFG. The procedures produce substantial improvements (up to 31.6% error reduction) on the task of determining subcategorization frames of novel verbs, relative to a smoothed Penn Treebank-trained PCFG. Even with relatively small quantities of unlabeled training data, the re-estimated models show promising improvements in labeled bracketing f-scores on Wall Street Journal parsing, and substantial benefit in acquiring the subcategorization preferences of low-frequency verbs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In order to obtain the meaning of a sentence automatically, it is necessary to have access to its syntactic analysis at some level of complexity. Many NLP applications like translation, questionanswering, etc. might benefit from the availability of syntactic parses. Probabilistic parsers trained over labeled data have high accuracy on indomain data: lexicalized parsers get an f-score of up to 90.0% on Wall Street Journal data (Charniak and Johnson (2005) 's re-ranking parser), while recently, unlexicalized PCFGs have also been shown to perform much better than previously believed (Klein and Manning, 2003) . However, the limited size of annotated training data results in many parameters of a PCFG being badly estimated when c 2008.",
"cite_spans": [
{
"start": 430,
"end": 458,
"text": "(Charniak and Johnson (2005)",
"ref_id": "BIBREF6"
},
{
"start": 587,
"end": 612,
"text": "(Klein and Manning, 2003)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "trained on annotated data. The Zipfian nature of a text corpus results in PCFG parameters related to the properties of specific words being especially badly estimated. For instance, about 38% of verbs in the training sections of the Penn Treebank (PTB) (Marcus et al., 1993) occur only once -the lexical properties of these verbs (such as their most common subcategorization frames ) cannot be represented accurately in a model trained exclusively on the Penn Treebank.",
"cite_spans": [
{
"start": 253,
"end": 274,
"text": "(Marcus et al., 1993)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The research reported here addresses this issue. We start with an unlexicalized PCFG trained on the PTB. We then re-estimate the parameters of this PCFG from raw text using an unsupervised estimation method based on the Inside-Outside algorithm (Lari and Young, 1990) , an instance of the Expectation Maximization algorithm (Dempster et al., 1977) for PCFG induction. The reestimation improves f-score on the standard test section of the PTB significantly. Our focus is on learning lexical parameters i.e. those parameters related to the lexico-syntactic properties of openclass words. Examples of such properties are: subcategorization frames of verbs and nouns, attachment preference of adverbs to sentential, verbal or nominal nodes, attachment preference of PPs to a verbal or nominal node, etc.",
"cite_spans": [
{
"start": 245,
"end": 267,
"text": "(Lari and Young, 1990)",
"ref_id": "BIBREF18"
},
{
"start": 324,
"end": 347,
"text": "(Dempster et al., 1977)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The current research is related to semisupervised training paradigms like self-trainingthese methods are currently being explored to improve the performance of existing PCFG models by utilizing unlabeled data. For example, Mc-Closkey et al. (2006) achieve a 1.1% improvement in labeled bracketing f-score by the use of unlabeled data to self-train the parser-reranker system from Charniak and Johnson (2005) . Earlier research on inside-outside estimation of PCFG models has reported some positive results as well (Pereira and Schabes, 1992; Carroll and Rooth, 1998; Beil et al., 1999; imWalde, 2002) . In some of these cases, an initial model is derived by other means -inside-outside is used to reestimate the initial model. However, many questions still remain open about its efficacy for PCFG re-estimation. Grammars used previously have not been treebank grammars (for e.g., Carroll and Rooth (1998) and Beil et al. (1999) used handcrafted grammars), hence these models could not be evaluated according to standardized evaluations in the parsing literature. In the current work, we use a Penn Treebank based grammar; hence all reestimated grammars can be evaluated using standardized criteria.",
"cite_spans": [
{
"start": 223,
"end": 247,
"text": "Mc-Closkey et al. (2006)",
"ref_id": null
},
{
"start": 380,
"end": 407,
"text": "Charniak and Johnson (2005)",
"ref_id": "BIBREF6"
},
{
"start": 514,
"end": 541,
"text": "(Pereira and Schabes, 1992;",
"ref_id": "BIBREF22"
},
{
"start": 542,
"end": 566,
"text": "Carroll and Rooth, 1998;",
"ref_id": "BIBREF4"
},
{
"start": 567,
"end": 585,
"text": "Beil et al., 1999;",
"ref_id": "BIBREF0"
},
{
"start": 586,
"end": 600,
"text": "imWalde, 2002)",
"ref_id": "BIBREF14"
},
{
"start": 880,
"end": 904,
"text": "Carroll and Rooth (1998)",
"ref_id": "BIBREF4"
},
{
"start": 909,
"end": 927,
"text": "Beil et al. (1999)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of the paper is organized as follows: First, we describe in brief the construction of an unlexicalized PCFG from the PTB. We then describe a procedure based on the inside-outside algorithm to re-estimate the lexical parameters of this PCFG from unlabeled Wall Street Journal data. Finally, we present evaluations of the reestimated models, based on labeled bracketing measures and on the detection of subcategorization frames of verbs: there is a 31.6% reduction in error for novel verbs and up to 8.97% reduction in overall subcategorization error.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We build an unlexicalized PCFG from the standard training sections of the PTB. As is common (Collins, 1997; Johnson, 1998; Klein and Manning, 2003; Schmid, 2006) , the treebank is first transformed in various ways, in order to give an accurate PCFG. In our framework, treebank trees are augmented with extra features; the methodology involves constructing a feature-constraint grammar from a context-free treebank backbone grammar. The detailed methodology is described in Deoskar and Rooth (2008) 1 . A PCFG is trained on the transformed treebank, with these added features incorporated into the PCFG's non-terminal categories. The framework affords us the flexibility to stipulate the features to be incorporated in the PCFG categories, as parameters of the PCFG.",
"cite_spans": [
{
"start": 92,
"end": 107,
"text": "(Collins, 1997;",
"ref_id": "BIBREF7"
},
{
"start": 108,
"end": 122,
"text": "Johnson, 1998;",
"ref_id": "BIBREF15"
},
{
"start": 123,
"end": 147,
"text": "Klein and Manning, 2003;",
"ref_id": "BIBREF16"
},
{
"start": 148,
"end": 161,
"text": "Schmid, 2006)",
"ref_id": "BIBREF24"
},
{
"start": 473,
"end": 497,
"text": "Deoskar and Rooth (2008)",
"ref_id": "BIBREF10"
},
{
"start": 498,
"end": 499,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unlexicalized treebank PCFG",
"sec_num": "2"
},
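To make the PCFG construction concrete, the following is a minimal sketch of reading rule and lexical frequencies off feature-augmented treebank trees. The nested-tuple tree representation, the toy labels and the function name are illustrative assumptions, not the actual transformation and training pipeline of Deoskar and Rooth (2008).

```python
from collections import defaultdict

def count_pcfg(trees):
    """Read syntactic and lexical rule frequencies off feature-augmented trees.

    Each tree is a nested (label, children) pair; a preterminal node has a
    single string child (the word), and labels are assumed to already carry
    the incorporated features (e.g. a verb tag such as 'VBD.n').
    """
    syntactic = defaultdict(float)   # (lhs, rhs labels) -> frequency
    lexical = defaultdict(float)     # (word, preterminal) -> frequency

    def visit(node):
        label, children = node
        if len(children) == 1 and isinstance(children[0], str):
            lexical[(children[0], label)] += 1.0
            return
        syntactic[(label, tuple(child[0] for child in children))] += 1.0
        for child in children:
            visit(child)

    for tree in trees:
        visit(tree)
    return syntactic, lexical

# toy example; 'VBD.n' stands in for a feature-augmented verb tag
tree = ("S", [("NP", ["they"]),
              ("VP", [("VBD.n", ["considered"]), ("NP", ["it"])])])
syntactic, lexical = count_pcfg([tree])
```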
{
"text": "Our features are largely designed to have a linguistically relevant interpretation 2 . For exam- 1 The reason for using this framework (as opposed to using available unlexicalized PCFGs) is that it allows us flexibility in designing features of interest, and can also be used for languages other than English with existing treebanks. As a measure of the quality of the transformed-PTB based PCFG, Table 1 gives the labeled bracketing scores on the standard test section 23 of the PTB, comparing them to unlexicalized PCFG scores in (Schmid, 2006) and (Klein and Manning, 2003 ) (K&M). The current PCFG f-score is comparable to the state-of-the-art in unlexicalized PCFGs ( (Schmid, 2006) , to our knowledge). We stopped grammar development when the f-score reached state-of-the-art since our goal was to use this grammar as the initial model and baseline for the unsupervised re-estimation procedure, described in the next section.",
"cite_spans": [
{
"start": 97,
"end": 98,
"text": "1",
"ref_id": null
},
{
"start": 532,
"end": 546,
"text": "(Schmid, 2006)",
"ref_id": "BIBREF24"
},
{
"start": 551,
"end": 575,
"text": "(Klein and Manning, 2003",
"ref_id": "BIBREF16"
},
{
"start": 673,
"end": 687,
"text": "(Schmid, 2006)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 397,
"end": 404,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Unlexicalized treebank PCFG",
"sec_num": "2"
},
{
"text": "As a basic unsupervised estimation method, we use standard inside-outside estimation of PCFGs, which realizes EM estimation (Lari and Young, 1990; Pereira and Schabes, 1992) . We use the notation I(C, e) to designate the new frequency model, computed via inside-outside from the corpus C by using a probability model based on the frequency model e 3 . The iterative inside-outside re-estimation procedure has the following simple form (Eq.1), where each successive frequency model e i+1 is estimated from the corpus C using a probability model determined by the previous frequency model e i . Our notation always refers to frelinguistic interpretation, but result in a good PCFG, such as a parent feature on some categories, following Johnson (1998) .",
"cite_spans": [
{
"start": 124,
"end": 146,
"text": "(Lari and Young, 1990;",
"ref_id": "BIBREF18"
},
{
"start": 147,
"end": 173,
"text": "Pereira and Schabes, 1992)",
"ref_id": "BIBREF22"
},
{
"start": 735,
"end": 749,
"text": "Johnson (1998)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Inside-Outside Re-estimation",
"sec_num": "3"
},
{
"text": "3 The inside-outside algorithm uses an existing grammar model and a raw text corpus (incomplete data) to obtain corresponding complete data (a set of analyses/parses for the corpus sentences). A new grammar model is then estimated from this complete data. See (Prescher, 2003) for an explanation using the standard EM notions of incomplete/complete data. quency models such as e i , rather than the relativefrequency probability models they determine 4 . e 1 = I(C, e 0 ) ... e i+1 = I(C, e i )",
"cite_spans": [
{
"start": 260,
"end": 276,
"text": "(Prescher, 2003)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Inside-Outside Re-estimation",
"sec_num": "3"
},
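A compact sketch of the standard iteration in Eq. 1. The `inside_outside` callback stands in for the expectation step (performed by Bitpar in the experiments reported below); its interface here is an assumption, not Bitpar's actual API.

```python
def reestimate_standard(corpus, e0, inside_outside, iterations=4):
    """Standard inside-outside re-estimation (Eq. 1): e_{i+1} = I(C, e_i).

    `inside_outside(corpus, model)` is assumed to return the expected
    frequency model for `corpus` under the probability model determined
    by `model`.
    """
    models = [e0]
    for _ in range(iterations):
        models.append(inside_outside(corpus, models[-1]))
    return models
```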
{
"text": "It is well-known that while lexicalization is useful, lexical parameters determined from the treebank are poorly estimated because of the sparseness of treebank data for particular words (e.g. Hindle and Rooth (1993) ). Gildea (2001) and Bikel (2004) show that removing bilexical dependencies hardly hurts the performance of the Collins Model2 parser, although there is the benefit of lexicalization in the form of lexico-syntactic dependencies -structures being conditioned on words. On the other hand, structural parameters are comparatively well-estimated from treebanks since they are not keyed to particular words. Thus, it might be beneficial to use a combination of supervised and unsupervised estimation for lexical parameters, while obtaining syntactic (structural) parameters solely by supervised estimation (i.e. from a treebank). The experiments in this paper are based on this idea. In an unlexicalised PCFG like the one described in \u00a72, it is easy to make the distinction between structural parameters (nonterminal rules) and lexical parameters (preterminal to terminal rules).",
"cite_spans": [
{
"start": 193,
"end": 216,
"text": "Hindle and Rooth (1993)",
"ref_id": "BIBREF13"
},
{
"start": 220,
"end": 233,
"text": "Gildea (2001)",
"ref_id": "BIBREF11"
},
{
"start": 238,
"end": 250,
"text": "Bikel (2004)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Interleaved Inside-Outside",
"sec_num": "3.1"
},
{
"text": "To this end, we define a modified inside-outside procedure in which a frequency transformation T (c, t) is interleaved between the iterations of the standard inside-outside procedure. The form of this interleaved procedure is shown in Eq. 2. In Eq. 2, t designates a smoothed treebank model (the smoothing procedure is described later in \u00a73.1.1). This smoothed treebank model is used as the prior model for the inside-outside re-estimation procedure. For each iteration i, c i represent models obtained by inside-outside estimation. d i represent derived models obtained by performing a transformation T on c i . The transformation T combines the re-estimated model c i and the smoothed tree- 4 We use a frequency-based notation because we use outof-the-box software Bitpar (Schmid, 2004) which implements inside-outside estimation -Bitpar reads in frequency models and converts them to relative frequency models. We justify the use of the frequency-based notation by ensuring that all marginal frequencies in the treebank model are always preserved in all other models. bank model t (hence represented as T (c i , t)).",
"cite_spans": [
{
"start": 693,
"end": 694,
"text": "4",
"ref_id": null
},
{
"start": 774,
"end": 788,
"text": "(Schmid, 2004)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Interleaved Inside-Outside",
"sec_num": "3.1"
},
{
"text": "d 0 = t smoothed treebank model c 1 = I(C, d 0 ) estimation step d 1 = T (c 1 , t) transformation step ... c i+1 = I(C, d i ) estimation step d i+1 = T (c i+1 , t)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interleaved Inside-Outside",
"sec_num": "3.1"
},
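The interleaved procedure of Eq. 2 differs from the standard loop only in that every estimation step is followed by the transformation T. A minimal sketch, again with hypothetical `inside_outside` and `transform` callbacks:

```python
def reestimate_interleaved(corpus, t, inside_outside, transform, iterations=6):
    """Interleaved re-estimation (Eq. 2).

    t is the smoothed treebank model d_0; `transform(c, t)` implements T,
    copying syntactic parameters from t and mixing lexical parameters as in
    Eq. 4 below.
    """
    d = t                                  # d_0 = t
    derived = []
    for _ in range(iterations):
        c = inside_outside(corpus, d)      # estimation step:     c_{i+1} = I(C, d_i)
        d = transform(c, t)                # transformation step: d_{i+1} = T(c_{i+1}, t)
        derived.append(d)
    return derived
```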
{
"text": "(2) The lexical parameters for the treebank model t or the re-estimated models c i are represented as t(w, \u03c4, \u03b9) or c i (w, \u03c4, \u03b9), where w is the terminal word, \u03c4 is the PTB-style PoS tag, and \u03b9 is the sequence of additional features incorporated into the PoS tag (the entries in our lexicon have the form w.\u03c4.\u03b9 with an associated frequency). The transformation T preserves the marginal frequencies seen in the treebank model. A marginal tagincorporation frequency is defined by summation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interleaved Inside-Outside",
"sec_num": "3.1"
},
{
"text": "f (\u03c4, \u03b9) = w f (w, \u03c4, \u03b9).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interleaved Inside-Outside",
"sec_num": "3.1"
},
{
"text": "( 3)The transformation T is used to obtain the derived models d i and consists of two parts, corresponding to the syntactic and the lexical parameters of d i :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interleaved Inside-Outside",
"sec_num": "3.1"
},
{
"text": "\u2022 The syntactic parameters of d i are copied from t.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interleaved Inside-Outside",
"sec_num": "3.1"
},
{
"text": "\u2022 To obtain the lexical parameters of d i , lexical parameters from the treebank model t and lexical parameters from the re-estimated model are linearly combined, shown in Eq. 4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interleaved Inside-Outside",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "d i (w, \u03c4, \u03b9) = (1 \u2212 \u03bb \u03c4,\u03b9 )t(w, \u03c4, \u03b9) + \u03bb \u03c4,\u03b9ci (w, \u03c4, \u03b9)",
"eq_num": "(4)"
}
],
"section": "Interleaved Inside-Outside",
"sec_num": "3.1"
},
{
"text": "where \u03bb \u03c4,\u03b9 is a parameter with 0 < \u03bb \u03c4,\u03b9 < 1 which may depend on the tag and incorporation. The termc i (w, \u03c4, \u03b9) in Eq. 4 is obtained by scaling the frequencies in c i (w, \u03c4, \u03b9), as shown in Eq. 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interleaved Inside-Outside",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "c i (w, \u03c4, \u03b9) = t(\u03c4, \u03b9) c i (\u03c4, \u03b9) c i (w, \u03c4, \u03b9).",
"eq_num": "(5)"
}
],
"section": "Interleaved Inside-Outside",
"sec_num": "3.1"
},
{
"text": "In terms of probability models determined from the frequency models, the effect of T is to allocate a fixed proportion of the probability mass for each \u03c4, \u03b9 to the corpus, and share it out among words w in proportion to relative frequencies c i (w,\u03c4,\u03b9) c i (\u03c4,\u03b9) in the inside-outside estimate c i . Eqs. 6 and 7 verify that marginals are preserved in the derived model d.",
"cite_spans": [
{
"start": 257,
"end": 262,
"text": "(\u03c4,\u03b9)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Interleaved Inside-Outside",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "c(\u03c4, \u03b9) = wc (w, \u03c4, \u03b9) = w t(\u03c4,\u03b9) c(\u03c4,\u03b9) c(w, \u03c4, \u03b9) = t(\u03c4,\u03b9) c(\u03c4,\u03b9) w c(w, \u03c4, \u03b9) = t(\u03c4,\u03b9) c(\u03c4,i) c(\u03c4, \u03b9) = t(\u03c4, \u03b9). (6) d(\u03c4, \u03b9) = w d(w, \u03c4, \u03b9) = w (1 \u2212 \u03bb \u03c4,\u03b9 )t(w, \u03c4, \u03b9) + \u03bb \u03c4,\u03b9c (w, \u03c4, \u03b9) = (1 \u2212 \u03bb \u03c4,\u03b9 ) w t(w, \u03c4, \u03b9) + \u03bb \u03c4,\u03b9 wc (w, \u03c4, \u03b9) = (1 \u2212 \u03bb \u03c4,\u03b9 )t(\u03c4, \u03b9) + \u03bb \u03c4,\u03b9c (\u03c4, \u03b9) = (1 \u2212 \u03bb \u03c4,\u03b9 )t(\u03c4, \u03b9) + \u03bb \u03c4,\u03b9 t(\u03c4, \u03b9) = t(\u03c4, \u03b9).",
"eq_num": "(7)"
}
],
"section": "Interleaved Inside-Outside",
"sec_num": "3.1"
},
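A minimal sketch of the lexical part of T, under the assumption that lexicons are plain dictionaries mapping (word, tag, incorporation) to frequencies; the function name and the toy data are illustrative. The closing comment checks the marginal-preservation property of Eqs. 6-7 on the toy example.

```python
from collections import defaultdict

def transform_lexicon(c_lex, t_lex, lam=0.5):
    """Lexical part of the transformation T (Eqs. 4-5).

    Both lexicons map (word, tag, incorporation) -> frequency.  The
    re-estimated lexicon c_lex is first rescaled so that its marginals match
    the treebank model t_lex (Eq. 5) and then interpolated with t_lex (Eq. 4).
    Syntactic parameters of the derived model are simply copied from t.
    """
    t_marg = defaultdict(float)   # t(tau, iota)
    c_marg = defaultdict(float)   # c_i(tau, iota)
    for (w, tau, iota), f in t_lex.items():
        t_marg[(tau, iota)] += f
    for (w, tau, iota), f in c_lex.items():
        c_marg[(tau, iota)] += f

    d_lex = defaultdict(float)
    for (w, tau, iota), f in t_lex.items():
        d_lex[(w, tau, iota)] += (1.0 - lam) * f
    for (w, tau, iota), f in c_lex.items():
        # assumes every (tau, iota) with treebank mass also receives corpus
        # mass, which the smoothing of section 3.1.1 guarantees
        scale = t_marg[(tau, iota)] / c_marg[(tau, iota)]   # Eq. 5
        d_lex[(w, tau, iota)] += lam * scale * f            # Eq. 4
    return dict(d_lex)

# Toy check of marginal preservation (Eqs. 6-7):
t = {("eat", "VB", "n"): 3.0, ("sleep", "VB", "i"): 2.0}
c = {("eat", "VB", "n"): 10.0, ("eat", "VB", "i"): 5.0, ("sleep", "VB", "i"): 1.0}
d = transform_lexicon(c, t)
# summing d over words gives 3.0 for ("VB", "n") and 2.0 for ("VB", "i"), as in t
```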
{
"text": "To initialize the iterative procedures, a smoothing scheme is required which allocates frequency to combinations of words w and PoS tags \u03c4 which are not present in the treebank model but are present in the corpus, and also to all possible incorporations of a PoS tag. Otherwise, if the unsmoothed treebank model (t 0 ) has zero frequency for some lexical parameter, the inside-outside estimate I(C, t 0 ) for that parameter would also be zero, and new lexical entries would never be induced.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing the treebank model",
"sec_num": "3.1.1"
},
{
"text": "The smoothed treebank model t is obtained from the unsmoothed model t 0 as follows. First a PoS tagger (Treetagger, (Schmid, 1994) ) is run on the unsupervised corpus C, which assigns PTB-style PoS tags to the corpus. Tokens of words and PoS tags are tabulated to obtain a frequency table g(w, \u03c4 ). Each frequency g(w, \u03c4 ) is split among possible incorporations \u03b9 in proportion to a ratio of marginal frequencies in t 0 :",
"cite_spans": [
{
"start": 116,
"end": 130,
"text": "(Schmid, 1994)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing the treebank model",
"sec_num": "3.1.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "g(w, \u03c4, \u03b9) = t 0 (\u03c4, \u03b9) t 0 (\u03c4 ) g(w, \u03c4 )",
"eq_num": "(8)"
}
],
"section": "Smoothing the treebank model",
"sec_num": "3.1.1"
},
{
"text": "The smoothed model t is defined as an interpolation of g and t 0 for lexical parameters as shown in 9, with syntactic parameters copied from t 0 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing the treebank model",
"sec_num": "3.1.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "t(w, \u03c4, \u03b9) = (1 \u2212 \u03bb \u03c4,\u03b9 )t 0 (w, \u03c4, \u03b9) + \u03bb \u03c4,\u03b9 g(w, \u03c4, \u03b9)",
"eq_num": "(9)"
}
],
"section": "Smoothing the treebank model",
"sec_num": "3.1.1"
},
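A sketch of the smoothing scheme of Eqs. 8-9, assuming the same dictionary representation as above and a count table g(w, τ) obtained from the PoS-tagged corpus; the names are illustrative.

```python
from collections import defaultdict

def smooth_treebank_model(t0_lex, g_tagged, lam=0.1):
    """Smooth the treebank lexicon t_0 with PoS-tagged corpus counts (Eqs. 8-9).

    t0_lex maps (word, tag, incorporation) -> frequency; g_tagged maps
    (word, tag) -> frequency obtained by running a PoS tagger on the raw
    corpus.  Eq. 8 splits each g(w, tau) over the incorporations of tau in
    proportion to t_0(tau, iota) / t_0(tau); Eq. 9 interpolates with t_0.
    """
    t0_ti = defaultdict(float)    # t_0(tau, iota)
    t0_t = defaultdict(float)     # t_0(tau)
    for (w, tau, iota), f in t0_lex.items():
        t0_ti[(tau, iota)] += f
        t0_t[tau] += f

    incorporations = defaultdict(list)           # tau -> [(iota, t_0(tau, iota))]
    for (tau, iota), f in t0_ti.items():
        incorporations[tau].append((iota, f))

    t_lex = defaultdict(float)
    for (w, tau, iota), f in t0_lex.items():
        t_lex[(w, tau, iota)] += (1.0 - lam) * f            # Eq. 9, treebank part
    for (w, tau), f in g_tagged.items():
        for iota, f_ti in incorporations[tau]:
            g_wti = f * f_ti / t0_t[tau]                    # Eq. 8
            t_lex[(w, tau, iota)] += lam * g_wti            # Eq. 9, corpus part
    return dict(t_lex)
```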
{
"text": "The treebank grammar is trained over sections 0-22 of the transformed PTB (minus about 7000 sentences held out for testing). Testset I contains 1331 sentences and is constructed as follows: First, we select 117 verbs whose frequency in PTB sections 0-22 is between 10-20 (mid-frequency verbs). All sentences containing occurrences of these verbs are held out from the training data to form Testset I. The effect of holding out these sentences is to make these 117 verbs novel (i.e. unseen in training). This testset is used to evaluate the learning of subcategorization frames of novel verbs. We also construct another testset (Testset II) by holding out every 10 th sentence in PTB sections 0-22 (4310 sentences). The corpus used for re-estimation is about 4 million words of unannotated Wall Street Journal text (year 1997) (sentence length<25 words). The reestimation was carried out using Bitpar (Schmid, 2004) for inside-outside estimation. The parameter \u03bb in Eq. 4 was set to 0.5 for all \u03c4 and \u03b9, giving equal weight to the treebank and the re-estimated lexicons. Starting from a smoothed treebank grammar t, we separately ran 6 iterations of the interleaved estimation procedure defined in Eq. 2, and 4 iterations of standard inside-outside estimation. This gave us two series of models corresponding to the two procedures.",
"cite_spans": [
{
"start": 900,
"end": 914,
"text": "(Schmid, 2004)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "3.2"
},
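The construction of Testset I can be sketched as follows, assuming the training sections are available as tagged sentences; identifying verbs by their surface form rather than by lemma, and the function name, are simplifying assumptions.

```python
from collections import Counter

def make_novel_verb_testset(sentences, low=10, high=20):
    """Hold out all sentences containing mid-frequency verbs (Testset I).

    `sentences` is a list of tagged sentences, each a list of (word, tag)
    pairs with PTB tags; verbs are identified by tags starting with 'VB'.
    Verbs whose corpus frequency lies in [low, high] become 'novel': every
    sentence containing one of them goes to the test set.
    """
    verb_freq = Counter(w.lower() for sent in sentences
                        for w, tag in sent if tag.startswith("VB"))
    novel = {v for v, f in verb_freq.items() if low <= f <= high}

    train, test = [], []
    for sent in sentences:
        if any(tag.startswith("VB") and w.lower() in novel for w, tag in sent):
            test.append(sent)
        else:
            train.append(sent)
    return train, test, novel
```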
{
"text": "As a basic evaluation of the re-estimated grammars, we report the labeled bracketing scores on the standard test section 23 of the PTB (Table 2) . Using the re-estimated models, maximum probability (viterbi) parses were obtained for all sentences in sec. 23, after stripping away the treebank annotation, including the pre-terminal tag. The baseline is the treebank model t 0t 5 . The scores for re-estimated grammars from successive iterations are under columns It 1, It 2, etc. All models obtained using the interleaved procedure show an improvement over the baseline. The best model is obtained after 2 iterations, after which the score reduces a little. Statistically significant improvements are marked with *, with p<0.005 for recall and p<0.0001 for precision for the best model. Table 2 also shows scores for grammars estimated using the standard inside-outside procedure. The first re-estimated model is better than any model obtained from either procedure. Notice however, the disparity in precision and recall -precision is much lower than recall. This is not surprising; inside-outside is known to converge to incorrect solutions for PCFGs (Lari and Young, 1990; de Marcken, 1995) . This causes the f-score to deteriorate in successive iterations. The improvement in labeled bracketing f-score for the interleaved procedure is small, but is an encouraging result. The benefit to the re-estimated models comes only from better estimates of lexical parameters. We expect that re-estimation will benefit parameters associated with low frequency words -lexical parameters for high frequency words are bound to be estimated accurately from the treebank. We did not expect a large impact on labeled bracketing scores, given that low frequency words have correspondingly few occurrences in this test dataset. It is possible that the impact on f-score will be higher for a test set from a different domain. Note also that the size of our unlabeled training corpus (\u223c4M words) is relatively small -only about 4 times the PTB.",
"cite_spans": [
{
"start": 1152,
"end": 1174,
"text": "(Lari and Young, 1990;",
"ref_id": "BIBREF18"
},
{
"start": 1175,
"end": 1192,
"text": "de Marcken, 1995)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 135,
"end": 144,
"text": "(Table 2)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Labeled Bracketing Results",
"sec_num": "4"
},
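The labeled bracketing scores in Table 2 are standard PARSEVAL-style precision, recall and f-score; a minimal sketch over (label, start, end) constituent spans, with names chosen for illustration:

```python
from collections import Counter

def labeled_bracketing_prf(gold_spans, test_spans):
    """Labeled bracketing precision/recall/f-score over (label, start, end) spans.

    Both arguments are lists of constituent spans pooled over a test section;
    duplicate spans are matched at most once, as in standard PARSEVAL scoring.
    """
    gold, test = Counter(gold_spans), Counter(test_spans)
    matched = sum((gold & test).values())
    precision = matched / sum(test.values()) if test else 0.0
    recall = matched / sum(gold.values()) if gold else 0.0
    f = (2 * precision * recall / (precision + recall)
         if precision + recall else 0.0)
    return precision, recall, f
```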
{
"text": "We focus on learning verbal subcategorization, as a typical case of lexico-syntactic information. The subcategorization frame (SF) of verbs is a parameter of our PCFG -verbal tags in the PCFG are followed by an incorporation sequence that denotes the SF for that verb. We evaluate the re-estimated models on the task of detecting correct SFs of verbs in maximum-probability (viterbi) parses obtained using the models. All tokens of verbs and their preterminal symbols (consisting of a PoS tag and an incorporation sequence encoding the SF) are extracted from the viterbi parses of sentences in a testset. This tag-SF sequence is compared to a gold standard, and is scored correct if the two match exactly. PoS errors are scored as incorrect, even if the SF is correct. The gold standard is obtained from the transformed PTB trees.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Verbal Subcategorization",
"sec_num": "5"
},
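The SF evaluation reduces to an exact-match comparison of augmented preterminals; a minimal sketch, assuming the verb tokens of a testset have been extracted in parallel from the viterbi parses and the gold-standard (transformed PTB) trees:

```python
def subcat_error_rate(test_preterminals, gold_preterminals):
    """Fraction of verb tokens whose PoS tag + SF incorporation does not
    exactly match the gold standard (PoS errors also count as incorrect).

    Both arguments are parallel lists of preterminal strings such as
    'VBD.s.e.sc', one entry per verb token in the testset.
    """
    assert len(test_preterminals) == len(gold_preterminals) > 0
    wrong = sum(1 for t, g in zip(test_preterminals, gold_preterminals) if t != g)
    return wrong / len(gold_preterminals)
```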
{
"text": "The incorporation sequence corresponding to the SF consists of 3 features: The first one denotes basic categories of subcategorization such as transitive, intransitive, ditransitive, NP-PP, S, etc. The second feature denotes, for clausal complements, the type of clause (finite, infinite, small clause, Figure 1 : A subcat. frame for control verb want.",
"cite_spans": [],
"ref_spans": [
{
"start": 303,
"end": 311,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Verbal Subcategorization",
"sec_num": "5"
},
{
"text": "etc.). The third feature encodes the nature of the subject of the clausal complements (empty category or non-empty). For example, the verb considered in the treebank sentence They are officially considered strategic gets a preterminal sequence of VBD.s.e.sc. This sequence indicates a past tense verb (VBD) with a clausal complement (s) which has an empty subject (e) since the sentence is passive and is of the type small clause (sc). A control verb (with an infinitival complement) in the sentence fragment ..did not want to fund X.. gets the frame s.e.to (see Fig. 1 for an example of a verb with its complement, as parsed by our PCFG). We have a total of 81 categories of SFs (without counting specific prepositions for prepositional frames), making fairly fine-grained distinctions of verbal categories.",
"cite_spans": [],
"ref_spans": [
{
"start": 563,
"end": 569,
"text": "Fig. 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Verbal Subcategorization",
"sec_num": "5"
},
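For concreteness, the dot-separated preterminal encoding can be unpacked as follows; the field names in the comment follow the description above and are otherwise an assumption.

```python
def decode_preterminal(symbol):
    """Split an augmented preterminal such as 'VBD.s.e.sc' into its parts:
    the PTB PoS tag followed by the SF incorporation sequence (for this
    example: clausal-complement frame 's', empty subject 'e', small clause 'sc')."""
    tag, *incorporation = symbol.split(".")
    return tag, incorporation

tag, sf = decode_preterminal("VBD.s.e.sc")   # ('VBD', ['s', 'e', 'sc'])
```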
{
"text": "We measure the error rate in the detection of the subcategorization frame of 1360 tokens of 117 verbs in Testset I. Recall from \u00a73.2 that these verbs are novel verbs with respect to the treebank model. Table 3 shows this error rate (i.e. the fraction of test items which receive incorrect tagincorporations in viterbi parses) for various models obtained using the interleaved and standard reestimation procedures. t 0t1 is the treebank model t 0 with the test data from Testset I merged in (to account for unknown words) using the smoothing scheme given in Eq. 9. This model has no verb specific information for the test verbs. For each test verb, it has a smoothed SF distribution proportional to the SF distribution for all verbs of that tag. The baseline error is 33.36%. This means that there is enough information in the average distribution of all verbs to correctly assign the subcategorization frame to novel verbs in 66.64% cases. For the models obtained using the interleaved reestimation, the error rate falls to the lowest value of 22.81% for the model obtained in the 5 th iteration: an absolute reduction of 10.55 points, and a percentage error-reduction of 31.6%. The error reduction is statistically significant for all iterations compared to the baseline, with the 5 th iteration being also significantly better than the 1 st . The models obtained using standard re-estimation do not perform as well. Even for the model from the first iteration, whose labeled bracketing score was highest, the SF error is higher than the corresponding model from the interleaved procedure (possibly due to the low precision of this model). The error rate for the standard procedure starts to increase after the 2 nd iteration in contrast to the interleaved procedure.",
"cite_spans": [],
"ref_spans": [
{
"start": 202,
"end": 209,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Learning Subcat Frames of Novel Verbs",
"sec_num": "5.1"
},
{
"text": "While the re-estimation clearly results in gains in SF detection for novel verbs, we also perform an evaluation for all verbs (novel and non-novel) in a given testset (Testset II as described in \u00a73.2). The overall error reduction using the interleaved procedure is 8.97% (in Iteration 1). In order to better understand the relative efficacy of the supervised and unsupervised estimation for lexical items of different frequencies, we break up the set of test verbs into subsets based on their frequency of occurrence in the PTB training data, and evaluate them sepa- rately. Table 4 shows the error rates for verbs divided into these sets. We present error rates only for Iteration 1 in Table 4 , since most of the error reduction takes place with the 1 st iteration. Statistically significant reductions are marked with * (confidence>99.9) and ** (>95). The second row shows error rates for verbs which have zero frequency in the treebank training data (i.e. novel verbs): Note that this error reduction is much less than the 31.6% in Testset I. These verbs are truly rare and hence have much fewer occurrences in the unlabeled corpus than Testset I verbs, which were artificially made novel (but are really midfrequency verbs). This might indicate that error rates will decrease further if the size of the unlabeled corpus is increased. There is substantial error reduction for low-frequency verbs (<21 PTB occurrences). This is not hard to understand: the PTB does not provide enough data to have good parameter estimates for these verbs. For mid-tohigh frequency verbs (from 21 to 500), the benefit of the unsupervised procedure reduces, though error reduction is still positive. Surprisingly, the error reduction for very high frequency verbs (more than 500 occurrences in the treebank) is also fairly high: we expected that parameters for high frequency words would benefit the least from the unsupervised estimation, given that they are already common enough in the PTB to be accurately estimated from it. The high frequency verbs (>500 occurrences) consist of very few types-mainly auxiliaries, some light verbs (make, do) and a few others (rose, say). It is possible that re-estimation from large data is beneficial for light verbs since they have a larger number of frames. The frequency range 2K-5K consists solely of auxiliary verbs. Examination of viterbi parses shows that improved results are largely due to better detection of predicative frames in re-estimated models.",
"cite_spans": [],
"ref_spans": [
{
"start": 575,
"end": 582,
"text": "Table 4",
"ref_id": "TABREF5"
},
{
"start": 687,
"end": 694,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Analysis of subcategorization learning",
"sec_num": "5.2"
},
{
"text": "To measure the impact of more unlabeled training data, we ran the interleaved procedure with 8M words of WSJ text. The SF error for novel verbs reduces to 22.06% in the 2 nd iteration (significantly different from the best error of 22.81% in the 5 th iteration for 4M words of training data)). We also get an improved overall error reduction of 9.9% on Testset II for the larger training data, as compared to 8.97% previously.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of subcategorization learning",
"sec_num": "5.2"
},
{
"text": "While there has been substantial previous work on the task of SF acquisition from corpora (Brent (1991) ; Manning (1993) ; Briscoe and Carroll (1997) ; Korhonen (2002) , amongst others), we find that relatively few parsing-based evaluations are reported. Since their goal is to build probabilistic SF dictionaries, these systems are evaluated either against existing dictionaries, or on distributional similarity measures. Most are evaluated on testsets of high-frequency verbs (unlike the present work), in order to gauge the effectiveness of the acquisition strategy. Briscoe and Carroll (1997) report a token-based evaluation for seven verb types-their system gets an average recall accuracy of 80.9% for these verbs (which appear to be high-frequency verbs). This is slightly lower than the present system, which has an overall accuracy of 83.16% (on Testset II (It 1), as shown in Table 4 ). However, for low frequency verbs (exemplars <10) they report that their results are around chance. A parsing evaluation of their lexicon using an unlexicalized grammar as baseline, on 250 sentences from the Suzanne treebank gave a small (but not statistically significant) improvement in f-score (from 71.49 to 72.14%). Korhonen (2002) reports a parsing-based evaluation on 500 test sentences. She found a small increase in f-score (of grammatical relations markup) from 76.03 to 76.76. In general PARSE-VAL measures are not very sensitive to subcategorization ; they therefore use a dependency-based evaluation. In the present re-search as well, we obtain statistically significant but quite small improvements in f-score ( \u00a74). Since we are interested in acquisition of PCFG lexicons, we focus our evaluations on verbal subcategorization of token occurrences of verbs in viterbi parses.",
"cite_spans": [
{
"start": 90,
"end": 103,
"text": "(Brent (1991)",
"ref_id": "BIBREF2"
},
{
"start": 106,
"end": 120,
"text": "Manning (1993)",
"ref_id": "BIBREF19"
},
{
"start": 123,
"end": 149,
"text": "Briscoe and Carroll (1997)",
"ref_id": "BIBREF3"
},
{
"start": 152,
"end": 167,
"text": "Korhonen (2002)",
"ref_id": "BIBREF17"
},
{
"start": 570,
"end": 596,
"text": "Briscoe and Carroll (1997)",
"ref_id": "BIBREF3"
},
{
"start": 1217,
"end": 1232,
"text": "Korhonen (2002)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 886,
"end": 893,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "5.3"
},
{
"text": "We have presented a methodology for incorporating additional lexical information from unlabeled data into an unlexicalized treebank PCFG. We obtain a large error reduction (31.6%) in SF detection for novel verbs as compared to a treebank baseline. The interleaved re-estimation scheme gives a significant increase in labeled bracketing scores from a relatively small unlabeled corpus. The interleaved scheme has an advantage over standard inside-outside PCFG estimation, as measured both by labeled bracketing scores and on the task of detecting SFs of novel verbs. Since our re-estimated models are treebank models, all evaluations are against treebank standards.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "The grammar we worked with has very few incorporated features compared to the grammar used by, say Klein and Manning (2003) . It would make sense to experiment with grammars with much richer sets of incorporated features. Features related to structure-selection by categories other than verbs -nouns, adverbs and adjectives -might be beneficial. These features should be incorporated as PCFG parameters, similar to verbal subcategorization. Experiments with 8 million words of training data gave significantly better results than with 4 million words, indicating that larger training sets will be beneficial as well. It would also be useful to make the transformation T of lexical parameters sensitive to treebank frequency of words. For instance, more weight should be given to the treebank model rather than the corpus model for mid-to-high frequency words, by making the parameter \u03bb in T sensitive to frequency.",
"cite_spans": [
{
"start": 99,
"end": 123,
"text": "Klein and Manning (2003)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
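The suggestion to make λ sensitive to treebank frequency could be instantiated, for example, by a simple decreasing schedule; this is purely illustrative and not part of the reported experiments.

```python
def frequency_sensitive_lambda(treebank_freq, base=0.5, rate=0.01):
    """One possible schedule for the mixing weight of Eq. 4: the more often a
    word is seen in the treebank, the smaller lambda becomes, i.e. the more
    weight the treebank model receives relative to the corpus model."""
    return base / (1.0 + rate * treebank_freq)

# a verb seen 500 times in the PTB keeps lambda ~ 0.08; an unseen verb gets 0.5
```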
{
"text": "This methodology is relevant to the task of domain-adaption. Hara et al. (2007) find that retraining a model of HPSG lexical entry assignments is more critical for domain adaptation than re-training a structural model alone. Our PCFG captures many of the important dependencies captured in a framework like HPSG; in addition, we can use unlabeled data from a new domain in an unsupervised fashion for re-estimating lexical parameters, an important consideration in domainadaption. Preliminary experiments on this task us-ing New York Times unlabeled data with the PTBtrained PCFG show promising results.",
"cite_spans": [
{
"start": 61,
"end": 79,
"text": "Hara et al. (2007)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "This baseline is slightly lower than that reported in Table 1 due to holding out an additional 7000 sentences from the treebank training set. In order to accommodate unknown words from the test data (sec 23), the treebank model t0 is smoothed in a manner similar to that shown in Eq. 9, with the test words (tagged using Treetagger) forming g(w, \u03c4 ) and \u03bb = 0.1. A testset is always merged with a given model in this manner before parsing, to account for unknown words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "I am grateful to Mats Rooth for extensive comments and guidance during the course of this research. The inside-outside re-estimation was conducted using the resources of the Cornell University Center for Advanced Computing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Inside-outside estimation of a lexicalized PCFG for German",
"authors": [
{
"first": "G",
"middle": [],
"last": "Beil",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Carroll",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Prescher",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Riezler",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Rooth",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the 37th meeting of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Beil, G. Carroll, D. Prescher, S. Riezler, and M. Rooth. 1999. Inside-outside estimation of a lexicalized PCFG for German. In Proceedings of the 37th meeting of ACL.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Automatic acquisition of subcategorization frames from untagged text",
"authors": [
{
"first": "M",
"middle": [],
"last": "Brent",
"suffix": ""
}
],
"year": 1991,
"venue": "Proceedings of the 29th meeting of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Brent. 1991. Automatic acquisition of subcat- egorization frames from untagged text. In Pro- ceedings of the 29th meeting of ACL.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Automatic Extraction of Subcategorization from Corpora",
"authors": [
{
"first": "Ted",
"middle": [],
"last": "Briscoe",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Carroll",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the 5th ACL Conference on Applied NLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ted Briscoe and John Carroll. 1997. Automatic Extraction of Subcategorization from Corpora. In Proceedings of the 5th ACL Conference on Applied NLP.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Valence induction with a head-lexicalized PCFG",
"authors": [
{
"first": "G",
"middle": [],
"last": "Carroll",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Rooth",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Carroll and M. Rooth. 1998. Valence induction with a head-lexicalized PCFG. In Proceedings of EMNLP 1998.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Can subcategorization probabilities help parsing",
"authors": [
{
"first": "J",
"middle": [],
"last": "Carroll",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Minnen",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Briscoe",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of 6th ACL/SIGDAT Workshop on Very Large Corpora",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Carroll, G. Minnen, and E. Briscoe. 1998. Can subcategorization probabilities help parsing. In Proceedings of 6th ACL/SIGDAT Workshop on Very Large Corpora.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Coarse-to-fine n-best parsing and MaxEnt discriminative reranking",
"authors": [
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of 43rd meeting of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eugene Charniak and Mark Johnson. 2005. Coarse-to-fine n-best parsing and MaxEnt dis- criminative reranking. In Proceedings of 43rd meeting of ACL.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Three generative, lexicalised models for statistical parsing",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the 35th meeting of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins. 1997. Three generative, lexi- calised models for statistical parsing. In Pro- ceedings of the 35th meeting of ACL.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "On the unsupervised induction of Phrase Structure grammars",
"authors": [
{
"first": "",
"middle": [],
"last": "Carl De Marcken",
"suffix": ""
}
],
"year": 1995,
"venue": "3rd Workshop on Very Large Corpora",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carl de Marcken. 1995. On the unsupervised in- duction of Phrase Structure grammars. In 3rd Workshop on Very Large Corpora.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Maximum likelihood estimation from incomplete data via the EM algorithm",
"authors": [
{
"first": "A",
"middle": [
"P"
],
"last": "Dempster",
"suffix": ""
},
{
"first": "N",
"middle": [
"M"
],
"last": "Laird",
"suffix": ""
},
{
"first": "D",
"middle": [
"B"
],
"last": "Rubin",
"suffix": ""
}
],
"year": 1977,
"venue": "J. Royal Statistical Society",
"volume": "39",
"issue": "B",
"pages": "1--38",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. P. Dempster, N. M. Laird, and D. B. Rubin. 1977. Maximum likelihood estimation from in- complete data via the EM algorithm. J. Royal Statistical Society, 39(B):1-38.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Induction of Treebank-Aligned Lexical Resources",
"authors": [
{
"first": "Tejaswini",
"middle": [],
"last": "Deoskar",
"suffix": ""
},
{
"first": "Mats",
"middle": [],
"last": "Rooth",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of 6th LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tejaswini Deoskar and Mats Rooth. 2008. Induc- tion of Treebank-Aligned Lexical Resources. In Proceedings of 6th LREC.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Corpus Variation and Parser Performance",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Gildea. 2001. Corpus Variation and Parser Performance. In Proceedings of EMNLP 2001.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Evaluating Impact of Re-training a Lexical Disambiguation Model on Domain Adaptation of an HPSG Parser",
"authors": [
{
"first": "T",
"middle": [],
"last": "Hara",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Miyao",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Tsujii",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 10th International Conference on Parsing Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Hara, Y. Miyao, and J. Tsujii. 2007. Evalu- ating Impact of Re-training a Lexical Disam- biguation Model on Domain Adaptation of an HPSG Parser. In Proceedings of the 10th Inter- national Conference on Parsing Technologies.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Structural ambiguity and lexical relations",
"authors": [
{
"first": "Donald",
"middle": [],
"last": "Hindle",
"suffix": ""
},
{
"first": "Mats",
"middle": [],
"last": "Rooth",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "18",
"issue": "2",
"pages": "103--120",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Donald Hindle and Mats Rooth. 1993. Structural ambiguity and lexical relations. Computational Linguistics, 18(2):103-120.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A Subcategorisation Lexicon for German Verbs induced from a Lexicalised PCFG",
"authors": [
{
"first": "Sabine",
"middle": [],
"last": "Schulte Imwalde",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sabine Schulte imWalde. 2002. A Subcategori- sation Lexicon for German Verbs induced from a Lexicalised PCFG. In Proceedings of LREC 2002.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "PCFG models of linguistic tree representations",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 1998,
"venue": "Computational Linguistics",
"volume": "",
"issue": "4",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Johnson. 1998. PCFG models of linguistic tree representations. Computational Linguistics, 24(4).",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Accurate unlexicalized parsing",
"authors": [
{
"first": "D",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 41st ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Klein and C. Manning. 2003. Accurate unlexi- calized parsing. In Proceedings of the 41st ACL.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Subcategorization Acquisition",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anna Korhonen. 2002. Subcategorization Acqui- sition. Ph.D. thesis, Univ. of Cambridge.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "The estimation of stochastic context-free grammars using the Inside-Outside algorithm",
"authors": [
{
"first": "K",
"middle": [],
"last": "Lari",
"suffix": ""
},
{
"first": "S",
"middle": [
"J"
],
"last": "Young",
"suffix": ""
}
],
"year": 1990,
"venue": "Computer Speech and Language",
"volume": "4",
"issue": "",
"pages": "35--56",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Lari and S. J. Young. 1990. The estimation of stochastic context-free grammars using the Inside-Outside algorithm. Computer Speech and Language, 4:35-56.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Automatic acquisition of a large subcategorization dictionary from corpora",
"authors": [
{
"first": "C",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of the 31st meeting of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Manning. 1993. Automatic acquisition of a large subcategorization dictionary from corpora. In Proceedings of the 31st meeting of ACL.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Building a Large Annotated Corpus of English: The Penn Treebank",
"authors": [
{
"first": "M",
"middle": [
"P"
],
"last": "Marcus",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Santorini",
"suffix": ""
},
{
"first": "M",
"middle": [
"A"
],
"last": "Marcinkiewicz",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "313--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. P. Marcus, B. Santorini, and M. A. Marcinkiewicz. 1993. Building a Large Anno- tated Corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313-330.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Effective Self-Training for Parsing",
"authors": [
{
"first": "D",
"middle": [],
"last": "Mccloskey",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Charniak",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of HLT-NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. McCloskey, E. Charniak, and M. Johnson. 2006. Effective Self-Training for Parsing. In Proceedings of HLT-NAACL 2006.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "A Tutorial on the Expectation-Maximization Algorithm Including Maximum-Likelihood Estimation and EM Training of Probabilistic Context-Free Grammars. ESSLLI 2003. Helmut Schmid",
"authors": [
{
"first": "Schabes",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of International Conference on New Methods in Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pereira and Schabes. 1992. Inside-Outside re- estimation from partially bracketed corpora. In Proceedings of the 30th meeting of ACL. Detlef Prescher. 2003. A Tutorial on the Expectation-Maximization Algorithm Includ- ing Maximum-Likelihood Estimation and EM Training of Probabilistic Context-Free Gram- mars. ESSLLI 2003. Helmut Schmid. 1994. Probabilistic Part-of- Speech Tagging Using Decision Trees. In Pro- ceedings of International Conference on New Methods in Language Processing.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Efficient Parsing of Highly Ambiguous Context-Free Grammars with Bit Vectors",
"authors": [
{
"first": "Helmut",
"middle": [],
"last": "Schmid",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 20th COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Helmut Schmid. 2004. Efficient Parsing of Highly Ambiguous Context-Free Grammars with Bit Vectors. In Proceedings of the 20th COLING.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Trace Prediction and Recovery with Unlexicalised PCFGs and Slash Features",
"authors": [
{
"first": "Helmut",
"middle": [],
"last": "Schmid",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 21st Conference on Computational Linguistics (COLING)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Helmut Schmid. 2006. Trace Prediction and Re- covery with Unlexicalised PCFGs and Slash Features. In Proceedings of the 21st Conference on Computational Linguistics (COLING).",
"links": null
}
},
"ref_entries": {
"TABREF3": {
"type_str": "table",
"html": null,
"text": "Subcat. error for novel verbs (Testset I).",
"content": "<table/>",
"num": null
},
"TABREF5": {
"type_str": "table",
"html": null,
"text": "",
"content": "<table/>",
"num": null
}
}
}
}