{
"paper_id": "U11-1010",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:09:55.163738Z"
},
"title": "Frontier Pruning for Shift-Reduce CCG Parsing",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Merity",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Sydney NSW 2006",
"location": {
"country": "Australia"
}
},
"email": "[email protected]"
},
{
"first": "James",
"middle": [
"R"
],
"last": "Curran",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Sydney NSW 2006",
"location": {
"country": "Australia"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We apply the graph-structured stack (GSS) to shift-reduce parsing in a Combinatory Categorial Grammar (CCG) parser. This allows the shift-reduce parser to explore all possible parses in polynomial time without resorting to heuristics, such as beam search. The GSSbased shift-reduce parser is 34% slower than CKY in the finely-tuned C&C parser. We perform frontier pruning on the GSS, increasing the parsing speed to be competitive with the C&C parser with a small accuracy penalty.",
"pdf_parse": {
"paper_id": "U11-1010",
"_pdf_hash": "",
"abstract": [
{
"text": "We apply the graph-structured stack (GSS) to shift-reduce parsing in a Combinatory Categorial Grammar (CCG) parser. This allows the shift-reduce parser to explore all possible parses in polynomial time without resorting to heuristics, such as beam search. The GSSbased shift-reduce parser is 34% slower than CKY in the finely-tuned C&C parser. We perform frontier pruning on the GSS, increasing the parsing speed to be competitive with the C&C parser with a small accuracy penalty.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Parsing is a vital component of sophisticated natural language processing (NLP) systems that require deep and accurate semantic interpretation, including question answering and summarisation. Unfortunately, the complexity of natural languages results in substantial ambiguity. For even a typical sentence, thousands of potential analyses may be considered by a wide-coverage parser, making parsing impractical for large-scale applications.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Several methods have been proposed to improve parsing speed, including supertagging (Bangalore and Joshi, 1999; Clark and Curran, 2004; Kummerfeld et al., 2010) , coarse-to-fine parsing (Charniak and Johnson, 2005; Pauls and Klein, 2009) , chart repair (Djordjevic, 2006) , chart constraints (Roark and Hollingshead, 2009) , structure caching (Dawborn and Curran, 2009) and chart pruning (Zhang et al., 2010) . These heuristic methods offer a tradeoff between accuracy and speed. A* parsing (Klein and Manning, 2003) offers speed increases with no reduction in accuracy. For parsers optimised for speed, the overhead required by additional efficiency techniques can exceed the speed gains they provide (Dawborn and Curran, 2009) . As mistakes made in the parsing phase propagate to later stages, high speed but low accuracy parsers may not be useful in NLP systems (Chang et al., 2006) .",
"cite_spans": [
{
"start": 84,
"end": 111,
"text": "(Bangalore and Joshi, 1999;",
"ref_id": "BIBREF1"
},
{
"start": 112,
"end": 135,
"text": "Clark and Curran, 2004;",
"ref_id": "BIBREF4"
},
{
"start": 136,
"end": 160,
"text": "Kummerfeld et al., 2010)",
"ref_id": "BIBREF17"
},
{
"start": 186,
"end": 214,
"text": "(Charniak and Johnson, 2005;",
"ref_id": "BIBREF3"
},
{
"start": 215,
"end": 237,
"text": "Pauls and Klein, 2009)",
"ref_id": "BIBREF21"
},
{
"start": 253,
"end": 271,
"text": "(Djordjevic, 2006)",
"ref_id": "BIBREF9"
},
{
"start": 292,
"end": 322,
"text": "(Roark and Hollingshead, 2009)",
"ref_id": "BIBREF23"
},
{
"start": 343,
"end": 369,
"text": "(Dawborn and Curran, 2009)",
"ref_id": "BIBREF8"
},
{
"start": 388,
"end": 408,
"text": "(Zhang et al., 2010)",
"ref_id": "BIBREF31"
},
{
"start": 491,
"end": 516,
"text": "(Klein and Manning, 2003)",
"ref_id": "BIBREF16"
},
{
"start": 702,
"end": 728,
"text": "(Dawborn and Curran, 2009)",
"ref_id": "BIBREF8"
},
{
"start": 865,
"end": 885,
"text": "(Chang et al., 2006)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we modify the C&C (Clark and Curran, 2007) Combinatory Categorial Grammar (CCG) parser to enable shift-reduce (SR) parsing. The Cocke-Kasami-Younger (CKY) algorithm (Kasami, 1965; Younger, 1967) is replaced with the shiftreduce algorithm (Aho and Ullman, 1972) . However, back-tracking in shift-reduce parsers make them exponential in the worst case.",
"cite_spans": [
{
"start": 33,
"end": 57,
"text": "(Clark and Curran, 2007)",
"ref_id": "BIBREF5"
},
{
"start": 180,
"end": 194,
"text": "(Kasami, 1965;",
"ref_id": "BIBREF15"
},
{
"start": 195,
"end": 209,
"text": "Younger, 1967)",
"ref_id": "BIBREF29"
},
{
"start": 253,
"end": 275,
"text": "(Aho and Ullman, 1972)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To eliminate this duplication of work, a graphstructured stack (GSS; Tomita, 1988) is employed. This is the equivalent, for shift-reduce parsing, of the chart in CKY parsing, which stores all possible parse states compactly and enables polynomial time worst-case complexity. Due to the incremental nature of shift-reduce parsing, we can perform pruning of the parse state in the process of considering the next word (the frontier). Our frontier pruning model is an averaged perceptron trained to recognise the highest-scoring derivation that the C&C parser would have selected.",
"cite_spans": [
{
"start": 69,
"end": 82,
"text": "Tomita, 1988)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "By eliminating unlikely derivations , we substantially decrease the amount of ambiguity that the parser is required to handle.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The GSS SR parser considers all the derivations that the C&C parser would consider, but is 34% slower. When frontier pruning is applied, incremental parsing speed is improved by 39% relative to the GSS parser with a negligible impact on accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Combinatory Categorial Grammar (CCG; Steedman, 2000) is a lexicalised grammar formalism that incorporates both constituent structure and dependency information into its analyses.",
"cite_spans": [
{
"start": 37,
"end": 52,
"text": "Steedman, 2000)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CCG Parsing",
"sec_num": "2"
},
{
"text": "In CCG, each word is assigned a category which encodes sub-categorisation information. Categories may be atomic, such as N and S ; or complex, such as NP /N for a word that requires an N to the right to produce an NP . Similarly, S \\NP is an intransitive verb and produces a sentence when an NP is found to the left. Finally, a transitive verb receives (S \\NP )/NP as it consumes an NP on the right, producing a verb phrase. Figure 1 shows two examples of CCG derivations with lexical categories assigned to each word. Both examples also provide the word saw with the (S \\NP )/NP category.",
"cite_spans": [],
"ref_spans": [
{
"start": 425,
"end": 433,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "CCG Parsing",
"sec_num": "2"
},
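To make the category mechanics concrete, here is a minimal sketch (ours, not the C&C implementation) of forward and backward application; the tuple encoding and function names are illustrative assumptions.

```python
# A sketch of CCG function application over tuple-encoded categories.
# Complex categories are (result, slash, argument) tuples; atomic
# categories are plain strings. The encoding is illustrative.

def forward_apply(left, right):
    """X/Y applied to Y yields X, e.g. NP/N + N -> NP."""
    if isinstance(left, tuple) and left[1] == "/" and left[2] == right:
        return left[0]
    return None

def backward_apply(left, right):
    """Y followed by X\\Y yields X, e.g. NP + S\\NP -> S."""
    if isinstance(right, tuple) and right[1] == "\\" and right[2] == left:
        return right[0]
    return None

saw = (("S", "\\", "NP"), "/", "NP")                   # transitive verb (S\NP)/NP
assert forward_apply(saw, "NP") == ("S", "\\", "NP")   # "saw John" -> S\NP
assert backward_apply("NP", ("S", "\\", "NP")) == "S"  # "I saw John" -> S
```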
{
"text": "Lexicalised grammars typically have a small set of rules (the combinatory rules in CCG) and instead rely on categories that describe a word's syntactic role in a sentence. In Figure 1 , the word with contains two separate categories indicating whether it modifies saw (first example) or John (second example). In a highly lexicalised grammar, a parser may need to explore a large search space of categories in order to select the correct category for each word. Bangalore and Joshi (1999) proposed supertagging, where each word is assigned a reduced set of categories by a sequence tagger, rather than all of the categories previously seen with that word. Our supertags are CCG categories, and so are much more detailed than POS tags. By limiting the number of supertags for each word, there is a massive reduction in the number of derivations. The effectiveness of supertagging (Clark and Curran, 2004) demonstrates the influence of lexical ambiguity on parsing complexity for lexicalised grammars. Hockenmaier and Steedman (2007) developed CCGbank, a semi-automated conversion of the Penn Treebank (Marcus et al., 1993) to the CCG formalism. A number of statistical parsers Clark et al., 2002) have been created for CCG parsing using CCGbank.",
"cite_spans": [
{
"start": 462,
"end": 488,
"text": "Bangalore and Joshi (1999)",
"ref_id": "BIBREF1"
},
{
"start": 879,
"end": 903,
"text": "(Clark and Curran, 2004)",
"ref_id": "BIBREF4"
},
{
"start": 1000,
"end": 1031,
"text": "Hockenmaier and Steedman (2007)",
"ref_id": "BIBREF13"
},
{
"start": 1100,
"end": 1121,
"text": "(Marcus et al., 1993)",
"ref_id": "BIBREF18"
},
{
"start": 1176,
"end": 1195,
"text": "Clark et al., 2002)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 175,
"end": 183,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "CCG Parsing",
"sec_num": "2"
},
{
"text": "Clark and Curran (2007) describe the three stages of the high-performance C&C CCG parser. First, the",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The C&C Parser",
"sec_num": "2.1"
},
{
"text": "I saw John with binoculars NP (S \\NP )/NP NP ((S \\NP )\\(S \\NP ))/NP NP > > S \\NP (S \\NP )\\(S \\NP ) < S \\NP < S I saw John with binoculars NP (S \\NP )/NP NP (NP \\NP )/NP NP > NP \\NP < NP > S \\NP < S",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The C&C Parser",
"sec_num": "2.1"
},
{
"text": "Figure 1: Two CCG derivations with PP ambiguity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The C&C Parser",
"sec_num": "2.1"
},
{
"text": "supertagger provides each word with a set of likely categories, reducing the search space considerably. Second, the parser combines the categories, using the CKY chart-parsing algorithm and CCG's combinatory rules, to produce all derivations that can be constructed with the given categories. Finally, the decoder finds the best derivation from amongst the spanning analyses in the chart. The C&C parser uses a maximum-entropy model to score each derivation, using a wide range of features defined over local sub-trees in the derivation, including the head words and their POS tags, the local categories, and word-word dependencies. We use the default normal-form mode with the derivations decoder (Clark and Curran, 2007) and a maximum of 1,000,000 categories in the chart. Clark and Curran (2004) describe the role of supertagging in the C&C parser and its impact on parser speed. The supertagger initially assigns as few supertags as possible per word. If the parser is unable to provide a spanning analysis, the parser requests more supertags for each word. By restricting the number of supertags considered, this provides substantial pruning at the lexical level. Recent work by Kummerfeld et al. (2010) has shown that by training the supertagger on parser output, the parser's speed can be substantially increased whilst achieving the same accuracy as the baseline system. This exploits the idea that the only supertags the parser needs are those used by the highest-scoring derivation, reducing the search space even more than traditional supertagging.",
"cite_spans": [
{
"start": 698,
"end": 722,
"text": "(Clark and Curran, 2007)",
"ref_id": "BIBREF5"
},
{
"start": 775,
"end": 798,
"text": "Clark and Curran (2004)",
"ref_id": "BIBREF4"
},
{
"start": 1184,
"end": 1208,
"text": "Kummerfeld et al. (2010)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The C&C Parser",
"sec_num": "2.1"
},
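The supertagger/parser interaction described above amounts to a back-off loop. A minimal sketch follows; `supertag` and `find_spanning_analysis` are hypothetical stand-ins for the C&C components, and the beta levels shown are illustrative rather than the parser's actual defaults.

```python
# Sketch of adaptive supertagging: start with few supertags per word
# and back off to more ambiguous levels only when parsing fails.

BETA_LEVELS = [0.075, 0.03, 0.01, 0.005, 0.001]  # illustrative values

def parse_with_backoff(sentence, supertag, find_spanning_analysis):
    for beta in BETA_LEVELS:
        # Keep every category whose tagger probability is within a
        # factor beta of the best category for that word.
        supertags = supertag(sentence, beta)
        chart = find_spanning_analysis(sentence, supertags)
        if chart is not None:
            return chart  # the decoder then picks the best derivation
    return None           # coverage failure at every level
```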
{
"text": "Whilst the approach we present here focuses on CCG parsing, the techniques apply equally to any other binary branching or binarised grammars. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The C&C Parser",
"sec_num": "2.1"
},
{
"text": "In its deterministic form, a shift-reduce parser performs a single left-to-right scan of the input sentence, selecting one or more actions at each step. The current state of the parser is stored in a stack, where the partial derivation is stored and the parsing operations are performed. For the actions, either we shift the current word onto the stack or reduce the top two (or more) items at the top of the stack (Aho and Ullman, 1972) . As the scoring model can be defined over actions, this can allow for highly efficient parsing through greedy search (Sagae and Lavie, 2005) . This has made shift-reduce parsing popular for high-speed dependency parsers (Yamada and Matsumoto, 2003; Nivre and Scholz, 2004) . Unfortunately, a deterministic shift-reduce parser cannot handle ambiguity because it only considers a single derivation. A simple extension is to eliminate determinism and perform a best-first search, backtracking if the parser reaches a dead end. This backtracking leads to duplicate construction of substructures and complete exploration is exponential in the worst case. Beam search has been used to handle this exponential explosion by discarding a large portion of the search space.",
"cite_spans": [
{
"start": 415,
"end": 437,
"text": "(Aho and Ullman, 1972)",
"ref_id": "BIBREF0"
},
{
"start": 556,
"end": 579,
"text": "(Sagae and Lavie, 2005)",
"ref_id": "BIBREF24"
},
{
"start": 659,
"end": 687,
"text": "(Yamada and Matsumoto, 2003;",
"ref_id": "BIBREF28"
},
{
"start": 688,
"end": 711,
"text": "Nivre and Scholz, 2004)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Shift-Reduce Parsing",
"sec_num": "3"
},
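A minimal sketch of the deterministic shift-reduce loop just described; `reduce_rule` and `choose_action` are hypothetical stand-ins for a grammar rule lookup and a scoring model over actions.

```python
# Sketch of greedy (deterministic) shift-reduce parsing: one
# left-to-right scan, choosing a single action at each step.

def shift_reduce(categories, reduce_rule, choose_action):
    stack = []
    queue = list(categories)
    while queue or len(stack) > 1:
        combined = reduce_rule(stack[-2], stack[-1]) if len(stack) >= 2 else None
        action = choose_action(stack, queue, combined)
        if action == "shift" and queue:
            stack.append(queue.pop(0))  # push the next word's category
        elif action == "reduce" and combined is not None:
            stack[-2:] = [combined]     # replace the top two with the result
        else:
            return None                 # dead end; would require backtracking
    return stack[0] if stack else None
```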
{
"text": "In Zhang and Clark (2011) , a direct comparison is made between their shift-reduce CCG parser and the chart-based C&C parser. As CCG allows for a limited number of unary rules, specifically typechanging and type-raising, Zhang and Clark extend the shift-reduce algorithm to consider unary actions. In order to handle the exponential search space, their parser performs beam search, only keeping the top 16 scoring states. Whilst this approximate search may potentially lose the best scoring parse, they achieve competitive accuracies compared to the C&C chart parser.",
"cite_spans": [
{
"start": 3,
"end": 25,
"text": "Zhang and Clark (2011)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Shift-Reduce Parsing",
"sec_num": "3"
},
{
"text": "Shift-reduce parsing allows for fully incremental parsing that does not require the full sentence. Whilst the C&C parser could be modified to perform in this fashion, POS tagging and supertagging accuracy would likely decrease, leading to lower overall parsing accuracy as mistakes propagate up the parsing pipeline.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Advantages of Semi-Incremental Parsing",
"sec_num": "3.1"
},
{
"text": "Semi-incremental parsing can still be advantageous compared to non-incremental parsing. By using features to provide a partial understanding of the sentence structure to components not traditionally integrated with the parser, such as the POS tagger and supertagger, improved accuracy is possible. This is because these components currently only use the orthographic properties of the input text as features, with no understanding of how each word may be potentially used during parsing. In Merity (2011), we have begun exploring tightly integrating parsing and tagging, specifically for POS tags and supertags, by using semi-incremental parsing and shown improved tagging accuracy is possible.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Advantages of Semi-Incremental Parsing",
"sec_num": "3.1"
},
{
"text": "Back-tracking shift-reduce parsers are worst case exponential, preventing a full exploration of the search space. A graph-structured stack (GSS) is a general structure that allows for the efficient handling of non-determinism in shift-reduce parsing (Tomita, 1988) . The GSS allows for polynomial time non-deterministic shift-reduce parsing and has been shown to be highly effective for dependency parsing (Huang and Sagae, 2010) . The use of GSS allows for the incremental construction of the parse tree without being forced to discard large segments of the search space.",
"cite_spans": [
{
"start": 250,
"end": 264,
"text": "(Tomita, 1988)",
"ref_id": "BIBREF26"
},
{
"start": 406,
"end": 429,
"text": "(Huang and Sagae, 2010)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Graph-Structured Stack",
"sec_num": "3.2"
},
{
"text": "Here we will show an example of using a GSS to augment shift-reduce parsing and then show how it can be applied to CCG parsing. In the example grammar below, all three reduction rules are possible on the given stack. By performing backtracking and pursuing all possible reductions, shift-reduce parsing becomes worst-case exponential as previous results must be re-computed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph-Structured Stack",
"sec_num": "3.2"
},
{
"text": "\u2205 A B C D E Reduction Rules F \u2190 D E G \u2190 D E H \u2190 C D E",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph-Structured Stack",
"sec_num": "3.2"
},
{
"text": "The GSS solves this by storing multiple possible derivations in a single structure. Note that all possible rules have been applied and are now stored in the GSS. These reduce operations are also nondestructive, leaving the original structure from the above figure in place. Thus, the GSS can store multiple possible derivations. Note that there is only a single bottom node, \u2205, representing an empty stack.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph-Structured Stack",
"sec_num": "3.2"
},
{
"text": "F G \u2205 A B C D E H",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph-Structured Stack",
"sec_num": "3.2"
},
{
"text": "When a new node is pushed onto the stack, we combine it with the heads of all of the existing stacks stored in the GSS. This means that only a single shift action is necessary for the GSS instead of one for each possible derivation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph-Structured Stack",
"sec_num": "3.2"
},
{
"text": "F G \u2205 A B C D E I H",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph-Structured Stack",
"sec_num": "3.2"
},
{
"text": "Finally, to prevent an exponential explosion due to local ambiguity, we check if a new partial derivation is equivalent to any existing partial derivations. If it is, we keep track of the ways the given node can be generated and then merge them into a single node. This is referred to as local ambiguity packing by Tomita (1988) and allows shift-reduce parsing to be performed in polynomial time. In the example below, the new reduction rules result in two new J nodes. These two nodes are merged to form a single node as they are equivalent.",
"cite_spans": [
{
"start": 315,
"end": 328,
"text": "Tomita (1988)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Graph-Structured Stack",
"sec_num": "3.2"
},
{
"text": "F G J \u2205 A B C D E I H Reduction Rules J \u2190 F I J \u2190 G I",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph-Structured Stack",
"sec_num": "3.2"
},
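A minimal sketch of the GSS operations used in the example above: non-destructive reduces, shifts that connect to every head, and local ambiguity packing. Nodes are keyed by symbol for brevity; a real parser would key on the (category, span) equivalence class instead.

```python
# Sketch of a graph-structured stack with local ambiguity packing.

class Node:
    def __init__(self, symbol):
        self.symbol = symbol
        self.parents = []  # nodes this node can sit on top of
        self.packed = []   # the distinct ways this node was derived

class GSS:
    def __init__(self):
        self.nodes = {}    # symbol -> packed node

    def push(self, symbol, parents):
        node = self.nodes.get(symbol)
        if node is None:
            node = self.nodes[symbol] = Node(symbol)
        for p in parents:  # merge rather than duplicate (packing)
            if p not in node.parents:
                node.parents.append(p)
        return node

    def reduce(self, result, children):
        # Non-destructive: the children stay in place, and the result
        # sits on whatever the leftmost child was stacked on.
        node = self.push(result, children[0].parents)
        node.packed.append(children)
        return node

gss = GSS()
prev = gss.push("∅", [])         # single bottom node: the empty stack
for sym in "ABCDE":              # shift A..E
    prev = gss.push(sym, [prev])
c, d, e = (gss.nodes[s] for s in "CDE")
f = gss.reduce("F", [d, e])      # F <- D E
g = gss.reduce("G", [d, e])      # G <- D E
h = gss.reduce("H", [c, d, e])   # H <- C D E
i = gss.push("I", [e, f, g, h])  # shift I onto every current head
gss.reduce("J", [f, i])          # J <- F I
j = gss.reduce("J", [g, i])      # J <- G I: packed into the same node
assert len(j.packed) == 2        # two derivations, one merged node
```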
{
"text": "When parsing an n word sentence, there are n possible stages in the GSS. We refer to these stages as frontiers, with the k th frontier containing all partial derivations that contain a total span of k. In CKY chart terms, a frontier can be considered as representing all cells on the diagonal from the top left to the bottom right, as seen in Figure 3 . Figure 2 represents an incomplete sentence processed using a GSS-based shift-reduce CCG parser.",
"cite_spans": [],
"ref_spans": [
{
"start": 343,
"end": 351,
"text": "Figure 3",
"ref_id": "FIGREF1"
},
{
"start": 354,
"end": 362,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Graph-Structured Stack",
"sec_num": "3.2"
},
{
"text": "The frontier for the word with contains two heads, ((S \\NP )\\(S \\NP ))/NP and (NP \\NP )/NP . When the CCG category for the word binoculars is shifted on to the GSS, it connects to both of the previous heads. As the category for the word binoculars is an NP , we can then reduce the stack by applying combinatory rules from CCG to both of the heads found in the previous frontier.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph-Structured Stack",
"sec_num": "3.2"
},
{
"text": "In light gray, we show the full derivation for \"John with binoculars\". During the parsing process, we start with an empty GSS. During the shift step, we add all the possible CCG categories provided by the supertagger for the k th word to the GSS and connect each category to all of the head categories on the GSS. Next, we attempt all possible reduce operations on the partial derivations in the current frontier. In CCG shiftreduce parsing, these reduce operations are the CCG combinatory rules. If a reduction is possible, we create a new top partial derivation from the result and place it in the k th frontier.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph-Structured Stack",
"sec_num": "3.2"
},
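A minimal sketch of building the k-th frontier, reusing the GSS sketch above; `combine` is a hypothetical stand-in for the CCG combinatory rules (returning None when no rule applies), and unary type-changing/type-raising actions are omitted for brevity.

```python
# Sketch of one frontier step of GSS-based shift-reduce CCG parsing.

def build_frontier(gss, frontiers, k, supertags_for_word_k, combine):
    heads = frontiers[k - 1] if k > 0 else [gss.push("∅", [])]
    frontier = []
    # Shift: connect every supertag for word k to every current head.
    for category in supertags_for_word_k:
        frontier.append(gss.push(category, heads))
    # Reduce: results also end at word k, so they join the same frontier.
    agenda = list(frontier)
    while agenda:
        right = agenda.pop()
        for left in list(right.parents):
            result = combine(left.symbol, right.symbol)
            if result is not None:
                node = gss.reduce(result, [left, right])
                if node not in frontier:
                    frontier.append(node)
                    agenda.append(node)
    frontiers.append(frontier)
    return frontier
```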
{
"text": "The purpose of frontier pruning is to cut down the search space of the parser by only considering partial derivations that are likely to be in the highestscoring derivation. Like adaptive supertagging, it exploits the idea that the only partial derivations the parser needs to generate are those used by the highest-scoring derivation. The model is trained using the parser's initial unpruned output and aims to distinguish between partial derivations that are necessary and those that are not. By eliminating a large number of those unnecessary partial deriva-tions, parsing ambiguity is significantly decreased.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Frontier Pruning",
"sec_num": "4"
},
{
"text": "This approach is similar to beam search as frontier pruning removes partial derivations once it is likely they will not be used in the highest-scoring derivation. Beam search prunes nodes that are below a multiple (\u03b2) of the highest-scoring node in the frontier. For certain instances, such as n-best re-ranking, beam search would be preferred as derivations without the highest score are still useful in the parsing process. For one best parsing, however, the parser may waste time generating these additional derivations when it could be known in advanced that they will not be used. This could occur during attachment ambiguity where, although the parser is guaranteed to select one attachment, the other attachment may be constructed as it is valid and still competitive when considered by beam search's criteria.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Frontier Pruning",
"sec_num": "4"
},
{
"text": "The only modifications are to the core parsing algorithm, which involves replacing CKY with SR, and to the parsing process via pruning. As the decoder and base models used for selecting the bestscoring derivation remain unchanged, any improvements seen are from an improved parsing process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "The C&C code base has been optimised for CKY parsing and we have made only limited attempts to optimise specifically for the shift-reduce approach. Due to this, the speed of the SR parser is 34% slower than the CKY parser. As the frontier pruning is implemented on the SR parser, all speeds will be relative to the SR baseline. For the frontier pruning SR parser to be competitive with the CKY parser, a speed improvement of 34% or more must be achieved.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "We train a binary averaged perceptron model (Collins, 2002) on parser output generated by the SR C&C parser using the standard parsing model. Once the base parser has successfully processed a sentence, all partial derivations that lead to the highestscoring derivation are marked. For each partial derivation in the GSS, the perceptron model attempts to classify whether it was part of the marked set. If the classification is incorrect, the perceptron model updates the weights appropriately.",
"cite_spans": [
{
"start": 44,
"end": 59,
"text": "(Collins, 2002)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training and Processing",
"sec_num": "5.1"
},
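A minimal sketch of the binary averaged perceptron update described above, using the standard lazy-averaging (timestamp) trick; `extract` is a hypothetical feature extractor, and each training pair records whether the partial derivation fed into the highest-scoring derivation.

```python
# Sketch of a binary averaged perceptron over partial-derivation features.

from collections import defaultdict

def train(examples, extract, epochs=5):
    """examples: (partial_derivation, marked) pairs."""
    w = defaultdict(float)    # current weights
    acc = defaultdict(float)  # accumulated weights for the average
    last = defaultdict(int)   # step at which each weight last changed
    step = 0
    for _ in range(epochs):
        for partial, marked in examples:
            step += 1
            feats = extract(partial)
            predicted = sum(w[f] for f in feats) > 0
            if predicted != marked:                    # mistake-driven update
                delta = 1.0 if marked else -1.0
                for f in feats:
                    acc[f] += w[f] * (step - last[f])  # catch the average up
                    w[f] += delta
                    last[f] = step
    for f in w:                                        # final catch-up
        acc[f] += w[f] * (step - last[f])
    return {f: acc[f] / step for f in w} if step else {}
```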
{
"text": "During processing, pruning occurs as each fron- tier is developed. For each partial derivation, the perceptron model classifies whether the partial derivation is likely to be used in the highest-scoring derivation. If not, the partial derivation is removed from the frontier, eliminating any paths that the partial derivation would have generated. Perfect frontier pruning would allow only a single derivation, specifically the highest-scoring one, to develop.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training and Processing",
"sec_num": "5.1"
},
{
"text": "For frontier pruning to be effective, the model must be able to accurately distinguish between partial derivations that will be used in the highest-scoring derivation and those that shall not. As the features of the C&C parser dictate the highest-scoring derivation, the features used for frontier pruning have been chosen to be similar. For a full description of the features used in the C&C parser, refer to Clark and Curran (2007) . Each partial derivation is given a base set of features derived from the current category. The initial features include a NULL which all categories receive, the CCG category itself and whether the category was assigned by the supertagger. There are also features that encode rule instantiation, including whether the category was created by type raising, a lexical rule, or any CCG combinatory rule. If the category was created by a CCG combinatory rule, the type of rule (such as forward/backward application and so on) is included as a feature.",
"cite_spans": [
{
"start": 410,
"end": 433,
"text": "Clark and Curran (2007)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Features",
"sec_num": "5.2"
},
{
"text": "Features representing the past decisions the parser has made are also included. Note that current rep-resents the current category and left/right is the current category's left or right child respectively. For unary categories, a tuple of [current,current\u2192left] Finally, additional features are added that represent the possible future parsing decisions. This is achieved by adding information about the remaining partial derivations on the stack (the past frontier) and the future incoming partial derivations (the next frontier). These do not exist in the C&C parser and are only possible due to the implementation of the GSS. For each category in the previous frontier, a feature is added of the type [previous, current] . For the next frontier, which is only composed of supertags at this point, the feature is [current, next] . These features allow the pruning classifier to determine whether the current category is likely to be active in any other reductions in future parsing work. As we only want to score the optimal path using the previous and next features, only the highest weighted of these features are selected. The rest of the previous and next features are discarded and do not contribute to the classification.",
"cite_spans": [
{
"start": 239,
"end": 261,
"text": "[current,current\u2192left]",
"ref_id": null
},
{
"start": 704,
"end": 723,
"text": "[previous, current]",
"ref_id": null
},
{
"start": 815,
"end": 830,
"text": "[current, next]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Features",
"sec_num": "5.2"
},
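A minimal sketch of the feature set described in this section; the attribute names on `node` (category, from_supertagger, rule, left, right) are hypothetical, and `w` is the learned weight map used to keep only the highest-weighted previous/next feature.

```python
# Sketch of frontier-pruning feature extraction.

def extract(node, prev_frontier, next_supertags, w):
    feats = ["NULL",                               # active for every category
             "cat=" + node.category,
             "lex=" + str(node.from_supertagger)]  # supertagger-assigned?
    if node.rule is not None:                      # rule instantiation
        feats.append("rule=" + node.rule)          # e.g. fwd-app, type-raising
        if node.left is not None:                  # past decisions: child tuples
            feats.append("cur+left=%s|%s" % (node.category, node.left.category))
        if node.right is not None:
            feats.append("cur+right=%s|%s" % (node.category, node.right.category))
    # Previous/next frontier features: keep only the highest-weighted of
    # each kind, so only the best path contributes to the score.
    prev = ["prev=%s|%s" % (p.category, node.category) for p in prev_frontier]
    nxt = ["next=%s|%s" % (node.category, n) for n in next_supertags]
    if prev:
        feats.append(max(prev, key=lambda f: w.get(f, 0.0)))
    if nxt:
        feats.append(max(nxt, key=lambda f: w.get(f, 0.0)))
    return feats
```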
{
"text": "An example of this can be seen in Table 1 , where the features for the partial derivation of S \\NP are enumerated.",
"cite_spans": [],
"ref_spans": [
{
"start": 34,
"end": 41,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Model Features",
"sec_num": "5.2"
},
{
"text": "These features differ to the traditional features used by shift-reduce parsers due to the addition of the GSS. As traditional shift-reduce parsing only considers a single derivation at a time, it is trivial to include history further back than the current category's previous frontier. As GSS-based shift-reduce parsing encodes an exponential number of states, however, the overhead of unpacking these states into a feature representation is substantial. Our approximation of selecting the highest weighted previous and next frontier features approximates the nondeterministic shift-reduce solution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Features",
"sec_num": "5.2"
},
{
"text": "Compared to the unmarked set, the marked set of partial derivations used to create the highest-scoring derivation is small. If a single CCG category from the marked set is pruned accidentally, the accuracy may be negatively impacted. The loss of a single category may even mean it is impossible to form a spanning analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Improving Marked Set Recall",
"sec_num": "5.3"
},
{
"text": "To prevent this loss of accuracy and coverage, the recall of the marked set needs to be improved. This can be achieved by biasing the binary perceptron algorithm towards a certain class, trading precision for recall. Traditionally, a binary perceptron classifier returns true if w \u2022 x > 0, else false, with w being a vector of weights for each feature and x being a binary vector indicating whether a feature was active.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Improving Marked Set Recall",
"sec_num": "5.3"
},
{
"text": "By providing a manual bias \u03bb, w \u2022 x > \u03bb, we can bias the classifier towards a class. The value of \u03bb modifies the perceptron threshold level, allowing us to improve the recall of the marked set by lowering the precision. The value for \u03bb is obtained manually through the use of a development set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Improving Marked Set Recall",
"sec_num": "5.3"
},
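A minimal sketch of the biased decision rule w · x > λ applied as frontier pruning; since x is a binary feature vector, the dot product is just the sum of the weights of the active features.

```python
# Sketch of the thresholded perceptron decision used for pruning.

def keep(feats, w, lam):
    return sum(w.get(f, 0.0) for f in feats) > lam

def prune_frontier(frontier, feats_of, w, lam=0.0):
    # Lower lam keeps more partial derivations (higher marked-set recall);
    # higher lam prunes more aggressively.
    return [node for node in frontier if keep(feats_of(node), w, lam)]
```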
{
"text": "Identifying the optimal threshold value is important. Too high a recall value would prevent pruning any parts of the parse tree whilst too low a threshold reverts back to traditional unpruned parsing. Due to the overheads involved in the frontier pruning process, ineffective frontier pruning may also be slower than traditional parsing, especially for an optimised parser such as the C&C parser. This value is determined experimentally using a development dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Improving Marked Set Recall",
"sec_num": "5.3"
},
{
"text": "For frontier pruning to produce a speed gain, enough of the search space must be pruned in order to compensate for the additional computational overhead of the pruning step itself. This is a challenge as the C&C parser is written in C++ with a focus on efficiency and already features substantial lexical pruning due to the use of supertagging.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Balancing Pruning Features and Speed",
"sec_num": "5.4"
},
{
"text": "For this reason, there were instances where expressive features needed to be traded for simpler features in the frontier pruning process. Whilst these simpler features may not prune as effectively, they take far less time to compute and result in higher speed gains than complex features with a further reduced search space. The complexity of the frontier pruning features may be dictated by the speed of the core parser itself, with more expressive features being possible if the core parser is slower.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Balancing Pruning Features and Speed",
"sec_num": "5.4"
},
{
"text": "The implementation of these features also had to focus on efficiency. To decrease the stress and improve memory locality of the hash table storing the feature weights, only a subset of features were stored. This feature subset was obtained from the gold standard training data as it contains far less ambiguity than the same training data which uses lexical categories supplied by the supertagger.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Balancing Pruning Features and Speed",
"sec_num": "5.4"
},
{
"text": "Hash tables were used for storing the relevant feature weights. Simple hash based feature representation were used for associating features with weights to reduce the complexity of equivalence checking. The hash values of features that were to be reused were also cached to prevent recalculation, substantially decreasing the computational overhead of feature calculation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Balancing Pruning Features and Speed",
"sec_num": "5.4"
},
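A minimal sketch of the efficiency measures just described; the choice of CRC32 and lru_cache is our illustrative assumption, not the C&C implementation.

```python
# Sketch of hash-based feature weights with cached feature hashes:
# the weight table never compares full feature strings, and hashes of
# reused features are computed only once.

from functools import lru_cache
import zlib

@lru_cache(maxsize=None)            # cache hashes of reused features
def feature_hash(feature):
    return zlib.crc32(feature.encode("utf-8"))

class WeightTable:
    def __init__(self, weights):
        # Store weights keyed by hash only (a subset of features, per above).
        self.table = {feature_hash(f): v for f, v in weights.items()}

    def score(self, features):
        t = self.table
        return sum(t.get(feature_hash(f), 0.0) for f in features)
```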
{
"text": "Our experiments are performed using CCGbank which was split into three subsets for training (Sections 02-21), development (Section 00), and the final evaluation (Section 23). The performance is measured in terms of sentence coverage, accuracy and parsing time. The accuracy is computed as F-score over the extracted labeled and unlabeled CCG dependencies found in CCGbank. All unmarked experiments use gold standard POS tags whilst experiments marked Auto use automatically assigned POS tags using the C&C POS tagger.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "6"
},
{
"text": "To establish bounds on the potential search space reduction, the size of the marked set compared to the total tree size was tracked over all sentences in the training data. This represents the size of the tree after optimal pruning occurs. Two figures are presented, one with gold supertags and the other with supertags supplied by the C&C supertagger. Gold represents the reduction in search space possible when only the correct CCG categories are used to parse the sentence. In contrast, the C&C supertagger may apply multiple CCG categories to improve supertagging accuracy, resulting in higher ambiguity and greater potential search space reductions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training the Frontier Pruning Algorithm",
"sec_num": "7.1"
},
{
"text": "As can be seen in Table 2 , the size of the marked set is 10 times smaller for gold supertags and 15 times smaller for automatically supplied supertags.",
"cite_spans": [],
"ref_spans": [
{
"start": 18,
"end": 25,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Training the Frontier Pruning Algorithm",
"sec_num": "7.1"
},
{
"text": "Marked set recall (gold supertags) 84.4% Marked set recall 72.9% Average pruned size (gold supertags) 9.6% Average pruned size 6.7% Table 2 : Recall of the marked set from the frontier pruning algorithm across all trees and the size of the pruned tree compared to the original tree.",
"cite_spans": [],
"ref_spans": [
{
"start": 132,
"end": 139,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Acc.",
"sec_num": null
},
{
"text": "This places an upper-bound on the potential speed improvement the parser may see due to aggressive frontier pruning. The recall of the marked set was low for both gold supertags and automatically assigned supertags. This suggests the need for a modified perceptron threshold level in order to increase the recall of the marked set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acc.",
"sec_num": null
},
{
"text": "Tuning the perceptron threshold level, as described in the previous section, has an important impact on frontier pruning. If the baseline parser cannot form a spanning analysis with the supertags initially supplied by the supertagger, it requests more supertags. Aggressive frontier pruning may counter-intuitively result in a slower parser as the parser spends more time attempting to unsuccessfully parse the sentence with an increasingly large number of supertags. By tuning the perceptron threshold level we can prevent potential slow-downs caused by aggressive pruning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tuning the Perceptron Threshold Level",
"sec_num": "7.2"
},
{
"text": "To optimise the threshold level, experiments were performed on the development portion of CCGbank, Section 00. The results are shown in Table 3 . Decreasing the perceptron thresholds level (\u03bb) is shown to decrease the speed of the parser substantially without increasing the accuracy. For extremely low values of \u03bb, frontier pruning will keep partial derivations previously discarded as the perceptron classifier becomes biased towards recall. For a sufficiently low value, the accuracy would reach the same levels as the CKY and SR C&C parsers, but the speed would be far too slow due to the computational overhead of frontier pruning added to the small reduction in the search space. More work on fine-tuning the feature representation and allowing for more expressive features in a faster manner will be required. For \u03bb = 0, however, frontier pruning increases the parser's speed by 25.7% compared to the baseline GSS-based SR parser on which the frontier pruning operates. There is also a small 9.8% speed increase compared to the CKY baseline parser. The F-score for both labeled and unlabeled dependencies is negatively impacted though. Table 4 reports the impact frontier pruning has on speed compared to the baseline CKY and SR C&C parsers. Frontier pruning has improved the speed of the GSS-based SR C&C parser by 39%, an improvement over the speed increase seen during evaluation. Longer sentences seem to have a higher impact on the speed of the frontier pruning algorithm due to the increased computational complexity of feature generation. This indicates that implementing a form of beam search on top of this may be beneficial, keeping on the top k scoring states in a frontier. Currently all partial derivations that are greater than the perceptron threshold level \u03bb are kept.",
"cite_spans": [],
"ref_spans": [
{
"start": 136,
"end": 143,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 1143,
"end": 1150,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Tuning the Perceptron Threshold Level",
"sec_num": "7.2"
},
{
"text": "As the C&C parser is already highly tuned and thus extremely fast, the optimal balance between feature expressiveness and accurate pruning is difficult to achieve. However, there was still room for improvement. This suggests that on slower parsers than the C&C parser, frontier pruning may have a much more substantial impact on parsing speeds. More work needs to be done on reducing the number of computationally intensive feature lookups and calculations. Even when using the goldstandard subset of the features, the feature look-up process accounts for the majority of the slow-down that the frontier pruning algorithm causes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and future work",
"sec_num": "8"
},
{
"text": "The C&C code has been highly optimised to suit CKY parsing. It should be possible to improve the GSS parser to be directly competitive with the CKY implementation. The frontier pruning provides speed increases for the GSS parser, allowing it to be competitive with the original CKY parser, but with an improved GSS parser, we could expect further improvements over the original CKY parser.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and future work",
"sec_num": "8"
},
{
"text": "Finally, we are still using the separate maximum entropy model and decoder to find the best derivation. If we add more features to the perceptron model, it may be possible to use it for frontier pruning and finding the best derivation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and future work",
"sec_num": "8"
},
{
"text": "We present a shift-reduce CCG parser that can explore all possible analyses in polynomial time through the use of a graph-structured stack (GSS). Whilst this parser is 34% slower than the CKY parser on which it is based, it can parse 60 sentences per second whilst exploring the full search space. We show that by performing frontier pruning on the GSS and reducing this search space, the speed of the GSS parser can be improved by 39% whilst only incurring a small accuracy penalty. This allows for shiftreduce parsing to attain speeds directly competitive with the CKY parser, whilst allowing all the potential advantages of a semi-incremental parser.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "9"
},
{
"text": "We have also shown that whilst pruning is occur-ring at the lexical level due to supertagging, substantial speed-ups are still possible by performing pruning during the parsing process itself. This has also illustrated the difficulty in balancing expressive features and feature calculations overhead that frontier pruning needs to achieve. Our approach uses the output of the original C&C parser as training data, and so we can use any amount of parser output to train the system. This self-training has been shown to be highly effective in adaptive supertagging for increasing parser speed (Kummerfeld et al., 2010) . The final result will be a substantially faster wide-coverage CCG parser that can be used for large-scale NLP applications.",
"cite_spans": [
{
"start": 592,
"end": 617,
"text": "(Kummerfeld et al., 2010)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "9"
}
],
"back_matter": [
{
"text": "This work was supported by the Capital Markets Cooperative Research Centre, an Australian Research Council Discovery grant DP1097291 and a University of Sydney Honours Scholarship. We thank the anonymous reviewers for their insightful feedback.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The Theory of Parsing, Translation, and Compiling. Volume I: Parsing",
"authors": [
{
"first": "V",
"middle": [],
"last": "Alred",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [
"D"
],
"last": "Aho",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ullman",
"suffix": ""
}
],
"year": 1972,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alred V. Aho and Jeffrey D. Ullman. 1972. The The- ory of Parsing, Translation, and Compiling. Volume I: Parsing. Prentice-Hall.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Supertagging: An Approach to Almost Parsing",
"authors": [
{
"first": "Srinivas",
"middle": [],
"last": "Bangalore",
"suffix": ""
},
{
"first": "Aravind",
"middle": [
"K"
],
"last": "Joshi",
"suffix": ""
}
],
"year": 1999,
"venue": "Computational Linguistics",
"volume": "25",
"issue": "2",
"pages": "237--265",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Srinivas Bangalore and Aravind K. Joshi. 1999. Su- pertagging: An Approach to Almost Parsing. Com- putational Linguistics, 25(2):237-265.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Multilingual Dependency Parsing: A Pipeline Approach",
"authors": [
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Quang",
"middle": [],
"last": "Do",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2006,
"venue": "Recent Advances in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ming-Wei Chang, Quang Do, and Dan Roth. 2006. Mul- tilingual Dependency Parsing: A Pipeline Approach. In Nicolas Nicolov, editor, Recent Advances in Natu- ral Language Processing.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Coarseto-Fine n-Best Parsing and MaxEnt Discriminative Reranking",
"authors": [
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL-05)",
"volume": "",
"issue": "",
"pages": "173--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eugene Charniak and Mark Johnson. 2005. Coarse- to-Fine n-Best Parsing and MaxEnt Discriminative Reranking. In Proceedings of the 43rd Annual Meet- ing of the Association for Computational Linguis- tics (ACL-05), pages 173-180, Ann Arbor, Michigan, USA, June.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "The Importance of Supertagging for Wide-Coverage CCG Parsing",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "James",
"middle": [
"R"
],
"last": "Curran",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 20th International Conference on Computational Linguistics (COLING-04)",
"volume": "",
"issue": "",
"pages": "282--288",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Clark and James R. Curran. 2004. The Impor- tance of Supertagging for Wide-Coverage CCG Pars- ing. In Proceedings of the 20th International Con- ference on Computational Linguistics (COLING-04), pages 282-288, Geneva, Switzerland, August.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Wide-Coverage Efficient Statistical Parsing with CCG and Log-Linear Models",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "James",
"middle": [
"R"
],
"last": "Curran",
"suffix": ""
}
],
"year": 2007,
"venue": "Computational Linguistics",
"volume": "33",
"issue": "4",
"pages": "493--552",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Clark and James R. Curran. 2007. Wide- Coverage Efficient Statistical Parsing with CCG and Log-Linear Models. Computational Linguistics, 33(4):493-552.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Building Deep Dependency Structures using a Wide-Coverage CCG Parser",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Hockenmaier",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL-02)",
"volume": "",
"issue": "",
"pages": "327--334",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Clark, Julia Hockenmaier, and Mark Steedman. 2002. Building Deep Dependency Structures using a Wide-Coverage CCG Parser. In Proceedings of the 40th Annual Meeting of the Association for Computa- tional Linguistics (ACL-02), pages 327-334, Philadel- phia, Pennsylvania, USA, July.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Discriminative Training Methods for Hidden Markov Models: Theory and Experiments with Perceptron Algorithms",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing (EMNLP-03)",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins. 2002. Discriminative Training Meth- ods for Hidden Markov Models: Theory and Experi- ments with Perceptron Algorithms. In Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing (EMNLP-03), pages 1-8.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "CCG parsing with one syntactic structure per n-gram",
"authors": [
{
"first": "Tim",
"middle": [],
"last": "Dawborn",
"suffix": ""
},
{
"first": "James",
"middle": [
"R"
],
"last": "Curran",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Australasian Language Technology Association Workshop 2009 (ALTA-09)",
"volume": "",
"issue": "",
"pages": "71--79",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tim Dawborn and James R. Curran. 2009. CCG parsing with one syntactic structure per n-gram. In Proceed- ings of the Australasian Language Technology Associ- ation Workshop 2009 (ALTA-09), pages 71-79, Syd- ney, Australia, December.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Efficient Combinatory Categorial Grammar Parsing",
"authors": [
{
"first": "Bojan",
"middle": [],
"last": "Djordjevic",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bojan Djordjevic. 2006. Efficient Combinatory Cate- gorial Grammar Parsing. In Proceedings of the 2006",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Australasian Language Technology Workshop (ALTW-06)",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "3--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Australasian Language Technology Workshop (ALTW- 06), pages 3-10, Sydney, Australia, December.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Generative Models for Statistical Parsing with Combinatory Categorial Grammar",
"authors": [
{
"first": "Julia",
"middle": [],
"last": "Hockenmaier",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julia Hockenmaier and Mark Steedman. 2002. Gen- erative Models for Statistical Parsing with Combina- tory Categorial Grammar. In Proceedings of the 40th",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Annual Meeting of the Association for Computational Linguistics (ACL-02)",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "335--342",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Meeting of the Association for Computational Linguistics (ACL-02), pages 335-342, Philadelphia, PA.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "CCGbank: A Corpus of CCG Derivations and Dependency Structures Extracted from the Penn Treebank",
"authors": [
{
"first": "Julia",
"middle": [],
"last": "Hockenmaier",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2007,
"venue": "Computational Linguistics",
"volume": "33",
"issue": "3",
"pages": "355--396",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julia Hockenmaier and Mark Steedman. 2007. CCG- bank: A Corpus of CCG Derivations and Dependency Structures Extracted from the Penn Treebank. Com- putational Linguistics, 33(3):355-396.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Dynamic programming for linear-time incremental parsing",
"authors": [
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Kenji",
"middle": [],
"last": "Sagae",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL-10)",
"volume": "",
"issue": "",
"pages": "1077--1086",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liang Huang and Kenji Sagae. 2010. Dynamic program- ming for linear-time incremental parsing. In Proceed- ings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL-10), pages 1077- 1086, Uppsala, Sweden, July.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "An efficient recognition and syntax analysis algorithm for context-free languages",
"authors": [
{
"first": "Tadao",
"middle": [],
"last": "Kasami",
"suffix": ""
}
],
"year": 1965,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tadao Kasami. 1965. An efficient recognition and syntax analysis algorithm for context-free languages. Techni- cal Report AFCRL-65-758, Air Force Cambridge Re- search Laboratory, Bedford, MA.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A* Parsing: Fast Exact Viterbi Parse Selection",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the Human Language Technology Conference and the North American Association for Computational Linguistics (HLT-NAACL-03)",
"volume": "3",
"issue": "",
"pages": "119--126",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Klein and Christopher D. Manning. 2003. A* Pars- ing: Fast Exact Viterbi Parse Selection. In Proceed- ings of the Human Language Technology Conference and the North American Association for Computa- tional Linguistics (HLT-NAACL-03), volume 3, pages 119-126.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Faster Parsing by Supertagger Adaptation",
"authors": [
{
"first": "Jonathan",
"middle": [
"K"
],
"last": "Kummerfeld",
"suffix": ""
},
{
"first": "Jessika",
"middle": [],
"last": "Roesner",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Dawborn",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Haggerty",
"suffix": ""
},
{
"first": "James",
"middle": [
"R"
],
"last": "Curran",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL-10)",
"volume": "",
"issue": "",
"pages": "345--355",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan K. Kummerfeld, Jessika Roesner, Tim Daw- born, James Haggerty, James R. Curran, and Stephen Clark. 2010. Faster Parsing by Supertagger Adapta- tion. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL-10), pages 345-355, Uppsala, Sweden, July.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Building a Large Annotated Corpus of English: The Penn Treebank",
"authors": [
{
"first": "Mitchell",
"middle": [
"P"
],
"last": "Marcus",
"suffix": ""
},
{
"first": "Beatrice",
"middle": [],
"last": "Santorini",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"Ann"
],
"last": "Marcinkiewicz",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "313--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a Large Annotated Corpus of English: The Penn Treebank. Computa- tional Linguistics, 19(2):313-330.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Integrated Tagging and Pruning via Shift-Reduce CCG Parsing",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Merity",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Merity. 2011. Integrated Tagging and Pruning via Shift-Reduce CCG Parsing. Honours Thesis, The University of Sydney, Sydney, Australia.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Deterministic Dependency Parsing of English Text",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Mario",
"middle": [],
"last": "Scholz",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 18nd International Conference on Computational Linguistics (COLING-04)",
"volume": "",
"issue": "",
"pages": "64--70",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre and Mario Scholz. 2004. Determinis- tic Dependency Parsing of English Text. In Proceed- ings of the 18nd International Conference on Com- putational Linguistics (COLING-04), pages 64-70, Geneva, Switzerland, August.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "K-Best A* Parsing",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Pauls",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of the 47th",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Pauls and Dan Klein. 2009. K-Best A* Pars- ing. In Proceedings of the Joint Conference of the 47th",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "958--966",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 958-966, Singapore, August.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Linear Complexity Context-Free Parsing Pipelines via Chart Constraints",
"authors": [
{
"first": "Brian",
"middle": [],
"last": "Roark",
"suffix": ""
},
{
"first": "Kristy",
"middle": [],
"last": "Hollingshead",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of 2009 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics (HLT/NAACL-09)",
"volume": "",
"issue": "",
"pages": "647--655",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brian Roark and Kristy Hollingshead. 2009. Linear Complexity Context-Free Parsing Pipelines via Chart Constraints. In Proceedings of 2009 Human Language Technology Conference of the North American Chap- ter of the Association for Computational Linguistics (HLT/NAACL-09), pages 647-655, Boulder, Colorado, June.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "A Classifier-Based Parser with Linear Run-Time Complexity",
"authors": [
{
"first": "Kenji",
"middle": [],
"last": "Sagae",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Lavie",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Ninth International Workshop on Parsing Technology (IWPT-05)",
"volume": "",
"issue": "",
"pages": "125--132",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenji Sagae and Alon Lavie. 2005. A Classifier-Based Parser with Linear Run-Time Complexity. In Pro- ceedings of the Ninth International Workshop on Pars- ing Technology (IWPT-05), pages 125-132, Vancou- ver, British Columbia, Canada, October.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "The Syntactic Process",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Steedman. 2000. The Syntactic Process. MIT Press, Cambridge, Massachusetts, USA.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Graph-structured Stack and Natural Language Parsing",
"authors": [
{
"first": "Masaru",
"middle": [],
"last": "Tomita",
"suffix": ""
}
],
"year": 1988,
"venue": "Proceedings of the 26th",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Masaru Tomita. 1988. Graph-structured Stack and Nat- ural Language Parsing. In Proceedings of the 26th",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Annual Meeting of the Association for Computational Linguistics",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "249--257",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Meeting of the Association for Computational Linguistics, pages 249-257, Buffalo, New York, USA, June.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Statistical Dependency Analysis with Support Vector Machines",
"authors": [
{
"first": "Hiroyasu",
"middle": [],
"last": "Yamada",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of International Conference on Parsing Technologies (IWPT-03)",
"volume": "",
"issue": "",
"pages": "195--206",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hiroyasu Yamada and Yuji Matsumoto. 2003. Statis- tical Dependency Analysis with Support Vector Ma- chines. Proceedings of International Conference on Parsing Technologies (IWPT-03), pages 195-206.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Recognition and Parsing of Context-Free Languages in Time n 3 . Information and Control",
"authors": [
{
"first": "H",
"middle": [],
"last": "Daniel",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Younger",
"suffix": ""
}
],
"year": 1967,
"venue": "",
"volume": "10",
"issue": "",
"pages": "189--208",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel H. Younger. 1967. Recognition and Parsing of Context-Free Languages in Time n 3 . Information and Control, 10(2):189-208, February.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Shift-Reduce CCG Parsing",
"authors": [
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL-11:HLT)",
"volume": "",
"issue": "",
"pages": "683--692",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yue Zhang and Stephen Clark. 2011. Shift-Reduce CCG Parsing. In Proceedings of the 49th Annual Meet- ing of the Association for Computational Linguistics: Human Language Technologies (ACL-11:HLT), pages 683-692, Portland, Oregon, USA, June.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Chart Pruning for Fast Lexicalised-Grammar Parsing",
"authors": [
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Byung-Gyu",
"middle": [],
"last": "Ahn",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Curt",
"middle": [],
"last": "Van Wyk",
"suffix": ""
},
{
"first": "James",
"middle": [
"R"
],
"last": "Curran",
"suffix": ""
},
{
"first": "Laura",
"middle": [],
"last": "Rimell",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the COLING 2010 Poster Sessions",
"volume": "",
"issue": "",
"pages": "1471--1479",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yue Zhang, Byung-Gyu Ahn, Stephen Clark, Curt Van Wyk, James R. Curran, and Laura Rimell. 2010. Chart Pruning for Fast Lexicalised-Grammar Parsing. In Proceedings of the COLING 2010 Poster Sessions, pages 1471-1479, Beijing, China, August.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "graph-structured stack (GSS) representing an incomplete parse of the sentences found inFigure 1. The nodes and lines in bold were provided by the supertagger, whilst the non-bold nodes and lines have been created during parsing. The light gray lines represent what reduce operation created that lexical category."
},
"FIGREF1": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "An illustration of the relation between the chart in CKY and the graph-structured stack in SR"
},
"TABREF1": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table/>",
"text": "Example features extracted from S \\NP in the third frontier ofFigure 2. For the frontier features, bold represents the highest-scoring feature selected for contribution to the classification decision."
},
"TABREF4": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table><tr><td>The perceptron threshold level is referred to as \u03bb. All</td></tr><tr><td>results are against the development dataset, Section 00 of</td></tr><tr><td>CCGbank, which contains 1,913 sentences.</td></tr></table>",
"text": "Comparison to baseline parsers and analysis of the impact of threshold levels on frontier pruning (FP)."
}
}
}
}