{
"paper_id": "C04-1025",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:20:54.000758Z"
},
"title": "A grammar formalism and parser for linearization-based HPSG",
"authors": [
{
"first": "Michael",
"middle": [
"W"
],
"last": "Daniels",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The Ohio State University",
"location": {
"addrLine": "222 Oxley Hall, 1712 Neil Avenue Columbus",
"postCode": "43210",
"region": "OH"
}
},
"email": "daniels|[email protected]"
},
{
"first": "W",
"middle": [
"Detmar"
],
"last": "Meurers",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The Ohio State University",
"location": {
"addrLine": "222 Oxley Hall, 1712 Neil Avenue Columbus",
"postCode": "43210",
"region": "OH"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Linearization-based HPSG theories are widely used for analyzing languages with relatively free constituent order. This paper introduces the Generalized ID/LP (GIDLP) grammar format, which supports a direct encoding of such theories, and discusses key aspects of a parser that makes use of the dominance, precedence, and linearization domain information explicitly encoded in this grammar format. We show that GIDLP grammars avoid the explosion in the number of rules required under a traditional phrase structure analysis of free constituent order. As a result, GIDLP grammars support more modular and compact grammar encodings and require fewer edges in parsing.",
"pdf_parse": {
"paper_id": "C04-1025",
"_pdf_hash": "",
"abstract": [
{
"text": "Linearization-based HPSG theories are widely used for analyzing languages with relatively free constituent order. This paper introduces the Generalized ID/LP (GIDLP) grammar format, which supports a direct encoding of such theories, and discusses key aspects of a parser that makes use of the dominance, precedence, and linearization domain information explicitly encoded in this grammar format. We show that GIDLP grammars avoid the explosion in the number of rules required under a traditional phrase structure analysis of free constituent order. As a result, GIDLP grammars support more modular and compact grammar encodings and require fewer edges in parsing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Within the framework of Head-Driven Phrase Structure Grammar (HPSG), the so-called linearizationbased approaches have argued that constraints on word order are best captured within domains that extend beyond the local tree. A range of analyses for languages with relatively free constituent order have been developed on this basis (see, for example, Reape, 1993; Kathol, 1995; M\u00fcller, 1999; Donohue and Sag, 1999; Bonami et al., 1999) so that it is attractive to exploit these approaches for processing languages with relatively free constituent order.",
"cite_spans": [
{
"start": 350,
"end": 362,
"text": "Reape, 1993;",
"ref_id": "BIBREF16"
},
{
"start": 363,
"end": 376,
"text": "Kathol, 1995;",
"ref_id": "BIBREF8"
},
{
"start": 377,
"end": 390,
"text": "M\u00fcller, 1999;",
"ref_id": "BIBREF10"
},
{
"start": 391,
"end": 413,
"text": "Donohue and Sag, 1999;",
"ref_id": "BIBREF1"
},
{
"start": 414,
"end": 434,
"text": "Bonami et al., 1999)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper introduces a grammar format that supports a direct encoding of linearization-based HPSG theories. The Generalized ID/LP (GIDLP) format explicitly encodes the dominance, precedence, and linearization domain information and thereby supports the development of efficient parsing algorithms that make use of this information. We make this concrete by discussing key aspects of a parser for GIDLP grammars that integrates the word order domains and constraints into the parsing process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The idea of discontinuous constituency was first introduced into HPSG in a series of papers by Mike Reape (see Reape, 1993 and references therein). 1 The core idea is that word order is determined not at the level of the local tree, but at the newly introduced level of an order domain, which can include elements from several local trees. We interpret this in the following way: Each terminal has a corresponding order domain, and just as constituents combine to form larger constituents, so do their order domains combine to form larger order domains.",
"cite_spans": [
{
"start": 111,
"end": 122,
"text": "Reape, 1993",
"ref_id": "BIBREF16"
},
{
"start": 148,
"end": 149,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Linearization-based HPSG",
"sec_num": "2"
},
{
"text": "Following Reape, a daughter's order domain enters its mother's order domain in one of two ways. The first possibility, domain union, forms the mother's order domain by shuffling together its daughters' domains. The second option, domain compaction, inserts a daughter's order domain into its mother's. Compaction has two effects:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linearization-based HPSG",
"sec_num": "2"
},
{
"text": "Contiguity: The terminal yield of a compacted category contains all and only the terminal yield of the nodes it dominates; there are no holes or additional strings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linearization-based HPSG",
"sec_num": "2"
},
{
"text": "LP Locality: Precedence statements only constrain the order among elements within the same compacted domain. In other words, precedence constraints cannot look into a compacted domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linearization-based HPSG",
"sec_num": "2"
},
{
"text": "Note that these are two distinct functions of domain compaction: defining a domain as covering a contiguous stretch of terminals is in principle independent of defining a domain of elements for LP constraints to apply to. In linearization-based HPSG, domain compaction encodes both aspects.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linearization-based HPSG",
"sec_num": "2"
},
{
"text": "Later work (Kathol and Pollard, 1995; Kathol, 1995; Yatabe, 1996) introduced the notion of partial compaction, in which only a portion of the daughter's order domain is compacted; the remaining elements are domain unioned.",
"cite_spans": [
{
"start": 11,
"end": 37,
"text": "(Kathol and Pollard, 1995;",
"ref_id": "BIBREF7"
},
{
"start": 38,
"end": 51,
"text": "Kathol, 1995;",
"ref_id": "BIBREF8"
},
{
"start": 52,
"end": 65,
"text": "Yatabe, 1996)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Linearization-based HPSG",
"sec_num": "2"
},
{
"text": "Formally, a theory in the HPSG architecture consists of a set of constraints on the data structures introduced in the signature; thus, word order domains and the constraints thereon can be straightforwardly expressed. On the computational side, however, most systems employ parsers to efficiently process HPSG-based grammars organized around a phrase structure backbone. Phrase structure rules encode immediate dominance (ID) and linear precedence (LP) information in local trees, so they cannot directly encode linearization-based HPSG, which posits word order domains that can extend beyond local trees.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Processing linearization-based HPSG",
"sec_num": "3"
},
{
"text": "The ID/LP grammar format (Gazdar et al., 1985) was introduced to separate immediate dominance from linear precedence, and several proposals have been made for direct parsing of ID/LP grammars (see, for example, Shieber, 1994). However, the domain in which word order is determined is still the local tree licensed by an ID rule, which is insufficient for a direct encoding of linearization-based HPSG.",
"cite_spans": [
{
"start": 25,
"end": 46,
"text": "(Gazdar et al., 1985)",
"ref_id": "BIBREF3"
},
{
"start": 192,
"end": 225,
"text": "(see, for example, Shieber, 1994)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Processing linearization-based HPSG",
"sec_num": "3"
},
{
"text": "The LSL grammar format as defined by Suhre (1999) (based on G\u00f6tz and Penn, 1997) allows elements to be ordered in domains that are larger than a local tree; as a result, categories are not required to cover contiguous strings. Linear precedence constraints, however, remain restricted to local trees: elements that are linearized in a word order domain larger than their local tree cannot be constrained. The approach thus provides valuable worst-case complexity results, but it is inadequate for encoding linearization-based HPSG theories, which crucially rely on the ability to express linear precedence constraints on the elements within a word order domain.",
"cite_spans": [
{
"start": 60,
"end": 80,
"text": "G\u00f6tz and Penn, 1997)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Processing linearization-based HPSG",
"sec_num": "3"
},
{
"text": "In sum, no grammar format is currently available that adequately supports the encoding of a processing backbone for linearization-based HPSG grammars. As a result, implementations of linearization-based HPSG grammars have taken one of two options. Some simply do not use a parser, such as the work based on ConTroll (G\u00f6tz and Meurers, 1997); as a consequence, the efficiency and termination properties of parsers cannot be taken for granted in such approaches.",
"cite_spans": [
{
"start": 315,
"end": 339,
"text": "(G\u00f6tz and Meurers, 1997)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Processing linearization-based HPSG",
"sec_num": "3"
},
{
"text": "The other approaches use a minimal parser that can only take advantage of a small subset of the requisite constraints. Such parsers are typically limited to the general concept of resource sensitivity -every element in the input needs to be found exactly once -and the ability to require certain categories to dominate a contiguous segment of the input. Some of these approaches (Johnson, 1985; Reape, 1991) lack word order constraints altogether. Others (van Noord, 1991; Ramsay, 1999) have the grammar writer provide a combinatory predicate (such as concatenate, shuffle, or head-wrap) for each rule specifying how the string coverage of the mother is determined from the string coverages of the daughters. In either case, the task of constructing a word order domain and enforcing word order constraints in that domain is left out of the parsing algorithm; as a result, constraints on word order domains either cannot be stated or are tested in a separate clean-up phase.",
"cite_spans": [
{
"start": 379,
"end": 394,
"text": "(Johnson, 1985;",
"ref_id": "BIBREF6"
},
{
"start": 395,
"end": 407,
"text": "Reape, 1991)",
"ref_id": "BIBREF15"
},
{
"start": 455,
"end": 472,
"text": "(van Noord, 1991;",
"ref_id": "BIBREF20"
},
{
"start": 473,
"end": 486,
"text": "Ramsay, 1999)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Processing linearization-based HPSG",
"sec_num": "3"
},
{
"text": "To develop a grammar format for linearizationbased HPSG, we take the syntax of ID/LP rules and augment it with a means for specifying which daughters form compacted domains. A Generalized ID/LP (GIDLP) grammar consists of four parts: a root declaration, a set of lexical entries, a set of grammar rules, and a set of global order constraints. We begin by describing the first three parts, which are reminiscent of context-free grammars (CFGs), and then address order constraints in section 4.1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Defining GIDLP Grammars 2",
"sec_num": "4"
},
{
"text": "The root declaration has the form root(S , L) and states the start symbol S of the grammar and any linear precedence constraints L constraining the root domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Defining GIDLP Grammars 2",
"sec_num": "4"
},
{
"text": "Lexical entries have the form A \u2192 t and link the pre-terminal A to the terminal t, just as in CFGs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Defining GIDLP Grammars 2",
"sec_num": "4"
},
{
"text": "Grammar rules have the form A \u2192 \u03b1; C. They specify that a non-terminal A immediately dominates a list of non-terminals \u03b1 in a domain where a set of order constraints C holds.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Defining GIDLP Grammars 2",
"sec_num": "4"
},
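To make the rule format concrete, a GIDLP rule A -> alpha; C can be pictured as plain data in which the order constraints travel with the rule rather than being implied by the order of the RHS. This is only an illustrative sketch; the class and field names below are hypothetical, not the paper's implementation:

```python
from dataclasses import dataclass, field

# Hypothetical encoding of a GIDLP rule A -> alpha; C. The order of `rhs`
# carries no precedence meaning; all ordering lives in `constraints`.

@dataclass
class Precedence:
    left: object              # token (int) or category description (str)
    right: object
    immediate: bool = False   # True for immediate precedence

@dataclass
class Compaction:
    tokens: list              # RHS tokens forming the compacted domain
    category: str             # category naming the resulting domain
    domain_lps: list = field(default_factory=list)

@dataclass
class Rule:
    lhs: str
    rhs: list                 # daughters; order is a parsing preference only
    constraints: list = field(default_factory=list)

# Rule (2): A -> NP1, V2, NP3 ; 3 < V
rule2 = Rule("A", ["NP", "V", "NP"], [Precedence(3, "V")])
```

Representing constraints as data rather than encoding them in rule order is what lets the same RHS license many surface orders without multiplying rules.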
{
"text": "Note that in contrast to CFG rules, the order of the elements in \u03b1 does not encode immediate precedence or otherwise contribute to the denotational meaning of the rule. Instead, the order can be used to generalize the head marking used in grammars for head-driven parsing (Kay, 1990; van Noord, 1991) by additionally ordering the non-head daughters. 3 If the set of order constraints is empty, we obtain the simplest type of rule, exemplified in (1).",
"cite_spans": [
{
"start": 272,
"end": 283,
"text": "(Kay, 1990;",
"ref_id": "BIBREF9"
},
{
"start": 284,
"end": 300,
"text": "van Noord, 1991)",
"ref_id": "BIBREF20"
},
{
"start": 350,
"end": 351,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Defining GIDLP Grammars 2",
"sec_num": "4"
},
{
"text": "(1) S \u2192 NP, VP This rule says that an S may immediately dominate an NP and a VP, with no constraints on the relative ordering of NP and VP. One may precede the other, the strings they cover may be interleaved, and material dominated by a node dominating S can equally be interleaved.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Defining GIDLP Grammars 2",
"sec_num": "4"
},
{
"text": "GIDLP grammars include two types of order constraints: linear precedence constraints and compaction statements.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Order Constraints",
"sec_num": "4.1"
},
{
"text": "Linear precedence constraints can be expressed in two contexts: on individual rules (as rule-level constraints) and in compaction statements (as domainlevel constraints). Domain-level constraints can also be specified as global order constraints, which has the effect that they are specified for each single domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linear Precedence Constraints",
"sec_num": "4.1.1"
},
{
"text": "All precedence constraints enforce the following property: given any appropriate pair of elements in the same domain, one must completely precede the other for the resulting parse to be valid. Precedence constraints may optionally require that there be no intervening material between the two elements: this is referred to as immediate precedence. Precedence constraints are notated as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linear Precedence Constraints",
"sec_num": "4.1.1"
},
{
"text": "\u2022 Weak precedence: A < B.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linear Precedence Constraints",
"sec_num": "4.1.1"
},
{
"text": "\u2022 Immediate precedence: A B.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linear Precedence Constraints",
"sec_num": "4.1.1"
},
{
"text": "A pair of elements is considered appropriate when one element in a domain matches the symbol A, another matches B, and neither element dominates the other (it would otherwise be impossible to express an order constraint on a recursive rule).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linear Precedence Constraints",
"sec_num": "4.1.1"
},
{
"text": "The symbols A and B may be descriptions or tokens. A category in a domain matches a description if it is subsumed by it; a token refers to a specific category in a rule, as discussed below. A constraint involving descriptions applies to any pair of elements in any domain in which the described categories occur; it thus can also apply more than once within a given rule or domain. Tokens, on the other hand, can only occur in rule-level constraints and refer to particular RHS members of a rule. In this paper, tokens are represented by numbers referring to the subscripted indices on the RHS categories. [Footnote: In the raising analysis (Pollard and Sag, 1994), see combines in a ternary structure with him and laugh; the constituent that is appropriate in the place occupied by him can only be determined once one has looked at the other complement, laugh, from which it is raised.]",
"cite_spans": [
{
"start": 458,
"end": 481,
"text": "(Pollard and Sag, 1994)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Linear Precedence Constraints",
"sec_num": "4.1.1"
},
{
"text": "In (2) we see an example of a rule-level linear precedence constraint.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linear Precedence Constraints",
"sec_num": "4.1.1"
},
{
"text": "(2) A \u2192 NP 1 , V 2 , NP 3 ; 3 < V",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linear Precedence Constraints",
"sec_num": "4.1.1"
},
{
"text": "This constraint specifies that the token 3 in the rule's RHS (the second NP) must precede any constituents described as V occurring in the same domain (this includes, but is not limited to, the V introduced by the rule).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linear Precedence Constraints",
"sec_num": "4.1.1"
},
{
"text": "As with LP constraints, compaction statements exist as rule-level and as global order constraints; they cannot, however, occur within other compaction statements. A rule-level compaction statement has the form \u27e8\u03b1, A, L\u27e9, where \u03b1 is a list of tokens, A is the category representing the compacted domain, and L is a list of domain-level precedence constraints. Such a statement specifies that the constituents referenced in \u03b1 form a compacted domain with category A, inside of which the order constraints in L hold. As specified in section 2, a compacted domain must be contiguous (contain all and only the terminal yield of the elements in that domain), and it constitutes a local domain for LP statements.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compaction Statements",
"sec_num": "4.1.2"
},
{
"text": "It is because of partial compaction that the second component A in a compaction statement is needed. If only one constituent is compacted, the resulting domain will be of the same category; but when multiple categories are fused in partial compaction, the category of the resulting domain needs to be determined so that LP constraints can refer to it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compaction Statements",
"sec_num": "4.1.2"
},
{
"text": "The rule in (3) illustrates compaction: each of the S categories forms its own domain. In (4) partial compaction is illustrated: the V and the first NP form a domain named VP to the exclusion of the second NP.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compaction Statements",
"sec_num": "4.1.2"
},
{
"text": "(3) S \u2192 S 1 , Conj 2 , S 3 ; 1 2, 2 3, \u27e8[1], S, []\u27e9, \u27e8[3], S, []\u27e9 (4) VP \u2192 V 1 , NP 2 , NP 3 ; \u27e8[1, 2], VP, []\u27e9",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compaction Statements",
"sec_num": "4.1.2"
},
{
"text": "One will often compact only a single category without adding domain-specific LP constraints, so we introduce the abbreviatory notation of writing such a compacted category in square brackets. In this way (3) can be written as (5).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compaction Statements",
"sec_num": "4.1.2"
},
{
"text": "(5) S \u2192 [S 1 ], Conj 2 , [S 3 ]; 1 2, 2 3. A final abbreviatory device is useful when the entire RHS of a rule forms a single domain, which Suhre (1999) refers to as \"left isolation\". This is denoted by using the token 0 in the compaction statement if linear precedence constraints are attached, or by enclosing the LHS category in square brackets, otherwise. (See rules (13d) and (13j) in section 6 for an example of this notation.)",
"cite_spans": [
{
"start": 139,
"end": 151,
"text": "Suhre (1999)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Compaction Statements",
"sec_num": "4.1.2"
},
{
"text": "The formalism also supports global compaction statements. A global compaction statement has the form \u27e8A, L\u27e9, where A is a description specifying a category that always forms a compacted domain, and L is a list of domain-level precedence constraints applying to the compacted domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compaction Statements",
"sec_num": "4.1.2"
},
{
"text": "We start with an example illustrating how a CFG rule is encoded in GIDLP format. A CFG rule encodes the fact that each element of the RHS immediately precedes the next, and that the mother category dominates a contiguous string. The contextfree rule in (6) is therefore equivalent to the GIDLP rule shown in (7).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Examples",
"sec_num": "4.2"
},
{
"text": "(6) S \u2192 Nom V Acc (7) [S] \u2192 V 1 , Nom 2 , Acc 3 ; 2 1, 1 3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Examples",
"sec_num": "4.2"
},
{
"text": "In (8) we see a more interesting example of a GIDLP grammar.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Examples",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "(8) a) root(A, []) b) A \u2192 B 1 , C 2 , [D 3 ]; 2 < 3 c) B \u2192 F 1 , G 2 , E 3 d) C \u2192 E 1 , D 2 , I 3 ; [1,2], H, [] e) D \u2192 J 1 , K 2 f) Lexical entries: E \u2192 e, . . . g) E < F",
"eq_num": "("
}
],
"section": "Examples",
"sec_num": "4.2"
},
{
"text": "(8a) is the root declaration, stating that an input string must parse as an A; the empty list shows that no LP constraints are specifically declared for this domain. (8b) is a grammar rule stating that an A may immediately dominate a B, a C, and a D; it further states that the second constituent must precede the third and that the third is a compacted domain. (8c) gives a rule for B: it dominates an F, a G, and an E, in no particular order. (8d) is the rule for C, illustrating partial compaction: its first two constituents jointly form a compacted domain, which is given the name H. (8e) gives the rule for D and (8f) specifies the lexical entries (here, the preterminals just rewrite to the respective lowercase terminal). Finally, (8g) introduces a global LP constraint requiring an E to precede an F whenever both elements occur in the same domain. Now consider licensing the string efjekgikj with the above grammar. The parse tree, recording which rules are applied, is shown in (9). Given that the domains in which word order is determined can be larger than the local trees, we see crossing branches where discontinuous constituents are licensed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Examples",
"sec_num": "4.2"
},
{
"text": "(9) [Parse tree for the string e f j e k g i k j, with crossing branches: A immediately dominates B, C, and a compacted D]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Examples",
"sec_num": "4.2"
},
{
"text": "To obtain a representation in which the order domains are represented as local trees again, we can draw a tree with the compacted domains forming the nodes, as shown in (10).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Examples",
"sec_num": null
},
{
"text": "(10) [Word order domain tree: the compacted domains A, H, and D form the nodes over the string e f j e k g i k j]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Examples",
"sec_num": null
},
{
"text": "There are three non-lexical compacted domains in the tree in (9): the root A, the compacted D, and the partial compaction of D and E forming the domain H within C. In each domain, the global LP constraint E < F must be obeyed. Note that the string is licensed by this grammar even though the second occurrence of E does not precede the F. This E is inside a compacted domain and therefore is not in the same domain as the F, so that the LP constraint does not apply to those two elements. This illustrates the property of LP locality: domain compaction acts as a 'barrier' to LP application.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "e k g i k j",
"sec_num": null
},
{
"text": "The second aspect of domain compaction, contiguity, is also illustrated by the example, in connection with the difference between total and partial compaction. The compaction of D specified in (8b) requires that the material it dominates be a contiguous segment of the input. In contrast, the partial compaction of the first two RHS categories in rule (8d) requires that the material dominated by D and E, taken together, be a contiguous segment. This allows the second e to occur between the two categories dominated by D.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "e k g i k j",
"sec_num": null
},
{
"text": "Finally, the two tree representations above illustrate the separation of the combinatorial potential of rules (9) from the flatter word order domains (10) that the GIDLP format achieves. It would, of course, be possible to write phrase structure rules that license the word order domain tree in (10) directly, but this would amount to replacing a set of general rules with a much greater number of flatter rules corresponding to the set of all possible ways in which the original rules could be combined without introducing domain compaction. M\u00fcller (2004) discusses the combinatorial explosion of rules that results for an analysis of German if one wants to flatten the trees in this way. If recursive rules such as adjunction are included -which is necessary since adjuncts and complements can be freely intermixed in the German Mittelfeld -such flattening will not even lead to a finite number of rules. We will return to this issue in section 6.",
"cite_spans": [
{
"start": 543,
"end": 556,
"text": "M\u00fcller (2004)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "e k g i k j",
"sec_num": null
},
{
"text": "We have developed a GIDLP parser based on Earley's algorithm for context-free parsing (Earley, 1970) . In Earley's original algorithm, each edge encodes the interval of the input string it covers. With discontinuous constituents, however, that is no longer an option. In the spirit of Johnson (1985) and Reape (1991) , and following Ramsay (1999) , we represent edge coverage with bitvectors, stored as integers. For instance, 00101 represents an edge covering words one and three of a five-word sentence. 4 Our parsing algorithm begins by seeding the chart with passive edges corresponding to each word in the input and then predicting a compacted instance of the start symbol covering the entire input; each final completion of this edge will correspond to a successful parse.",
"cite_spans": [
{
"start": 86,
"end": 100,
"text": "(Earley, 1970)",
"ref_id": "BIBREF2"
},
{
"start": 285,
"end": 299,
"text": "Johnson (1985)",
"ref_id": "BIBREF6"
},
{
"start": 304,
"end": 316,
"text": "Reape (1991)",
"ref_id": "BIBREF15"
},
{
"start": 333,
"end": 346,
"text": "Ramsay (1999)",
"ref_id": "BIBREF14"
},
{
"start": 506,
"end": 507,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Parsing Algorithm for GIDLP",
"sec_num": "5"
},
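The integer-bitvector encoding of edge coverage can be sketched in a few lines. The helper names are hypothetical, and the contiguity test (useful for enforcing compacted domains) is a standard bit-twiddling idiom rather than anything specified in the paper:

```python
# Edge coverage as an integer bitvector; bit i (counting from the right)
# stands for word i+1, matching the paper's convention that the first
# word is the rightmost bit.

def coverage(positions, n):
    """Bitvector covering the given 1-based word positions of an n-word input."""
    v = 0
    for p in positions:
        v |= 1 << (p - 1)
    return v

def disjoint(a, b):
    # Daughters partition the mother's yield, so coverages must not overlap.
    return a & b == 0

def combine(a, b):
    # Coverage of the mother is the union of the daughters' coverages.
    return a | b

def contiguous(v):
    # A compacted domain must cover a contiguous segment: adding the lowest
    # set bit clears the bottom run of 1-bits; nothing may remain.
    return v != 0 and (v & (v + (v & -v))) == 0

# "00101" represents an edge covering words one and three of five.
edge = coverage([1, 3], 5)
assert format(edge, "05b") == "00101"
assert not contiguous(edge)
assert contiguous(coverage([2, 3, 4], 5))
```

Storing coverages as machine integers keeps the overlap and union checks that Earley completion needs down to single bitwise operations.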
{
"text": "As with Earley's algorithm, the bulk of the work performed by the algorithm is borne by two steps, prediction and completion. Unlike the context-free case, however, it is not possible to anchor these steps to string positions, proceeding from left to right. The strategy for prediction used by Suhre (1999) for his LSL parser is to predict every rule at every position. While this strategy ensures that no possibility is overlooked, it fails to integrate and use the information provided by the word order constraints attached to the rules -in other words, the parser receives no top-down guidance. Some of the edges generated by prediction therefore fall prey to the word order constraints later, in a generate-and-test fashion. This need not be the case. Once one daughter of an active edge has been found, the other daughters should only be predicted to occur in string positions that are compatible with the word order constraints of the active edge. For example, consider the edge in (11).",
"cite_spans": [
{
"start": 294,
"end": 306,
"text": "Suhre (1999)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Parsing Algorithm for GIDLP",
"sec_num": "5"
},
{
"text": "(11) A \u2192 B 1 \u2022 C 2 ; 1 < 2 [Footnote 4: Note that the first word is the rightmost bit.]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Parsing Algorithm for GIDLP",
"sec_num": "5"
},
{
"text": "This notation represents the point in the parse during which the application of this rule has been predicted, and a B has already been located. Assuming that B has been found to cover the third position of a five-word string, two facts are known. From the LP constraint, C cannot precede B, and from the general principle that the RHS of a rule forms a partition of its LHS, C cannot overlap B. Thus C cannot cover positions one, two, or three.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Parsing Algorithm for GIDLP",
"sec_num": "5"
},
{
"text": "We can now discuss the integration of GIDLP word order constraints into the parsing process. A central insight of our algorithm is that the same data structure used to describe the coverage of an edge can also encode restrictions on the parser's search space. This is done by adding two bitvectors to each edge, in addition to the coverage vector: a negative mask (n-mask) and a positive mask (p-mask). Efficient bitvector operations can then be used to compute, manipulate, and test the encoded constraints.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compiling LP Constraints into Bitmasks",
"sec_num": "5.1"
},
{
"text": "The n-mask constrains the set of possible coverage vectors that could complete the edge. The 1-positions in a masking vector represent the positions that are masked out: the positions that cannot be filled when completing this edge. The 0-positions in the negative mask represent positions that may potentially be part of the edge's coverage. For the example above, the coverage vector for the edge is 00100 since only the third word B has been found so far. Assuming no restrictions from a higher rule in the same domain, the n-mask for C is 00111, encoding the fact that the coverage contributed by C must be either 01000, 10000, or 11000 (that is, C must occupy position four, position five, or both of these positions). The negative mask in essence encodes information on where the active category cannot be found.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Negative Masks",
"sec_num": null
},
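A minimal sketch of the n-mask test for the example above: a candidate coverage for C is viable only if it stays off the masked-out positions. The function name is hypothetical:

```python
# Negative-mask check: 1-bits in the n-mask are positions the completing
# daughter may not occupy (first word = rightmost bit).

def passes_nmask(candidate_cov, nmask):
    return candidate_cov & nmask == 0

# B found at position three (00100); C may neither precede nor overlap B,
# so the n-mask handed to the prediction of C is 00111.
nmask = 0b00111
assert passes_nmask(0b01000, nmask)       # C at position four
assert passes_nmask(0b11000, nmask)       # C at positions four and five
assert not passes_nmask(0b00010, nmask)   # C at position two: pruned
```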
{
"text": "The p-mask encodes information about the positions the active category must occupy. This knowledge arises from immediate precedence constraints. For example, consider the edge in (12).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Positive Masks",
"sec_num": null
},
{
"text": "(12) D \u2192 E 1 \u2022 F 2 ; 1 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Positive Masks",
"sec_num": null
},
{
"text": "If E occupies position one, then F must at least occupy position two; the second position in the positive mask would therefore be occupied.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Positive Masks",
"sec_num": null
},
{
"text": "Thus in the prediction step, the parser considers each rule in the grammar that provides the symbol being predicted, and for each rule, it generates bitmasks for the new edge, taking both rule-level and domain-level order constraints into account. The resulting masks are checked to ensure that they leave enough open positions for the minimum number of categories required by the rule. 5 Then, as part of each completion step, the parser must update the LP constraints of the active edge with the new information provided by the passive edge. Since edges are initially constructed from grammar rules, all order constraints start out expressed in terms of either descriptions or tokens. As the parse proceeds, these constraints are updated with the actual locations where matching constituents have been found. For example, a constraint like 1 < 2 (where 1 and 2 are tokens) can be updated with the information that the constituent corresponding to token 1 has been found as the first word, i.e. as position 00001.",
"cite_spans": [
{
"start": 398,
"end": 399,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Positive Masks",
"sec_num": null
},
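The p-mask bookkeeping for the example in (12) can likewise be sketched with bitvector arithmetic. This is our illustration, not the paper's code, and it assumes the found constituent's coverage is contiguous; under that assumption, an immediate-precedence constraint forces the next constituent to occupy the next-higher bit position:

```python
# Sketch of positive-mask updates: once E in D -> E . F (E immediately
# preceding F) is found, F must occupy the position right after E's
# last position; completion then requires covering every p-mask bit.

def p_mask_after(found_cov: int) -> int:
    # Next-higher bit after the found (contiguous) coverage;
    # position one is the rightmost bit.
    return 1 << found_cov.bit_length()

def satisfies_p_mask(final_cov: int, p_mask: int) -> bool:
    # Every position required by the p-mask must be covered.
    return final_cov & p_mask == p_mask

p_mask = p_mask_after(0b00001)            # E at position one -> F must fill 00010
print(bin(p_mask))                        # 0b10
print(satisfies_p_mask(0b00010, p_mask))  # True
print(satisfies_p_mask(0b00100, p_mask))  # False: skips position two
```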
{
"text": "In summary, compiling LP constraints into bitmasks in this way allows the LP constraints to be integrated directly into the parser at a fundamental level. Instead of weeding out inappropriate parses in a clean-up phase, LP constraints in this parser can immediately block an edge from being added to the chart.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Positive Masks",
"sec_num": null
},
{
"text": "As discussed at the end of section 4.2, it is possible to take a GIDLP grammar and compile out the discontinuity. All non-domain-introducing rules must be folded into the domain-introducing rules, and then each permitted permutation of a RHS must become a context-free rule of its own, generally at the cost of a factorial increase in the number of rules.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "6"
},
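The cost of this compilation step can be made concrete: a fully free-order RHS with n categories expands into one context-free rule per permitted permutation, i.e. n! rules in the worst case. A quick illustrative sketch:

```python
import math

# One CFG rule per permitted permutation of an n-category free-order RHS,
# so the flattened grammar grows factorially with RHS length.
for n in range(2, 7):
    print(n, "categories ->", math.factorial(n), "rules")
```

Even a five-category RHS already yields 120 flattened rules, which is why the GIDLP encoding stays so much more compact.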
{
"text": "This construction provides the basis for a preliminary assessment of the GIDLP formalism and its parser. The grammar in (13) recognizes a very small fragment of German, focusing on the free word order of arguments and adjuncts in the so-called Mittelfeld, which occurs to the right of either the finite verb in yes-no questions or the complementizer in complementized sentences. 6 (13) a) root(s",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "6"
},
{
"text": ", []) b) s \u2192 s(cmp) 1 c) s \u2192 s(que) 1 d) s(cmp) \u2192 cmp 1 , clause 2 ; [0], s(cmp), cmp < , < v( ) e) s(que) \u2192 clause 1 ; [0], s(que), v( ) < f) clause \u2192 np(n) 1 , vp 2 g) vp \u2192 v(ditr) 1 , np(a) 2 , np(d) 3 h) vp \u2192 adv 1 , vp 2 i) vp \u2192 v(cmp) 1 , s(cmp) 2 j) [np(Case)] \u2192 det(Case) 1 , n(Case) 2 ; 1 2 k) v(ditr) \u2192 gab q) v(cmp) \u2192 denkt l) comp \u2192 dass r) det(nom) \u2192 der m) det(dat) \u2192 der s) det(acc) \u2192 das n) n(nom) \u2192 Mann t) n(dat) \u2192 Frau o) n(acc) \u2192 Buch u) adv \u2192 gestern p) adv \u2192 dort",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "6"
},
{
"text": "The basic idea of this grammar is that domain compaction only occurs at the top of the head path, after all complements and adjuncts have been found. When the grammar is converted into a CFG, the effect of the larger domain can only be mimicked by eliminating the clause and vp constituents altogether.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "6"
},
{
"text": "As a result, while this GIDLP grammar has 10 syntactic rules, the corresponding flattened CFG (allowing for a maximum of two adverbs) has 201 rules. In an experiment, the four sample sentences in (14) 7 were parsed with both our prototype GIDLP parser (using the GIDLP grammar) and a vanilla Earley CFG parser (using the CFG); the results are shown in (15). Averaging over the four sentences, the GIDLP grammar requires 89% fewer active edges. It also generates additional passive edges, corresponding to the extra non-terminals vp and clause. It is important to keep in mind that the GIDLP grammar is more general than the CFG: in order to obtain a finite number of CFG rules, we had to limit the number of adverbs. With a grammar capable of handling longer sentences with more adverbs, the number of CFG rules (and, as a consequence, of active edges) increases factorially.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "6"
},
{
"text": "Timings have not been included in (15); the GIDLP parser/grammar combination was generally slower than the Earley CFG parser. This is, however, an artifact of the use of atomic categories. For the large feature structures used as categories in HPSG, we expect the larger number of edges encountered while parsing with the CFG to have a greater impact on parsing time, to the point where the GIDLP grammar/parser is faster.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "6"
},
{
"text": "In this paper, we have introduced a grammar format that supports the specification of discontinuous constituents and of word order constraints on domains extending beyond the local tree, and that can be used as a processing backbone for linearization-based HPSG grammars. We have presented a prototype parser for this format, illustrating the use of order-constraint compilation techniques to improve efficiency. Future work will concentrate on additional techniques for optimized parsing as well as on the application of the parser to feature-based grammars. We hope that the GIDLP grammar format will encourage research on such optimizations in general, in support of the efficient processing of relatively free constituent order phenomena using linearization-based HPSG.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary",
"sec_num": "7"
},
{
"text": "Apart from Reape's approach, there have been proposals for a more complete separation of word order and syntactic structure in HPSG (see, for example, Richter and Sailer, 2001, and Penn, 1999). In this paper, we focus on the majority of linearization-based HPSG approaches, which follow Reape.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Due to space limitations, we focus here on introducing the syntax of the grammar formalism and giving an example. We will also base the discussion on simple term categories; nothing hinges on this, and when using the formalism to encode linearization-based HPSG grammars, one will naturally use the feature descriptions known from HPSG as categories. 3 By ordering the right-hand side of a rule so that those categories come first that most restrict the search space, it becomes possible to define a parsing algorithm that makes use of this information. For an example of a construction where ordering the non-head daughters is useful, consider sentences with AcI verbs like I see him laugh. Under the typical HPSG analy-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This optimization only applies to epsilon-free grammars. Further work in this regard can involve determining the minimum and maximum yields of each category; some optimizations involving this information can be found in (Haji-Abdolhosseini and Penn, 2003). 6 The symbol is used to denote the set of all categories.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The grammar and example sentences are intended as a formal illustration, not a linguistic theory; because of this and space limitations, we have not provided glosses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Constituency and word order in French subject inversion",
"authors": [
{
"first": "Olivier",
"middle": [],
"last": "Bonami",
"suffix": ""
},
{
"first": "Dani\u00e8le",
"middle": [],
"last": "Godard",
"suffix": ""
},
{
"first": "Jean-Marie",
"middle": [],
"last": "Marandin",
"suffix": ""
}
],
"year": 1999,
"venue": "Constraints and Resources in Natural Language Syntax and Semantics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Olivier Bonami, Dani\u00e8le Godard, and Jean-Marie Marandin. 1999. Constituency and word order in French subject inversion. In Gosse Bouma et al., editor, Constraints and Resources in Natural Language Syntax and Semantics. CSLI.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Domains in Warlpiri",
"authors": [
{
"first": "Cathryn",
"middle": [],
"last": "Donohue",
"suffix": ""
},
{
"first": "Ivan",
"middle": [
"A"
],
"last": "Sag",
"suffix": ""
}
],
"year": 1999,
"venue": "Abstracts of the Sixth Int. Conference on HPSG",
"volume": "",
"issue": "",
"pages": "101--106",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cathryn Donohue and Ivan A. Sag. 1999. Domains in Warlpiri. In Abstracts of the Sixth Int. Conference on HPSG, pages 101-106, Edinburgh.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "An efficient context-free parsing algorithm",
"authors": [
{
"first": "Jay",
"middle": [],
"last": "Earley",
"suffix": ""
}
],
"year": 1970,
"venue": "Communications of the ACM",
"volume": "13",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jay Earley. 1970. An efficient context-free parsing algorithm. Communications of the ACM, 13(2).",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Generalized Phrase Structure Grammar",
"authors": [
{
"first": "Gerald",
"middle": [],
"last": "Gazdar",
"suffix": ""
},
{
"first": "Ewan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"K"
],
"last": "Pullum",
"suffix": ""
},
{
"first": "Ivan",
"middle": [
"A"
],
"last": "Sag",
"suffix": ""
}
],
"year": 1985,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gerald Gazdar, Ewan Klein, Geoffrey K. Pullum, and Ivan A. Sag. 1985. Generalized Phrase Structure Grammar. Harvard University Press.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Computational Environments for Grammar Development and Linguistic Engineering",
"authors": [
{
"first": "Thilo",
"middle": [],
"last": "G\u00f6tz",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Meurers",
"suffix": ""
},
{
"first": ";",
"middle": [],
"last": "Madrid",
"suffix": ""
},
{
"first": "Thilo",
"middle": [],
"last": "G\u00f6tz",
"suffix": ""
},
{
"first": "Gerald",
"middle": [],
"last": "Penn",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the EACL Workshop",
"volume": "134",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thilo G\u00f6tz and W. Detmar Meurers. 1997. The ConTroll system as large grammar development platform. In Proceedings of the EACL Workshop \"Computational Environments for Grammar Development and Linguistic Engineering\", Madrid. Thilo G\u00f6tz and Gerald Penn. 1997. A proposed linear specification language. Volume 134 in Arbeitspapiere des SFB 340, T\u00fcbingen.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "ALE reference manual",
"authors": [
{
"first": "Mohammad",
"middle": [],
"last": "Haji",
"suffix": ""
},
{
"first": "-Abdolhosseini",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Gerald",
"middle": [],
"last": "Penn",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohammad Haji-Abdolhosseini and Gerald Penn. 2003. ALE reference manual. Univ. Toronto.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Parsing with discontinuous constituents",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 1985,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Johnson. 1985. Parsing with discontinuous constituents. In Proceedings of ACL, Chicago.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Extraposition via complex domain formation",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Kathol",
"suffix": ""
},
{
"first": "Carl",
"middle": [],
"last": "Pollard",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "174--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Kathol and Carl Pollard. 1995. Extraposition via complex domain formation. In Proceedings of ACL, pages 174-180, Boston.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Linearization-Based German Syntax",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Kathol",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Kathol. 1995. Linearization-Based German Syntax. Ph.D. thesis, Ohio State University.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Head-driven parsing",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Kay",
"suffix": ""
}
],
"year": 1990,
"venue": "Current Issues in Parsing Technology",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin Kay. 1990. Head-driven parsing. In Masaru Tomita, editor, Current Issues in Parsing Technology. Kluwer, Dordrecht.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Deutsche Syntax deklarativ",
"authors": [
{
"first": "Stefan",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stefan M\u00fcller. 1999. Deutsche Syntax deklarativ. Niemeyer, T\u00fcbingen.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Continuous or discontinuous constituents? A comparison between syntactic analyses for constituent order and their processing systems",
"authors": [
{
"first": "Stefan",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
}
],
"year": 2004,
"venue": "Research on Language and Computation",
"volume": "2",
"issue": "2",
"pages": "209--257",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stefan M\u00fcller. 2004. Continuous or discontinuous constituents? A comparison between syntactic analyses for constituent order and their processing systems. Research on Language and Computation, 2(2):209-257.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Linearization and WHextraction in HPSG: Evidence from Serbo-Croatian",
"authors": [
{
"first": "Gerald",
"middle": [],
"last": "Penn",
"suffix": ""
}
],
"year": 1999,
"venue": "Slavic in HPSG. CSLI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gerald Penn. 1999. Linearization and WH-extraction in HPSG: Evidence from Serbo-Croatian. In Robert D. Borsley and Adam Przepi\u00f3rkowski, editors, Slavic in HPSG. CSLI.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Head-Driven Phrase Structure Grammar",
"authors": [
{
"first": "Carl",
"middle": [],
"last": "Pollard",
"suffix": ""
},
{
"first": "Ivan",
"middle": [
"A"
],
"last": "Sag",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carl Pollard and Ivan A. Sag. 1994. Head-Driven Phrase Structure Grammar. University of Chicago Press, Chicago.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Direct parsing with discontinuous phrases",
"authors": [
{
"first": "Allan",
"middle": [
"M"
],
"last": "Ramsay",
"suffix": ""
}
],
"year": 1999,
"venue": "Natural Language Engineering",
"volume": "5",
"issue": "3",
"pages": "271--300",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Allan M. Ramsay. 1999. Direct parsing with discontinuous phrases. Natural Language Engineering, 5(3):271-300.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Parsing bounded discontinuous constituents: Generalisations of some common algorithms",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Reape",
"suffix": ""
}
],
"year": 1991,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Reape. 1991. Parsing bounded discontinuous constituents: Generalisations of some common algorithms. In Mike Reape, editor, Word Order in Germanic and Parsing. DYANA R1.1.C.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A Formal Theory of Word Order: A Case Study in West Germanic",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Reape",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Reape. 1993. A Formal Theory of Word Order: A Case Study in West Germanic. Ph.D. thesis, University of Edinburgh.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "On the left periphery of German finite sentences",
"authors": [
{
"first": "Frank",
"middle": [],
"last": "Richter",
"suffix": ""
},
{
"first": "Manfred",
"middle": [],
"last": "Sailer",
"suffix": ""
}
],
"year": 2001,
"venue": "Constraint-Based Approaches to Germanic Syntax. CSLI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frank Richter and Manfred Sailer. 2001. On the left periphery of German finite sentences. In W. Detmar Meurers and Tibor Kiss, editors, Constraint-Based Approaches to Germanic Syntax. CSLI.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Direct parsing of ID/LP grammars",
"authors": [
{
"first": "M",
"middle": [],
"last": "Stuart",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Shieber",
"suffix": ""
}
],
"year": 1984,
"venue": "Linguistics & Philosophy",
"volume": "7",
"issue": "",
"pages": "135--154",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stuart M. Shieber. 1984. Direct parsing of ID/LP grammars. Linguistics & Philosophy, 7:135-154.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Computational aspects of a grammar formalism for languages with freer word order. Diplomarbeit",
"authors": [
{
"first": "Oliver",
"middle": [],
"last": "Suhre",
"suffix": ""
}
],
"year": 1999,
"venue": "Arbeitspapiere des SFB",
"volume": "154",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oliver Suhre. 1999. Computational aspects of a grammar formalism for languages with freer word order. Diplomarbeit. (= Volume 154 in Arbeitspapiere des SFB 340, 2000).",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Head corner parsing for discontinuous constituency",
"authors": [
{
"first": "",
"middle": [],
"last": "Gertjan Van Noord",
"suffix": ""
}
],
"year": 1991,
"venue": "ACL Proceedings",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gertjan van Noord. 1991. Head corner parsing for discontinuous constituency. In ACL Proceedings.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Long-distance scrambling via partial compaction",
"authors": [
{
"first": "",
"middle": [],
"last": "Shuichi Yatabe",
"suffix": ""
}
],
"year": 1996,
"venue": "Formal Approaches to Japanese Linguistics 2. MITWPL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shuichi Yatabe. 1996. Long-distance scrambling via partial compaction. In Masatoshi Koizumi, Masayuki Oishi, and Uli Sauerland, editors, Formal Approaches to Japanese Linguistics 2. MITWPL.",
"links": null
}
},
"ref_entries": {}
}
}