|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:27:19.032729Z" |
|
}, |
|
"title": "LinPP: a Python-friendly algorithm for Linear Pregroup Parsing", |
|
"authors": [ |
|
{ |
|
"first": "Irene", |
|
"middle": [], |
|
"last": "Rizzo", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Oxford", |
|
"location": { |
|
"country": "Oxford" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We define a linear pregroup parser, by applying some key modifications to the minimal parser defined in (Preller, 2007a). These include handling words as separate blocks, and thus respecting their syntactic role in the sentence. We prove correctness of our algorithm with respect to parsing sentences in a subclass of pregroup grammars. The algorithm was specifically designed for a seamless implementation in Python. This facilitates its integration within the DisCopy module for QNLP and vastly increases the applicability of pregroup grammars to parsing real-world text data.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We define a linear pregroup parser, by applying some key modifications to the minimal parser defined in (Preller, 2007a). These include handling words as separate blocks, and thus respecting their syntactic role in the sentence. We prove correctness of our algorithm with respect to parsing sentences in a subclass of pregroup grammars. The algorithm was specifically designed for a seamless implementation in Python. This facilitates its integration within the DisCopy module for QNLP and vastly increases the applicability of pregroup grammars to parsing real-world text data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Pregroup grammars (PG), firstly introduced by J.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Lambek in (Lambek, 1997) , are becoming popular tools for modelling syntactic structures of natural language. In compositional models of meaning, such as DisCoCat (Coecke et al., 2010) and DisCo-Circ (Coecke, 2019) , grammatical composition is used to build sentence meanings from words meanings. Pregroup types mediate this composition by indicating how words connect to each other, according to their grammatical role in the sentence. In DisCoCat compositional sentence embeddings are represented diagrammatically; these are used as a language model for QNLP, by translating diagrams into quantum circuits via the Z-X formalism (Zeng and Coecke, 2016; Coecke et al., 2020a,b; Meichanetzidis et al., 2020b,a) . DisCopy, a Python implementation of most elements of DisCoCat, is due to Giovanni Defelice, Alexis Toumi and Bob Coecke (Defelice et al., 2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 10, |
|
"end": 24, |
|
"text": "(Lambek, 1997)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 163, |
|
"end": 184, |
|
"text": "(Coecke et al., 2010)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 200, |
|
"end": 214, |
|
"text": "(Coecke, 2019)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 630, |
|
"end": 653, |
|
"text": "(Zeng and Coecke, 2016;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 654, |
|
"end": 677, |
|
"text": "Coecke et al., 2020a,b;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 678, |
|
"end": 709, |
|
"text": "Meichanetzidis et al., 2020b,a)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 832, |
|
"end": 855, |
|
"text": "(Defelice et al., 2020)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "An essential ingredient for a full implementation of the DisCoCat model, as well as any syntactic model based on pregroups, is a correct and efficient pregroup parser. Pregroup grammars are weakly equivalent to context-free grammars (Buszkowski, 2009) . Thus, general pregroup parsers based on this equivalence are poly-time, see e.g. (Earley, 1970) . Examples of cubic pregroup parsers exist by Preller (Degeilh and Preller, 2005) and Moroz (Moroz, 2009b) (Moroz, 2009a) . The latter have been implemented in Python and Java. A faster Minimal Parsing algorithm, with linear computational time, was theorised by Anne Preller in (Preller, 2007a) . This parser is correct for the subclass of pregroup grammars characterised by guarded dictionaries. The notion of guarded is defined by Preller to identify dictionaries, whose criticalities satisfy certain properties (Preller, 2007a) . In this paper we define LinPP, a new linear pregroup parser, obtained generalising Preller's definition of guards and applying some key modifications to the Minimal Parsing algorithm. LinPP was specifically designed with the aim of a Python implementation. Such implementation is currently being integrated in the DisCopy package (github:oxford-quantum-group/discopy).", |
|
"cite_spans": [ |
|
{ |
|
"start": 233, |
|
"end": 251, |
|
"text": "(Buszkowski, 2009)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 335, |
|
"end": 349, |
|
"text": "(Earley, 1970)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 404, |
|
"end": 431, |
|
"text": "(Degeilh and Preller, 2005)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 436, |
|
"end": 456, |
|
"text": "Moroz (Moroz, 2009b)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 457, |
|
"end": 471, |
|
"text": "(Moroz, 2009a)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 628, |
|
"end": 644, |
|
"text": "(Preller, 2007a)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 864, |
|
"end": 880, |
|
"text": "(Preller, 2007a)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The need for a linear pregroup parser originated from the goal of constructing a grammar inference machine learning model for pregroup types, i.e. a Pregroup Tagger. training and evaluation of such model is likely to involve parsing of several thousand sentences. Thus, LinPP will positively affect the overall efficiency and performance of the Tagger. The Tagger will enable us to process real world data and test the DisCoCat pregroup model against the state-of-the-art with respect to extensive tasks involving real-world language data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We recall the concepts of monoid, preordered monoid and pregroup.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pregroup Grammars", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Definition 2.1. A monoid P, \u2022, 1 is a set P together with binary operation \u2022 and an element 1, such that", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pregroup Grammars", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "(x \u2022 y) \u2022 z = x \u2022 (y \u2022 z)", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Pregroup Grammars", |
|
"sec_num": "2" |
|
}, |
|
{

"text": "EQUATION",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [

{

"start": 0,

"end": 8,

"text": "EQUATION",

"ref_id": "EQREF",

"raw_str": "x \u2022 1 = x = 1 \u2022 x",

"eq_num": "(2)"

}

],

"section": "Pregroup Grammars",

"sec_num": "2"

},
|
{ |
|
"text": "for any x, y \u2208 P . We refer to \u2022 as monoidal product, and we often omit it, by simply writing xy in place of of x \u2022 y.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pregroup Grammars", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Definition 2.2. A preordered monoid is a monoid together with a reflexive transitive relation P \u2192 P such that:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pregroup Grammars", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "x \u2192 y =\u21d2 uxv \u2192 uyv (3)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pregroup Grammars", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Definition 2.3. A pregroup is a preordered monoid P, \u2022, 1 , in which every object x has a left and a right adjoint, respectively written as x l and x r , such that:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pregroup Grammars", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "contraction rules x l x \u2192 1; xx r \u2192 1 expansion rules 1 \u2192 x r x; 1 \u2192 xx l", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pregroup Grammars", |
|
"sec_num": "2" |
|
}, |
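As an illustrative sketch (not part of the paper), simple types can be encoded as pairs (base, z), where z records the adjoint order; both contraction rules then collapse to a single comparison. The helper names `left`, `right`, and `contracts` are our own.

```python
# Hypothetical encoding (illustrative, not the paper's): a simple type is a
# pair (base, z), where z < 0 counts iterated left adjoints, z > 0 iterated
# right adjoints, and z = 0 is the basic type itself.
def left(t):
    """Left adjoint t^l: decrement the adjoint order."""
    return (t[0], t[1] - 1)

def right(t):
    """Right adjoint t^r: increment the adjoint order."""
    return (t[0], t[1] + 1)

def contracts(t, u):
    """Both contraction rules x^l x -> 1 and x x^r -> 1 become one check:
    (b, z)(b, z + 1) -> 1."""
    return t[0] == u[0] and t[1] + 1 == u[1]

n = ("n", 0)
assert contracts(left(n), n)     # n^l n -> 1
assert contracts(n, right(n))    # n n^r -> 1
assert left(right(n)) == n       # adjoints cancel, so the notation mixes freely
```

The integer exponent also makes uniqueness of adjoints immediate: taking a left then a right adjoint returns the same pair.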
|
{ |
|
"text": "Adjoints are unique for each object.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pregroup Grammars", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In the context of natural language, pregroups are used to model grammatical types. This approach was pioneered by J. Lambek, who introduced the notion of Pregroup Grammars (Lambek, 1997) . These grammars are constructed over a set of basic types, which represent basic grammatical roles. For example, {n, s} is a set consisting of the noun type and the sentence type.", |
|
"cite_spans": [ |
|
{ |
|
"start": 172, |
|
"end": 186, |
|
"text": "(Lambek, 1997)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pregroup Grammars", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Definition 2.4. Let B be a set of basic types. The free pregroup over B, written P B , is the free pregroup generated by the set B \u222a \u03a3, where \u03a3 is the set of iterated adjonts of the types in B.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pregroup Grammars", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In order to easily write iterated adjoints, we define the following notation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pregroup Grammars", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Definition 2.5. Given a basic type t, we write t l n to indicate its n-fold left adjoint, and t r n for its n-fold right adjoint.E.g. we write t l 2 to indicate (t l )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pregroup Grammars", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "l .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pregroup Grammars", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Thanks to the uniqueness of pregroup adjoints we can mix the right and left notation. E.g. (t r 2 ) l is simply t r . We write t l 0 = t = t r 0 . We now define pregroup grammars, following the notation of (Shiebler et al., 2020) . Definition 2.6. A pregroup grammar is a tuple P G = {V, B, D, P B , s} where:", |
|
"cite_spans": [ |
|
{ |
|
"start": 206, |
|
"end": 229, |
|
"text": "(Shiebler et al., 2020)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pregroup Grammars", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "1. V is the vocabulary, i.e. a set of words.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pregroup Grammars", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "2. B is a set of basic grammatical types.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pregroup Grammars", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "3. P B is the free pregroup over B.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pregroup Grammars", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "4. D \u2282 V \u00d7 P B is the dictionary, i.e it contains correspondences between words and their assigned grammatical types.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pregroup Grammars", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "5. s \u2208 P B is a basic type indicating the sentence type.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pregroup Grammars", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Example 2.7. Consider the grammar given by V = {Alice, loves, Bob}, B = {n, s} and a dictionary with the following type assignments:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pregroup Grammars", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "D = {(Alice, n), (Bob, n), (loves, n r sn l )}", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pregroup Grammars", |
|
"sec_num": "2" |
|
}, |
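A minimal sketch of Example 2.7 in Python (the paper's implementation language); the dictionary maps each word to its list of simple types, here encoded as (base, z) pairs with z the adjoint order. The names and encoding are illustrative assumptions, not the paper's code.

```python
# Simple types as (base, z): z = -1 for n^l, z = +1 for n^r (illustrative).
n, s = ("n", 0), ("s", 0)
nl, nr = ("n", -1), ("n", 1)

# The dictionary D of Example 2.7
D = {
    "Alice": [n],
    "Bob":   [n],
    "loves": [nr, s, nl],   # transitive verb type n^r s n^l
}

def type_string(sentence):
    """Concatenate the dictionary types of the words into one string of simple types."""
    return [t for word in sentence.split() for t in D[word]]

assert type_string("Alice loves Bob") == [n, nr, s, nl, n]   # i.e. n n^r s n^l n
```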
|
{ |
|
"text": "Note that the grammatical types for Alice and Bob are so-called simple types, i.e basic types or their adjoints. On the other hand, the type of the transitive verb is a monoidal product. The type of this verb encodes the recipe for creating a sentence: it says give me a noun type on the left and a noun type on the right and I will output a sentence type.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pregroup Grammars", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In other words, by applying iterated contraction rules on nn r sn l n we obtain the type s. Diagrammatically we represent the string as Then, after applying the contraction rules, we obtain a sentence diagram:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pregroup Grammars", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "This diagram is used to embed the sentence meaning. This framework -introduced by Coecke et al. in 2010 -is referred to as DisCoCat and provides a mean to equip distributional semantics with compositionality. The composition is mediated by the sentence's pregroup contractions, as seen in the example above. (Coecke et al., 2010) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 308, |
|
"end": 329, |
|
"text": "(Coecke et al., 2010)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pregroup Grammars", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The iterated application of contraction rules yields a reduction.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pregroup Grammars", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Definition 2.8. Let S := t 1 ....t n be a string of simple types, and let T S := t j 1 ....t jp with j i \u2208 [1, n] for all i. We say that R : S \u2192 T S is a reduction if R is obtained by iterating contraction rules only. We say that T S is a reduced form of S. If T S cannot be contracted any further, we say that it is irreducible and we often write R : S =\u21d2 T S . Note that neither reductions nor irreducible forms are unique, as often we are presented with different options on where to apply contraction rules.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pregroup Grammars", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In the context of pregroup grammars, we are interested in reducing strings to the sentence type s, whenever this is possible. Thus, we give such reduction a special name (Shiebler et al., 2020) :", |
|
"cite_spans": [ |
|
{ |
|
"start": 170, |
|
"end": 193, |
|
"text": "(Shiebler et al., 2020)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pregroup Grammars", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Definition 2.9. a reduction R : S =\u21d2 T S is called a parsing of S, if T s is the simple type s. A string S is a sentence if there exists a parsing.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pregroup Grammars", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Often, we want to keep track of the types as they get parsed:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pregroup Grammars", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Definition 2.10. The set of reductions of R : S \u2192 T S is a set containing index pairs {i, j} such that t i t j is the domain of a contraction in R. These pairs are referred to as underlinks, or links (Preller, 2007a) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 200, |
|
"end": 216, |
|
"text": "(Preller, 2007a)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pregroup Grammars", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We now discuss critical and linear types in a pregroup grammar. We first need to introduce the notion of complexity (Preller, 2007a , Definition 5) [Preller] .", |
|
"cite_spans": [ |
|
{ |
|
"start": 116, |
|
"end": 131, |
|
"text": "(Preller, 2007a", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 148, |
|
"end": 157, |
|
"text": "[Preller]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linear vs critical", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Definition 3.1. A pregoup grammar with dictionary D has complexity k if, for every type t \u2208 D, any left (right) adjoint t l n (t r n ) in D is such that n < k.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linear vs critical", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Complexity 1 indicates a trivial grammar that contains only basic types (no adjoints). Complexity 2 allows for dictionaries containing at most basic types and their 1-fold left and right adjoints, e.g. n l and n r . As proven in (Preller, 2007b) , every pregroup grammar is strongly equivalent to a pregroup grammar with complexity 2. This means that the subclass of complexity 2 pregroup grammars has the same expressive power of the whole class of pregroup grammars.", |
|
"cite_spans": [ |
|
{ |
|
"start": 229, |
|
"end": 245, |
|
"text": "(Preller, 2007b)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linear vs critical", |
|
"sec_num": "3" |
|
}, |
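Under the same illustrative (base, z) encoding used above, Definition 3.1 can be computed directly: the complexity is one more than the largest adjoint order occurring anywhere in the dictionary. The function name and encoding are our own.

```python
def complexity(dictionary):
    """Complexity k of a dictionary (Definition 3.1): every adjoint order n
    appearing in D satisfies n < k, so k is 1 + the largest |z| in D.
    Types are (base, z) pairs; the encoding is an illustrative assumption."""
    return 1 + max(abs(t[1]) for types in dictionary.values() for t in types)

# The toy dictionary of Example 2.7 has complexity 2:
D = {"Alice": [("n", 0)], "Bob": [("n", 0)],
     "loves": [("n", 1), ("s", 0), ("n", -1)]}
assert complexity(D) == 2

# A dictionary with only basic types is trivial (complexity 1):
assert complexity({"Alice": [("n", 0)]}) == 1
```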
|
{ |
|
"text": "We now introduce critical types (Preller, 2007a) . We say that a grammar is linear if all types in the dictionary are linear types. Given a string from a linear grammar, its reduction links are unique (Preller, 2007a, Lemma 7) . In fact, a very simple algorithm can be used to determine whether a linear string is a sentence or not.", |
|
"cite_spans": [ |
|
{ |
|
"start": 32, |
|
"end": 48, |
|
"text": "(Preller, 2007a)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 201, |
|
"end": 226, |
|
"text": "(Preller, 2007a, Lemma 7)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linear vs critical", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The Lazy Parsing algorithm produces parsing for all linear sentences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lazy Parsing", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Definition 3.3. Consider a linear string S. Let St be an initially empty stack, and R an initially empty set of reductions. The Lazy Parsing algorithm reduces the string as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lazy Parsing", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "1. The first type in S is read and added to St.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lazy Parsing", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "2. Any following type t n is read. Letting t i indicate the top of the stack St up until then, if t i t n \u2192 1 then St is popped and the link is added to R. Otherwise t n is added to St and R remains unchanged.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lazy Parsing", |
|
"sec_num": "3.1" |
|
}, |
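The two steps above can be sketched directly in Python. This is our reading of Definition 3.3, with simple types encoded as (base, z) pairs so that t i t n \u2192 1 amounts to matching bases with z n = z i + 1; the encoding and function names are illustrative assumptions.

```python
def contracts(t, u):
    """t u -> 1 for simple types encoded as (base, z) pairs."""
    return t[0] == u[0] and t[1] + 1 == u[1]

def lazy_parse(types):
    """Lazy Parsing (Definition 3.3): one left-to-right pass with a stack.
    Returns the irreducible form and the set of links as index pairs."""
    stack, links = [], set()              # stack holds positions, not types
    for n, t in enumerate(types):
        if stack and contracts(types[stack[-1]], t):
            links.add((stack.pop(), n))   # contract with the top of the stack
        else:
            stack.append(n)               # otherwise push the new type
    return [types[p] for p in stack], links

# "Alice loves Bob" types to n n^r s n^l n, which reduces to s:
types = [("n", 0), ("n", 1), ("s", 0), ("n", -1), ("n", 0)]
reduced, links = lazy_parse(types)
assert reduced == [("s", 0)]
assert links == {(0, 1), (3, 4)}
```

Each type is read exactly once and each stack operation is constant time, which is where the linear running time comes from.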
|
{ |
|
"text": "By (Preller, 2007a, Lemma8, Lemma9 ) Lazy Parsing reduces a linear string to its unique irreducible form, thus a linear string is a sentence if and only if the Lazy Parsing reduces it to s. Unfortunately linear pregroup grammars do not hold a lot of expressive power, and criticalities are immediately encountered when processing slightly more complex sentences than 'subject + verb + object'. Thus, defining parsing algorithms that can parse a larger class of pregroup grammars becomes essential.", |
|
"cite_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 34, |
|
"text": "(Preller, 2007a, Lemma8, Lemma9", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lazy Parsing", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "In order to discuss new parsing algorithms in the next sections, we introduce some useful notions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Guards", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Definition 3.4. Given a reduction R, a subset of nested links is a called a fan if the right endpoints of the links form a segment in the string. A fan is critical if the right endpoints are critical types (Preller, 2007a) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 206, |
|
"end": 222, |
|
"text": "(Preller, 2007a)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Guards", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Below, we define guards, reformulating the notion introduced by Preller in (Preller, 2007a) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 75, |
|
"end": 91, |
|
"text": "(Preller, 2007a)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Guards", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Definition 3.5. Let us consider a string S := t 1 ....t b = Xt p Y , containing a critical type t p . Let S reduce to 1. We say that t b is a guard of t p in S if the following conditions are satisfied:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Guards", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "1. X contains only linear types and there exists a reduction R : X =\u21d2 1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Guards", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "2. There exists a link {j, k} of R such that t k t p \u2192 1 and t j t b \u2192 1 are contractions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Guards", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "3. There exist subreductions R1, R 2 \u2282 R such that R 1 : t k+1 ..t p\u22121 =\u21d2 1 and R 2 :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Guards", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "t 1 ...t j\u22121 =\u21d2 1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Guards", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "4. There exists a reduction R y :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Guards", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Y =\u21d2 t b .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Guards", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "If such guard exists, we say that the critical type is guarded and we say that {j, b} is a guarding link for the critical type.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Guards", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Let us adapt this definition to critical fans.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Guards", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Definition 3.6. Let us consider the segment S := t 1 .....t n = XT c Y . Let us assume there exists a reduction S =\u21d2 1, that contains a critical fan with right end points t p ....t p+q =: T c . We say that the fan is guarded in S if:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Guards", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "1. X is linear and there exists a reduction R :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Guards", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "X =\u21d2 1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Guards", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "2. There exist links", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Guards", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "{j i , k i } \u2208 R, for i \u2208 [p, p + q], with k p > ... > k p+q , j p < ... < j p+q and t k p+q ...t kp T c =\u21d2 1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Guards", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "3. The segments t 1 ...t jp and t kp+1 ..t p\u22121 , as well as the ones in between each t k or t j and the next ones, have reductions to 1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Guards", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "4. There exists a reduction R y :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Guards", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Y =\u21d2 T l c .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Guards", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Critical types are particularly well behaved in dictionaries of complexity 2, as they are exactly the right adjoints t r of basic types t. We recall the following results from (Preller, 2007a, Lemma 17 & 18) . We assume complexity 2 throughout.", |
|
"cite_spans": [ |
|
{ |
|
"start": 176, |
|
"end": 207, |
|
"text": "(Preller, 2007a, Lemma 17 & 18)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Critical types in complexity 2 grammars", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Lemma 3.7. Let R : t 1 ...t m =\u21d2 1. Let t p be the leftmost critical type in the string and let R link {k, p}. Let t i be the top of the stack produced by Lazy Parsing, then i \u2264 k. Moreover, if k > i, there are j, b with i < j < k and b > p, such that Lazy Parsing links {j, k} and R links {j, b}.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Critical types in complexity 2 grammars", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Corollary 3.8. Let t p be the leftmost critical type of a sentence S. With i as above, if t i t p reduce to the empty type, then all reductions to type s will link {i, p}.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Critical types in complexity 2 grammars", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "We prove the following result.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Critical types in complexity 2 grammars", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Lemma 3.9. Let S := s 1 ....s n be a string with m \u2265 2 critical types. Let them all be guarded. Let s p be a critical type, and let s q be the next one. Let s bp and s bq be their guards respectively. Assume the notation of the previous definitions. Then, either j q > p and", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Critical types in complexity 2 grammars", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "b q < b p , or j q > b p .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Critical types in complexity 2 grammars", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Proof. By assumption, s p is guarded, and by definition of guard, the segment s p+1 ....s bp\u22121 reduces to the empty type. For the sake of contradiction, assume j q < p. Then, because crossings are not allowed, we must have j q < k p . Since j q is a left adjoint of a basic type, it can only reduce on its right, and we have k q < k p . However, the segment Y p does contain s q , and does not contain its reduction s kq , thus Y p cannot reduce to type s bp , which is a contradiction. Thus j q > p, and to avoid crossings, it is either j q > p and", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Critical types in complexity 2 grammars", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "b q < b p or j q > b p .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Critical types in complexity 2 grammars", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "The lemmas above also hold for guarded critical fans.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Critical types in complexity 2 grammars", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "In this section we define M iniP P , a minimal parsing algorithm complexity 2 pregroup grammars.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "M inP P : Minimal Parsing Algorithm", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "MinPP pseudo-code. Let sentence : t 1 ....t m be a string of types from a dictionary with complexity 2. We associate each processing step of the algorithm with a stage S n . Let S 0 be the initial stage, and S n := {a, n} with n \u2265 1 be the stage processing the type a in position n. Let R n and St n be respectively the set of reductions and the reduced stack at stage S n . Let us write (St n ) for the function returning the top element of the stack at stage n and pop(St n ) for the function popping the stack. The steps of the algorithms are defined as follows. At stage S 0 , we have R 0 = \u2205 and St 0 = \u2205. At stage S 1 , R 1 = \u2205 and St 1 = t 1 . At stages S n , n > 1, let t i = (St n\u22121 ). We define the following cases.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "M inP P : Minimal Parsing Algorithm", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "\u2022 If t i t n \u2192 1:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "M inP P : Minimal Parsing Algorithm", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "St n =pop(St n\u22121 ) R n =R n\u22121 \u222a {i, n} \u2022 Elif t n is linear: St n =St n\u22121 + t n R n =R n\u22121", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "M inP P : Minimal Parsing Algorithm", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "\u2022 Else (t n is critical):", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "M inP P : Minimal Parsing Algorithm", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "1. while types are critical read sentence forward starting from t n and store read types. Let T r := t n ...t n+v , v \u2265 0, be the segment of stored types.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "M inP P : Minimal Parsing Algorithm", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "2. Create a new empty stack St back . Process sentence backward, starting from T r and not reading further than t i+1 . 3. If St back is never found empty, set St n = St n\u22121 + T r , R n = R n\u22121 and move to stage S n+v+1 i.e. the first type after the critical fan. If instead St back becomes empty, proceed as follows. 4. St back being empty means that T r was reduced with some types T . By construction, T had been initially reduced with some types T l by the forward process. Set St n = St n\u22121 + T l . Write R Tprec for the set of links that originally reduced T l T . Write R T for the set of links for the T T r reduction, as found by the backward process. Set", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "M inP P : Minimal Parsing Algorithm", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "R n = (R n\u22121 \u222a R T ) \u2216 R Tprec .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "M inP P : Minimal Parsing Algorithm", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Move to the next stage.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "M inP P : Minimal Parsing Algorithm", |
|
"sec_num": "4" |
|
}, |
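The stack discipline described above can be sketched in Python. The sketch below covers only the linear cases of the pseudo-code (critical types are pushed like linear ones, so the backward pass is omitted); `reduces` is a user-supplied predicate deciding whether a pair of simple types contracts to 1, and all names are illustrative rather than taken from the paper's implementation.

```python
def minpp_forward(types, reduces):
    """Reduce a list of simple types left to right with a stack.

    Returns (stack, reductions): `stack` holds the indices of unreduced
    types (St_n), and `reductions` is the set of links (i, n) meaning
    t_i and t_n were contracted (R_n). Linear cases only.
    """
    stack = []          # St_n: indices of unreduced types
    reductions = set()  # R_n: links {(i, n)}
    for n, t in enumerate(types):
        if stack and reduces(types[stack[-1]], t):
            i = stack.pop()          # t_i t_n -> 1: pop and record link
            reductions.add((i, n))
        else:
            stack.append(n)          # no contraction: push t_n
    return stack, reductions
```

For instance, with a toy contraction predicate where `a^l a -> 1` and `a a^r -> 1`, the string `n n^r s n^l n` reduces to the single type at index 2 (the sentence type `s`).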
|
{ |
|
"text": "In this section we prove the correctness of M inP P with respect to reducing strings to an irreducible form, given some restrictions on the grammar. First we prove that M inP P is a sound and terminating parsing algorithm for complexity 2 pregroup grammars. Then, we prove that it is also correct with respect to a subclass of complexity 2 pregroup grammars identified by certain restrictions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Formal Verification", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Theorem 4.1. Let str be a string of types from a complexity 2 pregroup grammar. If we feed str to M inP P , then:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Formal Verification", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "1. Termination: M inP P eventually halts.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Formal Verification", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Completeness: If str is a sentence, M inP P reduces str to sentence type s.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "If str is not a sentence, then M inP P will reduce it to an irreducible form different from s.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Soundness:", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "Proof. Let t i always indicate the top of the stack. Termination. Let us consider strings of finitely many types. We prove that at each stage the computation is finite, and that there are a finite number of stages. A stage S n is completed once its corresponding stack St n and set of reductions R n are computed. If t n is linear, updating St n and R n only involves two finite computations: checking whether t i t n \u2192 1 (done via a terminating truth value function), and popping or adding t n to the stack. In the case of t n being critical, if", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Soundness:", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "t i t n \u2192 1,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Soundness:", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "this is handled as in the linear case. Otherwise, the following computations are involved: first, the algorithm reads forward to identify a critical fan. This halts when either reading the last critical type of the fan, or the last type in the string. Then the string is processed backward. This computation involves finitely many steps, as in the forward case, and halts when reading t i or the first type in the string, or when the stack is empty. The next computations involve updating the stack and reduction sets via finite functions. This proves that each step of the process is finite and that M inP P terminates.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Soundness:", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "We prove it by induction on the number of critical fans. Base Case Consider a string with one critical fan with right endpoints T r , and assume it is not a sentence. The case in which the fan reduces with the stack is trivial, so we assume otherwise. We have two cases: # 1: Let T r have a left reduction T . Assuming the notation above, consider segments \u03b8 prec , \u03b8, \u03b8 post . \u03b8 reduces to the empty type. So we must have \u03b8 prec \u03b8 post \u2192 C, with C \u2260 s. Since this string is linear, M inP P will reduce the full string to C. # 2: Assume T r doesn't have a left reduction. Then the backward stack will not become empty, and once the backward parsing reaches t i , M inP P will add T r to the forward stack. At this stage, the remaining string will be CT r D with C possibly empty. D is linear and cannot contain right reductions for T r since the complexity is 2. Thus M inP P will reduce it by Lazy Parsing to its unique irreducible form T r U \u2260 s. Inductive Hypothesis Assume that M inP P reduces any non-sentence to an irreducible form different from s, given that the string has no more than m critical fans.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Soundness.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We consider a string with m + 1 critical fans, and no reduction to the sentence type. # 1: Assume the notation above and let T r have left reductions. Then, we remove the segment \u03b8: M inP P : \u03b8 =\u21d2 1. The remaining string has m critical fans and no reduction to sentence type, so by inductive hypothesis, M inP P won't reduce it to the sentence type. # 2: Assume that T r has no left adjoints in the string. Then, M inP P will add T r to the top of the forward stack. The remaining string to process is CT r D, with C linear, irreducible and possibly empty, and D containing m critical fans. Thus, M inP P will correctly parse D to its irreducible form, by inductive hypothesis or by proof of completeness (depending on whether D has a reduction to s or not). Therefore M inP P will reduce CT r D to an irreducible form, which must be different from s since T r cannot contain s and cannot reduce further.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inductive Step", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In order to prove completeness we need to restrict our grammars further.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inductive Step", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Theorem 4.2. Let str be a string of types from a complexity 2 pregroup grammar. Let us also assume that all critical fans are guarded or that their critical types contract with the top of the stack at the corresponding stages. If we feed str to M inP P , then: (Completeness) If str is a sentence, M inP P reduces str to sentence type s.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inductive Step", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Proof. We prove it by induction on the number of critical fans. Base case Let us consider a sentence with one critical fan, with right endpoints T r := t p ... t p+n . At stage S p , we have two cases: # 1: Let t i t p \u2192 1. Then, by 3.8, all reductions of the string to the sentence type will link i, p. Since links cannot cross, we have k q < i for all q. Thus all critical types are linked to types in the stack. Thus, their links are unique and will be reduced by Lazy Parsing. By assumption, all types other than the critical fan are linear, thus their links are unique. Thus, Lazy Parsing will correctly reduce this sentence, and, by construction, so will M inP P . # 2: Let t i t p \u219b 1, and let R be an arbitrary reduction of the string to sentence type. Then, by 3.7, R links each critical type t q on the left with some t kq , such that i < k q < p. Moreover, since the fan is guarded, the backward stack will become empty when the type t k p+n is read. At this point, the segment T l := t j p ... t j p+n is added to the forward stack. The remaining reductions are linear and T l will be linearly reduced by Lazy Parsing, since the fan is guarded. Thus, M inP P will correctly reduce this string to the sentence type. Inductive Hypothesis Assume M inP P parses any sentence with at most m guarded critical fans.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inductive Step", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Step Consider a string with m + 1 guarded critical fans. Consider the leftmost critical fan, and write T r := t p ... t p+n for the segment given by its right endpoints. Let R be a reduction of the string to the sentence type. We have again two cases:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inductive", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "# 1: Let R reduce T r with T in the top of the stack computed by Lazy Parsing. M inP P will reduce T T r \u2192 1 by Lazy Parsing. After this stage, consider the string P obtained by appending the remaining unprocessed string to St. P contains m critical fans and reduces to sentence type, thus, by inductive hypothesis, M inP P will parse it. # 2: Assume T r does not reduce with types in the stack. Let T := t l p+n ... t l p be the types in the string which are reduced with T r . Their indices must be larger than i. Write \u03b8 := t l p+n ... T r . Write \u03b8 prec for the segment preceding \u03b8, and \u03b8 post for the segment following \u03b8. \u03b8 prec is linear, so its irreducible form D is unique. Moreover, by construction, we must have D = CT l . Then M inP P : \u03b8 prec =\u21d2 CT l by Lazy Parsing. Since T r is guarded, the backward stack will eventually be empty and M inP P : \u03b8 prec \u03b8 =\u21d2 CT l . The remaining string CT l \u03b8 post contains m guarded critical fans and, since T r is guarded, it has a reduction to sentence type. By inductive hypothesis, M inP P : CT l \u03b8 post =\u21d2 s.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inductive", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Note that this proves that M inP P is correct for the class of complexity 2 pregroup grammars identified by the above restrictions on the critical fans. We recall that complexity 2 grammars have the same expressive power as the whole class of pregroup grammars. We now verify that M inP P parses strings in quadratic computational time. Lemma 4.3. M inP P parses a string in time proportional to the square of the length of the string.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inductive", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Proof. Let N be the number of simple types in the processed string. M inP P sees each type exactly once in forward processing. This includes either attempting reductions with the top of the stack or searching for a critical fan. In both cases these processes are carried out by functions with constant time d. Thus the forward processing happens overall in time dN . Then, for each critical fan, we read the string backward. Each backward pass takes time at most dN , and there are at most N critical fans, so backward processing takes time dN 2 at most. Finally, when backward critical reductions are found, we correct the stack and set of reductions. The correction functions have constant time c, so all corrections happen in time cN at most. Summing these terms we obtain:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inductive", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "time = dN 2 + (d + c)N.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inductive", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Certain words are typically assigned compound types by the dictionary, e.g. T := n r sn l for transitive verbs. It might be the case that a compound type T W of a word W is not irreducible. Both M inP P and the parsers mentioned in the Introduction will read the types in T W and reduce T W to an irreducible form. However, the main purpose of grammatical pregroup types is to tell us how to connect different words. Reducing words internally defeats this purpose. We want to overcome this limitation and construct an algorithm that ignores intra-word reductions. Given a word W 1 , let T 1 be its corresponding type (simple or compound). In M inP P we defined stages S n corresponding to each simple type t n being read. Let us write Z 1 for the super stage corresponding to word W 1 being read. Z 1 contains one or more stages S n , corresponding to each simple type in T 1 . We modify M inP P as follows.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "LinP P : Linear Pregroup Parsing algorithm", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "\u2022 At stage Z 1 , we add T 1 = t 1 ...t j to the stack. We immediately jump to super stage Z 2 and stage S j+1 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "LinP P : Linear Pregroup Parsing algorithm", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "\u2022 When a new word W m , with m > 1 and T m := t m 1 ... t m k , is processed, we try to contract t i t m 1 . While types contract, we keep reducing the types t m j with the top of the stack. We stop when either a pair t i t m j does not contract or we reach the end of the word.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "LinP P : Linear Pregroup Parsing algorithm", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "\u2022 If t i t m j \u219b 1 and t m j is linear, we add t m j ... t m k to the stack and jump to stages Z m+1 , S m k +1 . If t m j is critical, we handle it as in M inP P : if a backward reduction is found, the stack and reduction set are updated and we move to S m j +1 ; if no backward reduction is found, we add t m j ... t m k to the stack and move to the next word as above.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "LinP P : Linear Pregroup Parsing algorithm", |
|
"sec_num": "5" |
|
}, |
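The word-block step above can be sketched compactly, again restricted to the linear cases (no backward pass): each word's compound type is matched against the stack type by type, and as soon as a pair fails to contract the rest of the word is pushed wholesale, so intra-word reductions are never attempted. `reduces` and all names are illustrative assumptions, not the DisCopy implementation.

```python
def linpp_linear(words, reduces):
    """words: list of words, each a list of simple types.

    Returns (stack, links): the unreduced types and the contracted
    pairs, checking reductions only across word boundaries.
    """
    stack = []   # unreduced simple types seen so far
    links = []   # (stack_type, word_type) pairs that contracted
    for word in words:
        j = 0
        # contract the word's leading types against the stack top
        while j < len(word) and stack and reduces(stack[-1], word[j]):
            links.append((stack.pop(), word[j]))
            j += 1
        stack.extend(word[j:])  # push the unreduced tail of the word
    return stack, links
```

For a toy typing of "Alice likes Bob" as n, (n^r s n^l), n, this leaves only the sentence type s on the stack, and no reduction inside the verb's compound type is ever tried.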
|
{ |
|
"text": "In other words, LinP P follows the same computational steps as M inP P , while only checking reductions between types of separate words. By assuming dictionaries whose sentences do not involve intra-word reductions, the above proof of correctness can be adapted to hold for LinP P ; the modifications are trivial. We previously highlighted the importance of a linear parser; up to this point, LinP P parses in quadratic time. Below we impose some further restrictions on the input data, which enable linear computational time.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "LinP P : Linear Pregroup Parsing algorithm", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Definition 5.1. We say that a dictionary of complexity 2 is critically bounded if there exists a constant K \u2208 N such that, for each critical type t c in a string, exactly one of the following holds:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "LinP P : Linear Pregroup Parsing algorithm", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "\u2022 t c reduces when processing the substring t c\u2212K ...t c backwards;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "LinP P : Linear Pregroup Parsing algorithm", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "\u2022 t c does not reduce in the string.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "LinP P : Linear Pregroup Parsing algorithm", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In other words, critical underlinks cannot exceed length K.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "LinP P : Linear Pregroup Parsing algorithm", |
|
"sec_num": "5" |
|
}, |
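Under this definition the backward pass only ever re-examines a fixed-size window, which is where the linear bound below comes from. A minimal sketch of that window extraction, with illustrative names:

```python
def backward_window(types, c, K):
    """Segment t_{c-K} ... t_c scanned backward for the critical type
    at position c in a critically bounded dictionary: at most K + 1
    types, regardless of the total string length."""
    return types[max(0, c - K):c + 1]
```

The `max(0, ...)` clamp simply handles critical types that occur within the first K positions of the string.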
|
{ |
|
"text": "Lemma 5.2. Assume the restrictions of section 4.1, no intra-word reductions, and critically bounded dictionaries. Then LinP P parses strings in linear computational time.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "LinP P : Linear Pregroup Parsing algorithm", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Proof. Assume a string of length N . LinP P forward processing involves reading each type at most once. Thus it happens at most in time dN , with d as in section 4.1. Moreover, when a critical fan is read, the string is parsed backward, reading at most K types. This process takes dK time per critical fan, and thus overall time dKN . Finally, there is an extra linear term, cN , given by the time spent correcting the stack and reduction set. Summing up these terms, we obtain overall computational time CN , with C = d(1 + K) + c a constant specific to each bounded dictionary.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "LinP P : Linear Pregroup Parsing algorithm", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In this paper we first defined a quadratic pregroup parser, M inP P , inspired by Preller's minimal parser. We proved its correctness with respect to reducing strings to irreducible forms, and in particular parsing sentences to the sentence type, in the class of pregroup grammars characterised by complexity 2 and guarded critical types. Note that our definition of guards differs from the one given in (Preller, 2007a). We then modified M inP P in order to remove intra-word links. We proved that the obtained algorithm, LinP P , is linear, given that the dictionaries are critically bounded. LinP P was implemented in Python and is soon to be integrated in the DisCopy package. The reader can find it at github:oxford-quantum-group/discopy. LinP P is an important step towards the implementation of a supervised pregroup tagger, which will enable extensive testing of the DisCoCat model on tasks involving larger data-sets. Future theoretical work and implementations will involve researching a probabilistic pregroup parser based on LinP P . Future work might also involve investigating the connection between pregroup parsers and compositional dynamical networks.", |
|
"cite_spans": [ |
|
{ |
|
"start": 404, |
|
"end": 420, |
|
"text": "(Preller, 2007a)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "The author thanks Giovanni Defelice and Alexis Toumi for the constructive discussions and feedback on the parser and its Python implementation. The author thanks their supervisors, Bob Coecke and Stefano Gogioso, for directions and feedback. Many thanks to Antonin Delpeuch for insights on cubic pregroup parsers. Last but not least, the author thanks Anne Preller for the valuable input in reformulating the definition of guards and for the insightful conversation on the topic of this paper.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Lambek grammars based on pregroups. Logical Aspects of Computational Linguistics", |
|
"authors": [ |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Buszkowski", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "", |
|
"volume": "2099", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "W. Buszkowski. 2009. Lambek grammars based on pregroups. Logical Aspects of Computational Linguistics, LNAI 2099.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "The mathematics of text structure", |
|
"authors": [ |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Coecke", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1904.03478" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "B. Coecke. 2019. The mathematics of text structure. arXiv: 1904.03478 [cs.CL].", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Mathematical foundations for a compositional distributional model of meaning", |
|
"authors": [ |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Coecke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Sadrzadeh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--34", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1003.4394v1" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "B. Coecke, M. Sadrzadeh, and S. Clark. 2010. Mathematical foundations for a compositional distributional model of meaning. arXiv:1003.4394v1 [cs.CL], pages 1-34.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Foundations for Near-Term Quantum Natural Language Processing", |
|
"authors": [ |
|
{ |
|
"first": "Bob", |
|
"middle": [], |
|
"last": "Coecke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Giovanni", |
|
"middle": [], |
|
"last": "De", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Felice", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2012.03755" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bob Coecke, Giovanni de Felice, Konstantinos Meichanetzidis, and Alexis Toumi. 2020a. Foundations for Near-Term Quantum Natural Language Processing. arXiv:2012.03755 [quant-ph].", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Quantum natural language processing", |
|
"authors": [ |
|
{ |
|
"first": "Bob", |
|
"middle": [], |
|
"last": "Coecke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Giovanni", |
|
"middle": [], |
|
"last": "De Felice", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Konstantinos", |
|
"middle": [], |
|
"last": "Meichanetzidis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexis", |
|
"middle": [], |
|
"last": "Toumi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stefano", |
|
"middle": [], |
|
"last": "Gogioso", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicolo", |
|
"middle": [], |
|
"last": "Chiappori", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bob Coecke, Giovanni de Felice, Konstantinos Meichanetzidis, Alexis Toumi, Stefano Gogioso, and Nicolo Chiappori. 2020b. Quantum natural language processing.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Discopy: Monoidal categories in python", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Defelice", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Toumi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Coecke", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2005.02975" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "G. Defelice, A. Toumi, and B. Coecke. 2020. Discopy: Monoidal categories in python. arXiv:2005.02975 [math.CT].", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Efficiency of pregroups and the french noun phrase", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Degeilh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Preller", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Journal of Language, Logic and Information", |
|
"volume": "4", |
|
"issue": "", |
|
"pages": "423--444", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S. Degeilh and A. Preller. 2005. Efficiency of pregroups and the French noun phrase. Journal of Language, Logic and Information, 4:423-444.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "An efficient context-free parsing algorithm", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Earley", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1970, |
|
"venue": "Communications of the ACM", |
|
"volume": "13", |
|
"issue": "", |
|
"pages": "94--102", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. Earley. 1970. An efficient context-free parsing algorithm. Communications of the ACM, 13:94-102.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Type grammars revisited", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Lambek", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--27", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. Lambek. 1997. Type grammars revisited. LACL 1997, pages 1-27.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Quantum Natural Language Processing on Near-Term Quantum Computers", |
|
"authors": [ |
|
{ |
|
"first": "Konstantinos", |
|
"middle": [], |
|
"last": "Meichanetzidis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stefano", |
|
"middle": [], |
|
"last": "Gogioso", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Giovanni", |
|
"middle": [], |
|
"last": "De Felice", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicol\u00f2", |
|
"middle": [], |
|
"last": "Chiappori", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2005.04147" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Konstantinos Meichanetzidis, Stefano Gogioso, Giovanni De Felice, Nicol\u00f2 Chiappori, Alexis Toumi, and Bob Coecke. 2020a. Quantum Natural Language Processing on Near-Term Quantum Computers. arXiv:2005.04147 [quant-ph].", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Grammar-Aware Question-Answering on Quantum Computers", |
|
"authors": [ |
|
{ |
|
"first": "Konstantinos", |
|
"middle": [], |
|
"last": "Meichanetzidis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexis", |
|
"middle": [], |
|
"last": "Toumi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Giovanni", |
|
"middle": [], |
|
"last": "De Felice", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bob", |
|
"middle": [], |
|
"last": "Coecke", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2012.03756" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Konstantinos Meichanetzidis, Alexis Toumi, Giovanni de Felice, and Bob Coecke. 2020b. Grammar-Aware Question-Answering on Quantum Computers. arXiv:2012.03756 [quant-ph].", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Parsing pregroup grammars in polynomial time", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Moroz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "International Multiconference on Computer Science and Information Technology", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "K. Moroz. 2009a. Parsing pregroup grammars in polynomial time. International Multiconference on Computer Science and Information Technology.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "A Savateev-style parsing algorithm for pregroup grammars", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Moroz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "International Conference on Formal Grammar", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "133--149", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "K. Moroz. 2009b. A Savateev-style parsing algorithm for pregroup grammars. International Conference on Formal Grammar, pages 133-149.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Linear processing with pregroups", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Preller", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Studia Logica", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "A. Preller. 2007a. Linear processing with pregroups. Studia Logica.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Towards discourse representation via pregroup grammars", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Preller", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "JoLLI", |
|
"volume": "16", |
|
"issue": "", |
|
"pages": "173--194", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "A. Preller. 2007b. Towards discourse representation via pregroup grammars. JoLLI, 16:173-194.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Incremental monoidal grammars", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Shiebler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Toumi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Sadrzadeh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2001.02296v2" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D. Shiebler, A. Toumi, and M. Sadrzadeh. 2020. Incremental monoidal grammars. arXiv:2001.02296v2 [cs.FL].", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Quantum Algorithms for Compositional Natural Language Processing", |
|
"authors": [ |
|
{ |
|
"first": "William", |
|
"middle": [], |
|
"last": "Zeng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bob", |
|
"middle": [], |
|
"last": "Coecke", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Electronic Proceedings in Theoretical Computer Science", |
|
"volume": "221", |
|
"issue": "", |
|
"pages": "67--75", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.4204/EPTCS.221.8" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "William Zeng and Bob Coecke. 2016. Quantum Algorithms for Compositional Natural Language Processing. Electronic Proceedings in Theoretical Computer Science, 221:67-75.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "Definition 3.2. A type c is critical if there exist types a, b \u2208 D such that ab \u2192 1 and bc \u2192 1. A type is linear if it is not critical." |
|
} |
|
} |
|
} |
|
} |