|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T10:43:08.330872Z" |
|
}, |
|
"title": "Graph Matching and Graph Rewriting: GREW tools for corpus exploration, maintenance and conversion", |
|
"authors": [ |
|
{ |
|
"first": "Bruno", |
|
"middle": [], |
|
"last": "Guillaume", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "CNRS", |
|
"location": { |
|
"postCode": "F-54000", |
|
"settlement": "Nancy", |
|
"region": "Inria, LORIA", |
|
"country": "France" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "This article presents a set of tools built around the Graph Rewriting computational framework which can be used to compute complex rule-based transformations on linguistic structures. Application of the graph matching mechanism for corpus exploration, error mining or quantitative typology are also given.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "This article presents a set of tools built around the Graph Rewriting computational framework which can be used to compute complex rule-based transformations on linguistic structures. Application of the graph matching mechanism for corpus exploration, error mining or quantitative typology are also given.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "The motivation of GREW is to have an effective tool to design rule-based transformations of linguistic structures. When designing GREW, our goal was to be able to manipulate at least syntactic and semantic representations of natural language (one of the first application of GREW was the modeling of a syntax-semantics interface). In a naive view, we can say that syntactic structures are trees and semantic ones are graphs. Then, if we want to work with both kinds of structures in a common framework, we can use the fact that a tree can be considered as a graph and hence consider that all structures are graphs. 1 Now, if we consider all structures as graphs, how to describe rule-based transformation on these structures? In practice, these transformations can of course be computed with some programs but when it becomes complex and implies many rules, it is difficult to maintain and to debug. To deal with this, we propose to use the graph rewriting formalism to describe these transformations.", |
|
"cite_spans": [ |
|
{ |
|
"start": 615, |
|
"end": 616, |
|
"text": "1", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Graph rewriting is a well-defined mathematical formalism and we know that any computable transformation can be expressed by a graph rewriting system. In this approach, a global transformation is decomposed in a successive application of small and local transformations which are described by rules; linguistic transformations can be decomposed in a modular way in atomic steps which are easier to manage.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Several graph rewriting tools already exist but some specificities of NLP made it useful to build a system dedicated to this domain. In GREW tools, a built-in notion of feature structure is available and rules can be parametrised by lexical information. Moreover, transformations on dependency structures often requires to change head of substructures and a dedicated command ease this kind of operation (see Section 3.5).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In Section 2, we give a more precise definition of our graphs and graph rewriting framework and the next parts present examples about rewriting (Section 3) and about matching (Section 4).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The book (Bonfante et al., 2018) gives a complete description of the graphs and graph rewriting system used in GREW. We give here a short description on the main aspects.", |
|
"cite_spans": [ |
|
{ |
|
"start": 9, |
|
"end": 32, |
|
"text": "(Bonfante et al., 2018)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Graphs and graph rewriting", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In our framework, a graph is defined by a set of nodes labelled by non-recursive feature structure and a set of labelled edges (note that edges encode relations and hence, we do not consider multiple edges with the same label on the same pair of nodes). In addition to the usual graph mathematical definition of graphs, we also add a notion of order on nodes. For each graph, a sub-part of the nodes are ordered. The subset of ordered nodes can contains all the nodes (for instance in dependency structures like in Figure 1 ); it can be empty (for instance in semantic graphs like AMR structures shown in Section 4.2); but we can also have structures where a strict subpart is ordered, for instance with phrase structure trees where lexical nodes are ordered following the tokens order in the input sentence whereas non-lexical nodes are unordered.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 515, |
|
"end": 523, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Graphs and graph rewriting", |
|
"sec_num": "2" |
|
}, |
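{
"text": "As a small illustration (using the request syntax presented below), patterns can refer to this order with the precedence operators < (immediate precedence) and << (precedence): the sketch pattern { D [upos=DET]; N [upos=NOUN]; D << N } matches a determiner ordered anywhere before a noun.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graphs and graph rewriting",
"sec_num": "2"
},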
|
{ |
|
"text": "Global transformations of graphs are decomposed in small steps; each step is described as a rule. A rule encodes a local transformation and is composed in two parts: the left-hand side which expresses the conditions for the application of the rule and the right-hand part which describes the modifications to be done on the graph.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Graphs and graph rewriting", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Formally, the conditions of application are described by a pattern which is itself a graph. Graph matching is used to decide if a pattern can be found in a graph. The pattern can be refined by a set of NAP (negative application patterns) which are used to filter out some occurrences given by the first pattern. The main pattern is introduced by the keyword pattern and NAPs are introduced with the keyword without (see examples in the next section).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Graphs and graph rewriting", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "To avoid complex mathematical definitions and to propose an operational way to modify graph, GREW describes the modifications of the graph through a sequence of atomic commands for edge deletion, edge creation, feature updating. . .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Graphs and graph rewriting", |
|
"sec_num": "2" |
|
}, |
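{
"text": "As a sketch of a complete rule combining these elements (this particular rule is taken up again in Section 3.3): rule verb { pattern { N [upos=V] } without { M -[aux.pass]-> N } commands { N.upos = VERB } } matches a node tagged V which is not the target of an aux.pass relation, and updates its upos feature.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graphs and graph rewriting",
"sec_num": "2"
},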
|
{ |
|
"text": "When the number of rules increases, it may become tricky to control the order in which they should be applied; a dedicated notion of rewriting strategies was design to let the user control these applications.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Graphs and graph rewriting", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "When using rewriting, confluence and termination are important aspects. These questions are discussed on examples in the next section.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Graphs and graph rewriting", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The goal of this section is to present through examples the usage of the rewriting part of GREW. Some important concepts like confluence and termination will be also discussed.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Graph rewriting in practice", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The conversion between different formats is one the common usage of GREW. We will use the example of the conversion from one dependency annotation format (used in the Sequoia project (Candito and Seddah, 2012)) to Universal Dependencies (UD) (Nivre et al., 2016) . The Figure 1 shows the annotations of a French sentence in both formats.", |
|
"cite_spans": [ |
|
{ |
|
"start": 242, |
|
"end": 262, |
|
"text": "(Nivre et al., 2016)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 269, |
|
"end": 277, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "First rules", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The whole transformation is decomposed into small steps which are described by rules. When GREW is used to rewrite an input graph, a strategy describes how rules should be applied. In the first examples below, the strategy consists in just one rule.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "First rules", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "In our conversion example, we need a rule to change the POS for adjectives: A is used in Sequoia and ADJ in UD. The GREW rule for this transformation is:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "First rules", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "rule adj { pattern { N [upos=A] } commands { N.upos = ADJ } }", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "First rules", |
|
"sec_num": "3.1" |
|
}, |
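{
"text": "Such a rule can be applied from the command line; a sketch, assuming the rule is stored in a file pos.grs and the input corpus in a CoNLL-U file sequoia.conllu (file names are our choice): grew transform -grs pos.grs -strat adj -i sequoia.conllu -o out.conllu",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "First rules",
"sec_num": "3.1"
},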
|
{ |
|
"text": "The application of this rule on the input graph produces, as expected the graph below:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "First rules", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Deux deux D autres autre ADJ photos photo N sont \u00eatre V montr\u00e9es montrer V du de P+D doigt doigt N mod aux.pass mod obj.p det suj", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "First rules", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We can then imagine others similar rules for other POS tags:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "First rules", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "P is Sequoia becomes ADP in UD, N is Sequoia becomes NOUN in UD. rule prep { pattern { N [upos=P] } commands { N.upos = ADP } } rule noun { pattern { N [upos=N] } commands { N.upos = NOUN } }", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "First rules", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "But applying the rule prep to the input graph produces an empty set and the application of noun the input graph produced two different graphs (one with photos tagged as NOUN, the other with doigt tagged as NOUN)!", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "First rules", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "In fact, the result of the application of a rule on a graph is a set of graphs, one for each occurence of the pattern found in the input graph. This set is then empty if the pattern is not found (like pattern {N [upos=P]}) or contains two graphs if the pattern if found twice (like pattern {N [upos=N]}). To iterate the application of a rule, one has to use more complex strategies.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "First rules", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The strategy Onf(noun) 2 iterates the application of the strategy noun on the input graph. With the same input graph (of Figure 1) , the application of GREW with the strategy Onf(noun) produces a graph where the two nouns have the new tag NOUN.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 121, |
|
"end": 130, |
|
"text": "Figure 1)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "First rules", |
|
"sec_num": "3.1" |
|
}, |
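{
"text": "Concretely, rules and strategies can be declared in the same GREW rule file; a minimal sketch of such a file (the strategy name main is our choice): rule noun { pattern { N [upos=N] } commands { N.upos = NOUN } } strat main { Onf (noun) }",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "First rules",
"sec_num": "3.1"
},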
|
{ |
|
"text": "Note that Onf(...) always outputs exactly one graph. With the strategy Onf(prep) for instance, the rewriting process will output one graph, identical to the input graph, obtained after 0 application of the prep rule.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "First rules", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "In previous examples, we considered rules separately, but in a global transformation all the previous rules must be used in the same global transformation. A solution to use several rules in the The package name POS can be used as a strategy name for rewriting. Applying the package POS corresponds to the application of one of the rules of the package. With our input graph, it produces three different graphs, obtained either by the application of the rule adj or by the two possible applications of the rule noun.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "First rules", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "In order to iterate the package, we need the strategy Onf(POS). As before with Onf, exactly one graph is produced with three successive applications of the rules: ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "First rules", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "One key problem that may arise when using rewriting is the non-termination of the process. If we go on with the previous example about POS and consider verbs: the same tag V should be converted to AUX or to VERB. One way to decide that the new POS must be AUX is the presence of the relation aux.pass. We can propose the rule:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Termination", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "rule aux_1 { pattern { M -[aux.pass]-> N } commands { N.upos = AUX } }", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Termination", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "But the process of rewriting with strategy Onf(aux_1) is not terminating because nothing prevents the rule to be applied again and again, the pattern is still present after the application of the rule. In practice, a bound can be set on the number of rules applied 3 and an error is thrown when this bound is reached, in order to avoid non-terminating computation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Termination", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "A way to solve this problem is to make the pattern stricter. With the rule below and the strategy Onf(aux_2), the expected output is obtained after one application of the rule.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Termination", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "rule aux_2 { pattern { M -[aux.pass]-> N; N[upos=V] } commands { N.upos = AUX } }", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Termination", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Of course, in a more general setting, we can have loops which imply more than one rule and which are more difficult to manage. Unfortunately, it is not possible to decide algorithmically if some rewriting system is terminating or not.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Termination", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Anyway, in NLP applications like conversions from format A to format B, it is often easy to ensure termination be defining measure which stands for the fact that we are \"closer\" to the B format after each rule application. For instance, in all the non-looping rules above, if we count the number of Sequoia POS tag in the graph, it is strictly decreasing at each rule application.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Termination", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Another well-known issue with rewriting is the problem of confluence. As said earlier, the Sequoia tag V may be converted to AUX or VERB. A naive way to encode this in rules is to write the package:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Confluence", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "package v_1 { rule aux { pattern { N [upos = V] } commands { N.upos = AUX } } rule verb { pattern { N [upos = V] } commands { N.upos = VERB } } }", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Confluence", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "The two rules overlap: each time a POS V is found, both rules can be used and produces a different output! We call this kind of system nonconfluent. Anyway, the strategy Onf(v_1) still produced exactly one graph by choosing (in a way which cannot be controlled) one of the possible ways to rewrite.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Confluence", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "What should we do with non-confluent system? There are two possible situations: (1) The two rules are correct and there is a real (linguistic) ambiguity and all solutions must be considered or (2) There is no ambiguity, the rules must be corrected.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Confluence", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "In our example, we are clearly in the second case, but we consider briefly the other case for the explanation on how to deal with really non-confluent setting. Let us suppose that we are interested in all possible solutions. GREW provides a strategy Iter(v_1) to do this: this strategy applied to the same input graph produces 4 different graphs with different combinations of either AUX or VERB for the two words sont and montr\u00e9es.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Confluence", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Of course, in our POS tags conversion example, the correct solution is to design more carefully our two rules, in order to produce the correct output: Here, the two rules are clearly exclusive: the same clause M -[aux.pass]-> N is used first in the pattern part of rule aux and in the without part of rule verb. With these two new rules, the system is confluent, and there is only one possible output. This can be tested with the Iter(v_2) strategy which produces all possible graphs, exactly one in this case.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Confluence", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "package", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Confluence", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Of course, the strategy Onf(v_2) produces the same output in this setting. When a package p is confluent, the two strategies Onf(p) and Iter(p) give the same result. In practice, the strategy Onf(p) must be preferred because it is much more efficient to compute.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Confluence", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "In Figure 1 , we can observe that in addition to a different POS tagset, the UD format also uses a different tokenisation. The word du of the input sentence is a token with a POS P+D in Sequoia but this is in fact an amalgam of two lexical units: a preposition and a determiner 4 . In UD, such combined tag are not allowed and the sentence is annotated with two tokens de and le for the word du. Hence, we have to design a rule to make this new tokenisation. The rule below computes this transformation: This is our first rule with more than one commands. In general, the transformation is described by a sequence of commands which are applied successively to the current graph. The application of this rule to our input graph builds:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 11, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "More commands", |
|
"sec_num": "3.4" |
|
}, |
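{
"text": "A plausible reconstruction of this rule, based on the command fragment preserved in the caption of the corresponding figure (the rule name and the use of add_node to insert the new token right after N are assumptions): rule P_D { pattern { N [upos=\"P+D\"]; N -[obj.p]-> M } commands { add_node D :> N; N.form = \"de\"; N.upos = ADP; D.form = \"le\"; D.upos = DET; add_edge M -[det]-> D } } The application of this rule to our input graph builds:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "More commands",
"sec_num": "3.4"
},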
|
{ |
|
"text": "Deux deux D autres autre A photos photo N sont \u00eatre V montr\u00e9es montrer V de de ADP le DET doigt doigt N mod aux.pass mod det det suj obj.p", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "More commands", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "Note that N -[obj.p]-> M is not required to find a place where the rule must be applied, but we need it to get access to the node with identifier M and to define properly the command add_edge.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "More commands", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "For transformation between different syntactic annotation frameworks, we often have to deal with the fact that heads of constituents may change. For instance, with the sentence je vois que tu es malade [en: I see that you are sick]. The head of the clause que tu es malade is es in Sequoia and malade in UD. In practice, we have to realise the transformation between the two graphs described by Figure 2 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 395, |
|
"end": 403, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Changing head", |
|
"sec_num": "3.5" |
|
}, |
|
{ |
|
"text": "We can use what was presented before to remove the edge ats, to add a new edge cop and to change the POS of es; but we need something more: moving all other edges incident to the old head es towards the new head malade. GREW provides a dedicated command shift to compute this. In the rule below, the command shift V ==> ATS means: change all edges starting (resp. ending) on the node V to make them start (resp. end) on ATS. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Changing head", |
|
"sec_num": "3.5" |
|
}, |
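{
"text": "A minimal sketch of such a rule, using the relation names of the example (the rule name and the exact feature updates are assumptions): rule ats_head { pattern { e: V -[ats]-> ATS } commands { del_edge e; add_edge ATS -[cop]-> V; V.upos = AUX; shift V ==> ATS } }",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Changing head",
"sec_num": "3.5"
},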
|
{ |
|
"text": "Above, we have seen how to handle atomic transformations through rules. But, in order to define a complete transformation system, some larger set of rules are needed. It is important to be able to control the order in which subset of rules should applied. In practice, large transformation system are divided in several steps and sub-systems are applied successively. In our example (Sequoia to UD), the global transformation can be divided into: 1) change POS and tokenisation, 2) change relation labels, 3) make needed head changes. The can be expressed in GREW by a strategy Seq(POS, relations, heads), where POS, relations and heads correspond to dedicated subset of rules.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "More strategies", |
|
"sec_num": "3.6" |
|
}, |
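{
"text": "A sketch of the corresponding rule file (package contents elided; we assume here that each step is confluent, so that each package can be iterated with Onf): package POS { ... } package relations { ... } package heads { ... } strat main { Seq (Onf (POS), Onf (relations), Onf (heads)) }",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "More strategies",
"sec_num": "3.6"
},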
|
{ |
|
"text": "Graph matching is a subpart of the system used to describe left part of rewriting rules, but it is also useful alone as a way to make requests on a graph or a set of graphs. In practice, it can be used for searching examples of a given construction, for checking consistencies of annotations or for error mining. This subpart of GREW is now proposed as a separate tool, named GREW-MATCH and freely available as a web service 5 . This graph matching system is also available in the ARBORATORGREW tool (Guibon et al., 2020) 6 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 500, |
|
"end": 523, |
|
"text": "(Guibon et al., 2020) 6", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Application of graph matching", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "A screenshot of the GREW-MATCH interface is shown in Figure 3 . With the top bar and the list on the left, the user can chose a corpus (all 183 UD and SUD 2.7 corpora and a few other freely available corpora can be requested). A Request is entered and the user can visualise the occurrences found in the corpora with elements of the pattern highlighted in the sentence.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 53, |
|
"end": 61, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Application of graph matching", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "It is difficult in general to ensure consistent annotations in large corpora. GREW-MATCH can be used to detect this kind of inconsistencies by making linguistic observation on some corpus. The Figure 3 illustrates the first step of such usage with the request: find nsubj relations where there is a Number disagreement (the head and the dependant of the relation both have a Number feature but with different values). In version 2.7 of UD_ENGLISH-GUM, 120 occurrences of the pattern are found, but there are not all errors, as the example of the figure shows. We can then refine the request by adding some negative patterns (with the without keyword), for instance to exclude occurrences with a copula linked to the head:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 189, |
|
"end": 201, |
|
"text": "The Figure 3", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Error mining", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "pattern { M -[nsubj]-> N; M.Number <> N.Number; } without { M -[cop]-> C }", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error mining", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The new request returns 25 occurrences which can be manually inspected: we have found a mix of annotation errors, irregularities (institution plural name used as a singular the United Nations rates. . . ) or misspelled sentences. The same approach can be used for many aspect: searching for verbs without subjects, for unwanted multiple relation (more than one obj on the same node).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error mining", |
|
"sec_num": "4.1" |
|
}, |
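{
"text": "These last two checks can be written as requests in the same style (a sketch; in the second request, we assume that matching is injective, so that O1 and O2 denote distinct nodes): pattern { V [upos=VERB] } without { V -[nsubj]-> S } and pattern { V -[obj]-> O1; V -[obj]-> O2 }",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error mining",
"sec_num": "4.1"
},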
|
{ |
|
"text": "More generally, GREW-MATCH can be used for any kind of data exploration. Here, we use the example of AMR (Banarescu et al., 2013) annotations, this will allow us to show examples where the graph matching used cannot be reduced as a tree matching. Two corpora are available from the AMR website 7 : the English translation of the Saint-Exup\u00e9ry's novel The Little Prince and some PubMed articles. With the pattern below, we search for a node which is the ARG0 argument of two different related concepts. ", |
|
"cite_spans": [ |
|
{ |
|
"start": 105, |
|
"end": 129, |
|
"text": "(Banarescu et al., 2013)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "data exploration", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "The pattern matching mechanism is also available in a count subcommand for GREW. Given a set of corpora and a set of requests, a table with the number of occurrences of each pattern on each corpora is returned. For instance, with the two patterns below, we can compute the ratio of nsubj relations which are use with or without a copula construction. The chart below shows these ratios, sorted by increasing values on the 141 corpora of UD 2.7 with more than 1000 sentences. Most corpora have a ratio between 0% and 25% with all value represented and a few corpora have a significantly higher proportion. 3 are above 30%: FAROESE-OFT with 67%, FRENCH-FQB with 43% and PERSIAN-SERAJI with 34%. Many implementations of graph rewriting or graph transformation exist in other research areas. But the massive usage of feature structures in linguistic unit description, the usage of dedicated technical formats like CoNLL-U or the need for specific kinds of transformations (like the shift operation described above) make general graph transformation system difficult to use in NLP applications. Such applications would require several encodings of the data and they will not allow for a straightforward expression of linguistic transformations. Among existing rule-based software for transformations of linguistic structures, we can cite OGRE 8 (Ribeyre, 2016) and Depedit 9 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 1340, |
|
"end": 1355, |
|
"text": "(Ribeyre, 2016)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Typology", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "OGRE uses a notion of rules which is very closed to the ones used in GREW, but it does not provide interface with lexicons and there is no notion of strategies for the description of complex graph transformations which imply a large number of rules.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Typology", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Depedit can be used as a separate tool or as a Python library. It is specifically designed to manipulate only dependency trees. Contrary to GREW, it does not proposed a built-in notion of strategies and does not handle not confluent rewriting processing. Moreover, the notion of rules is also more restricted: there are no NAP and it is not possible to express additional contraints on morphological features like the one we used in Section 4.1: M.Number <> N.Number.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Typology", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "A large number of online query tools are available online. Some of them have a more restrictive query language like SETS 10 or Kontext 11 . In these two tool, there is no notion of NAP and the kind of contraints that can be expressed is limited.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tools for corpora querying", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "The PML Tree Query 12 and INESS 13 offers a query language with the same expressive power as the one proposed in GREW. An advantage of GREW is that it is interfaced in the larger annotation tool ARBORATORGREW 14 . With ARBORATOR-GREW, the user may query on his own treebank and then have access to a manual editing mode on the query output or to automatic updating through GREW rules.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tools for corpora querying", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "GREW was used in many tasks of corpus conversion. It is used for instance for conversion between UD and SUD (Gerdes et al., 2018 (Gerdes et al., , 2019 : all UD corpora are converted into SUD with it. GREW is implemented in Ocaml and is quite efficient: for instance the conversion of UD 2.7 (1.48M sentences, 26.5M tokens) into SUD uses 100 rules and takes 5,500 seconds on a labtop (around 267 graphs rewritten by second).", |
|
"cite_spans": [ |
|
{ |
|
"start": 108, |
|
"end": 128, |
|
"text": "(Gerdes et al., 2018", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 129, |
|
"end": 151, |
|
"text": "(Gerdes et al., , 2019", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "GREW is available as a command line program or through a Python library. Installation proce-dures and usage documentation are given on the GREW website: https://grew.fr. A web-based interface for the usage of the rewriting part of the software will be provided soon.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "In this article, examples are given on dependency syntax and on semantic representations like AMR. A more complete set of examples is given in (Bonfante et al., 2018) . Many other linguistic structures can be encoded as graphs and we plan to extend the experiments to other kind of semantic representations.", |
|
"cite_spans": [ |
|
{ |
|
"start": 143, |
|
"end": 166, |
|
"text": "(Bonfante et al., 2018)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We may lose information if the order between the child nodes of a given node (see Section 2).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Onf stands for \"one normal form\"; it will be explained more in detail later with other strategies.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "10,000 by default", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This is exactly what the tag P+D means.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://match.grew.fr 6 https://arborator.github.io", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://amr.isi.edu/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://gitlab.etermind.com/cribeyre/ OGRE 9 https://corpling.uis.georgetown.edu/ depedit", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://depsearch-depsearch.rahtiapp. fi/ds_demo 11 http://lindat.mff.cuni.cz/services/ kontext 12 http://lindat.mff.cuni.cz/services/ pmltq 13 http://clarino.uib.no/iness 14 https://arborator.github.io", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "Thanks to all GREW users for their feedback and their requests which helps improve the software.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Abstract meaning representation for sembanking", |
|
"authors": [ |
|
{ |
|
"first": "Laura", |
|
"middle": [], |
|
"last": "Banarescu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Claire", |
|
"middle": [], |
|
"last": "Bonial", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shu", |
|
"middle": [], |
|
"last": "Cai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Madalina", |
|
"middle": [], |
|
"last": "Georgescu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kira", |
|
"middle": [], |
|
"last": "Griffitt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ulf", |
|
"middle": [], |
|
"last": "Hermjakob", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martha", |
|
"middle": [], |
|
"last": "Palmer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nathan", |
|
"middle": [], |
|
"last": "Schneider", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "178--186", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. In Proceedings of the 7th Linguis- tic Annotation Workshop and Interoperability with Discourse, pages 178-186.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Application of Graph Rewriting to Natural Language Processing", |
|
"authors": [ |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Bonfante", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bruno", |
|
"middle": [], |
|
"last": "Guillaume", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guy", |
|
"middle": [], |
|
"last": "Perrier", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Guillaume Bonfante, Bruno Guillaume, and Guy Per- rier. 2018. Application of Graph Rewriting to Nat- ural Language Processing, volume 1 of Logic, Lin- guistics and Computer Science Set. ISTE Wiley.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Le corpus Sequoia : annotation syntaxique et exploitation pour l'adaptation d'analyseur par pont lexical", |
|
"authors": [ |
|
{ |
|
"first": "Marie", |
|
"middle": [], |
|
"last": "Candito", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Djam\u00e9", |
|
"middle": [], |
|
"last": "Seddah", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "TALN 2012 -19e conf\u00e9rence sur le Traitement Automatique des Langues Naturelles", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marie Candito and Djam\u00e9 Seddah. 2012. Le corpus Sequoia : annotation syntaxique et exploitation pour l'adaptation d'analyseur par pont lexical. In TALN 2012 -19e conf\u00e9rence sur le Traitement Automa- tique des Langues Naturelles, Grenoble, France.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "SUD or Surface-Syntactic Universal Dependencies: An annotation scheme nearisomorphic to UD", |
|
"authors": [ |
|
{ |
|
"first": "Kim", |
|
"middle": [], |
|
"last": "Gerdes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bruno", |
|
"middle": [], |
|
"last": "Guillaume", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sylvain", |
|
"middle": [], |
|
"last": "Kahane", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guy", |
|
"middle": [], |
|
"last": "Perrier", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Universal Dependencies Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kim Gerdes, Bruno Guillaume, Sylvain Kahane, and Guy Perrier. 2018. SUD or Surface-Syntactic Uni- versal Dependencies: An annotation scheme near- isomorphic to UD. In Universal Dependencies Workshop 2018, Brussels, Belgium.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Improving Surface-syntactic Universal Dependencies (SUD): surface-syntactic relations and deep syntactic features", |
|
"authors": [ |
|
{ |
|
"first": "Kim", |
|
"middle": [], |
|
"last": "Gerdes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bruno", |
|
"middle": [], |
|
"last": "Guillaume", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sylvain", |
|
"middle": [], |
|
"last": "Kahane", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guy", |
|
"middle": [], |
|
"last": "Perrier", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "TLT 2019 -18th International Workshop on Treebanks and Linguistic Theories", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kim Gerdes, Bruno Guillaume, Sylvain Kahane, and Guy Perrier. 2019. Improving Surface-syntactic Universal Dependencies (SUD): surface-syntactic relations and deep syntactic features. In TLT 2019 -18th International Workshop on Treebanks and Lin- guistic Theories, Paris, France.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "When Collaborative Treebank Curation Meets Graph Grammars", |
|
"authors": [ |
|
{ |
|
"first": "Ga\u00ebl", |
|
"middle": [], |
|
"last": "Guibon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marine", |
|
"middle": [], |
|
"last": "Courtin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kim", |
|
"middle": [], |
|
"last": "Gerdes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bruno", |
|
"middle": [], |
|
"last": "Guillaume", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "LREC 2020 -12th Language Resources and Evaluation Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ga\u00ebl Guibon, Marine Courtin, Kim Gerdes, and Bruno Guillaume. 2020. When Collaborative Treebank Cu- ration Meets Graph Grammars. In LREC 2020 - 12th Language Resources and Evaluation Confer- ence, Marseille, France.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Universal dependencies v1: A multilingual treebank collection", |
|
"authors": [ |
|
{ |
|
"first": "Joakim", |
|
"middle": [], |
|
"last": "Nivre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marie-Catherine", |
|
"middle": [], |
|
"last": "De Marneffe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Filip", |
|
"middle": [], |
|
"last": "Ginter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Goldberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jan", |
|
"middle": [], |
|
"last": "Hajic", |
|
"suffix": "" |
|
}, |
|
{
"first": "Christopher",
"middle": ["D"],
"last": "Manning",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "McDonald",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Silveira",
"suffix": ""
}
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of LREC 2016", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1659--1666", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joakim Nivre, Marie-Catherine de Marneffe, Filip Gin- ter, Yoav Goldberg, Jan Hajic, Christopher D Man- ning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, et al. 2016. Universal dependencies v1: A multilingual treebank collection. In Proceed- ings of LREC 2016, pages 1659-1666.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "M\u00e9thodes d'analyse supervis\u00e9e pour l'interface syntaxe-s\u00e9mantique : De la r\u00e9\u00e9criture de graphes \u00e0 l'analyse par transitions", |
|
"authors": [ |
|
{ |
|
"first": "Corentin", |
|
"middle": [], |
|
"last": "Ribeyre", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Corentin Ribeyre. 2016. M\u00e9thodes d'analyse super- vis\u00e9e pour l'interface syntaxe-s\u00e9mantique : De la r\u00e9\u00e9criture de graphes \u00e0 l'analyse par transitions. Ph.D. thesis, Universit\u00e9 Paris 7 Diderot & Inria.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "= \"de\"; N.upos = ADP; D.form = \"le\"; D.upos = DET; add_edge M -[det]-> D } }" |
|
}, |
|
"FIGREF1": { |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "pattern { P1 -> P2; P1 -[ARG0]-> N; P2 -[ARG0]-> N; }" |
|
}, |
|
"FIGREF2": { |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "GREW-MATCH main interface 211 occurrences of this pattern are found in The Little Prince. Two of them are showed below, for the two sentences: \"What are you trying to say?\" and \"I administer them,\" replied the businessman." |
|
}, |
|
"FIGREF4": { |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "-based transformations of linguistic structures" |
|
}, |
|
"TABREF0": { |
|
"num": null, |
|
"html": null, |
|
"text": "Annotation of the sentence Deux autres photos sont montr\u00e9es du doigt [en: Two other photos are pointed out] in Sequoia (above) and in UD (below) same rewriting process is to put them in the same package construction, for instance with the 3 rules above:", |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td>det</td><td/><td>suj</td><td/><td/><td/><td/></tr><tr><td/><td/><td>mod</td><td/><td>aux.pass</td><td>mod</td><td>obj.p</td><td/></tr><tr><td>Deux deux</td><td>autres autre</td><td>photos photo</td><td>sont \u00eatre</td><td colspan=\"2\">montr\u00e9es montrer</td><td>du de</td><td>doigt doigt</td></tr><tr><td>D</td><td>A</td><td>N</td><td>V</td><td>V</td><td/><td>P+D</td><td>N</td></tr><tr><td/><td/><td/><td/><td/><td colspan=\"2\">obl:mod</td><td/></tr><tr><td/><td>nummod</td><td/><td>nsubj:pass</td><td/><td/><td>case</td><td/></tr><tr><td/><td>amod</td><td/><td colspan=\"2\">aux:pass</td><td/><td/><td>det</td></tr><tr><td>Deux deux</td><td>autres autre</td><td>photos photo</td><td>sont \u00eatre</td><td>montr\u00e9es montrer</td><td>de de</td><td>le le</td><td>doigt doigt</td></tr><tr><td>NUM</td><td>ADJ</td><td>NOUN</td><td>AUX</td><td>VERB</td><td>ADP</td><td>DET</td><td>NOUN</td></tr><tr><td>Figure 1: package POS {</td><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>rule adj { ... }</td><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>rule prep { ... }</td><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>rule noun { ... }</td><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>}</td><td/><td/><td/><td/><td/><td/><td/></tr></table>" |
|
} |
|
} |
|
} |
|
} |