{
"paper_id": "C14-1023",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:20:59.264744Z"
},
"title": "A Three-Step Transition-Based System for Non-Projective Dependency Parsing",
"authors": [
{
"first": "Oph\u00e9lie",
"middle": [],
"last": "Lacroix",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "LINA -University of Nantes",
"location": {
"addrLine": "2 Rue de la Houssini\u00e8re",
"postCode": "44322",
"settlement": "Nantes Cedex 3"
}
},
"email": "[email protected]"
},
{
"first": "Denis",
"middle": [],
"last": "B\u00e9chet",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "LINA -University of Nantes",
"location": {
"addrLine": "2 Rue de la Houssini\u00e8re",
"postCode": "44322",
"settlement": "Nantes Cedex 3"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper presents a non-projective dependency parsing system that is transition-based and operates in three steps. The three steps comprise one classical method for projective dependency parsing and two inverse methods that separately predict the right and left non-projective dependencies. Splitting the parsing in this way increases the scores on both projective and non-projective dependencies compared to state-of-the-art non-projective dependency parsers. Moreover, each step is performed in linear time.",
"pdf_parse": {
"paper_id": "C14-1023",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper presents a non-projective dependency parsing system that is transition-based and operates in three steps. The three steps comprise one classical method for projective dependency parsing and two inverse methods that separately predict the right and left non-projective dependencies. Splitting the parsing in this way increases the scores on both projective and non-projective dependencies compared to state-of-the-art non-projective dependency parsers. Moreover, each step is performed in linear time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Dependency parsing is a widely studied task and a significant step in various natural language processing applications. Dependency parsers should therefore aim for both speed and precision. In recent years, various methods for dependency parsing have been proposed (K\u00fcbler et al., 2009) . Among these methods, transition-based systems are particularly suitable.",
"cite_spans": [
{
"start": 262,
"end": 283,
"text": "(K\u00fcbler et al., 2009)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The first methods developed for transition-based parsers proposed to produce projective dependency structures (including no crossing dependencies). Then, extended methods were developed to handle the non-projective cases. The non-projective dependency structures admit non-projective dependencies (a dependency is non-projective if at least one word located between the head and the dependent of the dependency does not depend directly or indirectly on the head, see Figure 1 for example). Handling the non-projective cases has been the foundation of the first work concerning the dependency representations (Tesni\u00e8re, 1959; Melcuk, 1988) . Moreover, it is important to successfully parse non-projective sentences, which can be very helpful in processes such as question-answering.",
"cite_spans": [
{
"start": 608,
"end": 624,
"text": "(Tesni\u00e8re, 1959;",
"ref_id": "BIBREF11"
},
{
"start": 625,
"end": 638,
"text": "Melcuk, 1988)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 467,
"end": 475,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The transition-based parsers achieve interesting overall results for both projective and non-projective analyses. But, in practice, the non-projective methods achieve far lower and more variable scores on non-projective dependencies than on projective ones. Finding these dependencies is more difficult because non-projective dependencies are often distant ones. It is thus essential to achieve decent scores on non-projective dependencies as well as on projective ones, because some languages contain a high rate of non-projective dependencies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Here we propose to predict the projective dependencies separately from the non-projective ones. Using a mixed dependency representation that includes both projective and non-projective dependency annotations in one representation, we predict the projective dependencies in a first step. Then, taking advantage of the good results of projective dependency parsing, we predict the non-projective dependencies in a second step.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The formal dependency representation on which we base our work results from the formalism of categorial dependency grammars (CDG) (Dekhtyar and Dikovsky, 2008) . It handles the discontinuities of natural languages. The induced dependency representation is mixed: it associates projective and non-projective dependencies to represent complementary syntactic information in one dependency structure. (Figure 1: Dependency structure of the sentence \"He went there, supported by his family.\" Anchors are shown below the sentence; non-projective dependencies appear as dashed lines; the other dependencies are plain projective dependencies.) In this representation, each non-projective dependency is paired with a projective one called an anchor. From any dependency structure a projective tree 1 can be extracted.",
"cite_spans": [
{
"start": 130,
"end": 159,
"text": "(Dekhtyar and Dikovsky, 2008)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our approach is to predict the projective dependency trees first, using a standard and efficient method for projective dependency parsing. In a second step, we use the information (the projective/anchor labelled dependencies) given by the projective parsing to predict the non-projective dependencies. This second step is split into two inverse methods which independently predict the right and left non-projective dependencies. The advantage of this split is that the parsing is performed in linear time while achieving better scores on non-projective dependencies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Finally, in order to evaluate the efficiency of our method, we apply it on data annotated according to the formalism of the categorial dependency grammar. The data consists on a treebank containing both projective and non-projective trees associated with sentences of French.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our approach is similar to a post-processing method for retrieving the non-projective dependencies. In a way, our work is then analogous to the work of Hall and Nov\u00e1k (2005) who apply a post-processing method after converting constituency trees into dependency ones, since the conversion cannot automatically recover the non-projective relations.",
"cite_spans": [
{
"start": 152,
"end": 173,
"text": "Hall and Nov\u00e1k (2005)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Moreover, taking advantage of the efficiency of projective dependency methods to predict the non-projective dependencies is a technique used by Nivre and Nilsson (2005) in their pseudo-projective method. They projectivize the dependency trees before parsing in order to apply a projective method first, and then apply an inverse transformation to retrieve the non-projective dependencies. For our method, we do not need to projectivize the trees since the dependency representation we use includes both projective and non-projective annotations in one representation. But we can employ the projectivization method to build such data by adding the generated projective dependencies to the non-projective structure as if they were artificial anchors. Consequently, our approach can be applied to treebanks containing standard non-projective trees.",
"cite_spans": [
{
"start": 143,
"end": 167,
"text": "Nivre and Nilsson (2005)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
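The projectivization ("lifting") step mentioned above can be sketched in a few lines. This is a minimal illustration of the Nivre and Nilsson (2005) idea, not their actual implementation: a non-projective arc is repeatedly re-attached to the grandparent until the tree is projective. The label-encoding part of the pseudo-projective method is omitted and the function names are ours.

```python
def dominates(heads, h, d):
    """True if h is an ancestor of d (or equals d) in the tree heads[dep] = head."""
    while d != 0:
        if d == h:
            return True
        d = heads[d]
    return h == 0

def is_nonprojective(heads, d):
    """An arc (heads[d], d) is non-projective if some word strictly between
    head and dependent is not dominated by the head."""
    h = heads[d]
    lo, hi = min(h, d), max(h, d)
    return any(not dominates(heads, h, k) for k in range(lo + 1, hi))

def projectivize(heads):
    """Lift each non-projective arc to the head's head until the tree is projective."""
    heads = dict(heads)
    changed = True
    while changed:
        changed = False
        for d in heads:
            if heads[d] != 0 and is_nonprojective(heads, d):
                heads[d] = heads[heads[d]]  # lift: re-attach to the grandparent
                changed = True
    return heads
```

The lifted arcs produced this way play the role of the artificial anchors used to build the second treebank.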
{
"text": "The advantage of our method is that the information that is useful for retrieving the non-projective dependencies is not predicted during the projective parsing, which makes the projective and non-projective steps completely independent of each other. Moreover, the non-projective steps are data-driven and remain linear.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Our work is based on dependency structures combining projective and non-projective annotations in one representation. In such a representation the projective dependencies bring both local and syntactic information while the non-projective dependencies bring only syntactic information (i.e. the relation shared by the dependents). Thus, each non-projective dependency is paired with a projective relation (called anchor) determining the position of the dependent in the sentence. Figure 1 presents a non-projective dependency structure of a sentence which illustrates the use of a projective relation (anchor) and a non-projective dependency to represent a discontinuous relation: \"supported\" is a modifier for the pronoun \"he\".",
"cite_spans": [],
"ref_spans": [
{
"start": 480,
"end": 488,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Representation and Formalism",
"sec_num": "3"
},
{
"text": "The dependency representation is induced by a particular formalism: the class of the categorial dependency grammars (CDG). The categories of the grammars correspond to the dependency labels. The rules L1, I1 and \u21261, presented in Table 1 , are the classical left elimination rules of categorial grammars. Only the left rules are shown but there are symmetrical right rules. These rules define the projective dependencies and anchors. Moreover, CDGs are classical categorial grammars to which the notion of polarized valencies was added. Each of the first three rules includes the concatenation of potentials (such as P, P1, P2), which are lists of polarized valencies. The polarized valencies are label names associated with a polarity (south-west \u2199, north-west \u2196, north-east \u2197 and south-east \u2198). They represent the ends of the non-projective dependencies. The south polarities indicate an incoming non-projective dependency and the north polarities indicate an outgoing non-projective dependency. The rule D1 allows the elimination of dual pairs of polarized valencies, following the FA principle.",
"cite_spans": [],
"ref_spans": [
{
"start": 234,
"end": 241,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Representation and Formalism",
"sec_num": "3"
},
{
"text": "First Available (FA) principle: the closest dual polarized valencies with the same name are paired.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representation and Formalism",
"sec_num": "3"
},
{
"text": "Thus, the elimination of the dual pairs (\u2199C) (\u2196C) and (\u2197C) (\u2198C) defines respectively left and right non-projective dependencies labelled by C. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representation and Formalism",
"sec_num": "3"
},
{
"text": "L1: C^P1 [C\\\u03b2]^P2 \u22a2 [\u03b2]^P1P2 ; I1: C^P1 [C*\\\u03b2]^P2 \u22a2 [C*\\\u03b2]^P1P2 ; \u21261: [C*\\\u03b2]^P \u22a2 [\u03b2]^P ; D1: \u03b1^P1(\u2199C)P(\u2196C)P2 \u22a2 \u03b1^P1PP2, if (\u2199C)(\u2196C) satisfies the FA principle",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representation and Formalism",
"sec_num": "3"
},
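The FA principle behind rule D1 can be illustrated with a short sketch (our own illustrative code, not part of the formalism's definition): scanning a potential left to right, each arriving ↖C or ↘C valency is paired with the closest pending dual valency (↙C or ↗C) bearing the same name.

```python
# Closing polarities and the dual opening polarity they pair with.
DUAL = {"↖": "↙", "↘": "↗"}

def fa_pairs(potential):
    """Pair dual polarized valencies by the First Available (FA) principle:
    the closest dual valencies with the same name are paired.
    potential: list of (polarity, name) pairs; returns index pairs (open, close)."""
    pending = []  # indices of still-unmatched ↙C / ↗C valencies
    pairs = []
    for i, (pol, name) in enumerate(potential):
        if pol in ("↙", "↗"):
            pending.append(i)
        else:
            # match the NEAREST pending dual with the same name (FA principle)
            for j in reversed(pending):
                pj, nj = potential[j]
                if pj == DUAL[pol] and nj == name:
                    pairs.append((j, i))
                    pending.remove(j)
                    break
    return pairs
```

Each returned pair (↙C, ↖C) corresponds to a left non-projective dependency labelled C, and each (↗C, ↘C) to a right one.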
{
"text": "We conduct a three-step transition-based parsing. We choose the arc-eager method of Nivre (2008) to perform the first step. Note that any projective method for dependency parsing would also be appropriate to perform this step. The second and third steps are methods which go through the sentence (respectively from left to right and from right to left) in order to find the non-projective dependencies.",
"cite_spans": [
{
"start": 84,
"end": 96,
"text": "Nivre (2008)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "4"
},
{
"text": "The arc-eager method is an efficient transition-based method for projective dependency parsing. A transition system is composed of a set of configurations (states), a set of transitions (operations on the configurations), an initial configuration and a set of terminal configurations. Transition-based parsing consists of applying a sequence of transitions to configurations in order to build a dependency structure. For the arc-eager method, a configuration is a triplet \u27e8\u03c3, \u03b2, A\u27e9 where:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Projective Dependency Parsing",
"sec_num": "4.1"
},
{
"text": "\u2022 \u03c3 is a stack of partially treated words;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Projective Dependency Parsing",
"sec_num": "4.1"
},
{
"text": "\u2022 \u03b2 is a buffer of non-treated words;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Projective Dependency Parsing",
"sec_num": "4.1"
},
{
"text": "\u2022 A is a set of dependencies (the partially built dependency structure).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Projective Dependency Parsing",
"sec_num": "4.1"
},
{
"text": "The dependencies are described by triplets such as (k, l, i) where k is the position of the head, l is the label of the dependency and i is the position of the dependent. The set of transitions includes three transitions which are evolutions of the standard transitions of the system of Yamada and Matsumoto (2003) plus the Reduce transition, which allows the first word of the stack to be deleted when it shares no dependency with the first word of the buffer. The standard Right-Arc and Left-Arc are renamed respectively as Local-Right and Local-Left since these transitions only add local dependencies (without distinction between projective ones and anchors). The Shift transition pops the first word from the buffer and pushes it into the stack. The Reduce transition pops the first word from the stack. The effects of the transitions on configurations are detailed in Table 2 . For a given sentence W = w1...wn, the initial configuration of the transition-based system is defined as follows:",
"cite_spans": [
{
"start": 287,
"end": 314,
"text": "Yamada and Matsumoto (2003)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 875,
"end": 882,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Projective Dependency Parsing",
"sec_num": "4.1"
},
{
"text": "Local-Left(l): (\u03c3 | wi, wj | \u03b2, A) \u21d2 (\u03c3, wj | \u03b2, A \u222a {(j, l, i)}), if i \u2260 0 \u2227 \u00ac\u2203k\u2203l\u2032 (k, l\u2032, i) \u2208 A ; Local-Right(l): (\u03c3 | wi, wj | \u03b2, A) \u21d2 (\u03c3 | wi wj, \u03b2, A \u222a {(i, l, j)}), if \u00ac\u2203k\u2203l\u2032 (k, l\u2032, j) \u2208 A ; Reduce: (\u03c3 | wi, \u03b2, A) \u21d2 (\u03c3, \u03b2, A), if \u2203k\u2203l (k, l, i) \u2208 A ; Shift: (\u03c3, wi | \u03b2, A) \u21d2 (\u03c3 | wi, \u03b2, A)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Projective Dependency Parsing",
"sec_num": "4.1"
},
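The four transitions and their preconditions can be sketched in code. This is a minimal illustration under our own naming (the `Config` class and `has_head` helper are ours), not the MaltParser implementation:

```python
class Config:
    """Arc-eager configuration ⟨σ, β, A⟩ for a sentence of n words."""

    def __init__(self, n):
        self.stack = [0]                     # σ, with w0 = artificial root
        self.buffer = list(range(1, n + 1))  # β, the untreated words
        self.arcs = set()                    # A: triplets (head, label, dependent)

    def has_head(self, i):
        return any(d == i for (_, _, d) in self.arcs)

    def local_left(self, label):
        # (σ|wi, wj|β, A) ⇒ (σ, wj|β, A ∪ {(j, l, i)}), if i ≠ 0 and i has no head
        i, j = self.stack[-1], self.buffer[0]
        assert i != 0 and not self.has_head(i)
        self.stack.pop()
        self.arcs.add((j, label, i))

    def local_right(self, label):
        # (σ|wi, wj|β, A) ⇒ (σ|wi wj, β, A ∪ {(i, l, j)}), if j has no head
        i, j = self.stack[-1], self.buffer[0]
        assert not self.has_head(j)
        self.arcs.add((i, label, j))
        self.stack.append(self.buffer.pop(0))

    def reduce(self):
        # pop wi from σ, allowed only once wi already has a head
        assert self.has_head(self.stack[-1])
        self.stack.pop()

    def shift(self):
        # push the first buffer word onto the stack
        self.stack.append(self.buffer.pop(0))
```

For a three-word sentence, the sequence Shift, Local-Left(subj), Local-Right(root), Local-Right(obj) yields the arcs {(2, subj, 1), (0, root, 2), (2, obj, 3)}.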
{
"text": "([w0], [w1, ..., wn], \u2205)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Projective Dependency Parsing",
"sec_num": "4.1"
},
{
"text": "where w 0 is the root of the structure. And any terminal configuration is of the form:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Projective Dependency Parsing",
"sec_num": "4.1"
},
{
"text": "([w0], [], A\u2032)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Projective Dependency Parsing",
"sec_num": "4.1"
},
{
"text": "where A\u2032 contains the fully projective dependency/anchor structure for the sentence W 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Projective Dependency Parsing",
"sec_num": "4.1"
},
{
"text": "This step should produce the projective dependency structure of Figure 2 for the sentence \"Il y est all\u00e9, soutenu par sa famille\" (French equivalent of the sentence seen in Figure 1 ). ",
"cite_spans": [],
"ref_spans": [
{
"start": 64,
"end": 72,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 173,
"end": 181,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Projective Dependency Parsing",
"sec_num": "4.1"
},
{
"text": "With the aim of retrieving non-projective dependencies, we propose two inverse methods also inspired by transition-based systems. For these methods, the configuration is a quadruplet \u27e8\u03c3, \u03b2, \u03b8, A\u27e9 where \u03c3, \u03b2 and A are the same stack, buffer and set of arcs as those defined for projective parsing in the previous subsection, and \u03b8 is a list of polarized valencies. The valencies have the same role here as in the formalism of the categorial dependency grammars (detailed in section 3). They define the ends of the non-projective dependencies. Therefore, our idea is to go through the sentence in order to predict, for each word, whether a non-projective dependency could end on the word (by adding valency \u2199l or \u2198l to the list \u03b8) or should start from it (by adding valency \u2196l or \u2197l to the list \u03b8). As soon as dual valencies are collected in \u03b8, they are removed from it (according to the FA principle) and the corresponding non-projective dependency is added to the set of dependencies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adding Non-Projective Dependencies",
"sec_num": "4.2"
},
{
"text": "In the second step, the valencies associated with the left dependencies are computed, i.e. the valencies of the form \u2199l and \u2196l. The sentence is linearly covered from left to right, as in the previous projective step. Details of the transitions are presented in Table 3 . The Shift transition is the same as during the previous step and covers the sentence classically from left to right. The PutValency transition makes it possible to predict, for the first word of the buffer, exactly one southwest valency \u2199l, which means that a left dependency labelled l can end on this word. In addition, the valency is concatenated at the end of \u03b8. The transition Dist-Left is applied when the first word of the buffer receives the dual valency (i.e. a valency of the form \u2196l). If at least one valency \u2199l belongs to \u03b8, then the last one is removed from \u03b8 and the non-projective dependency corresponding to the pair of dual valencies \u2199l \u2196l (left non-projective dependency labelled l) is added to A. Therefore, for a given sentence, the initial configuration of this system is (",
"cite_spans": [],
"ref_spans": [
{
"start": 259,
"end": 266,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Adding Non-Projective Dependencies",
"sec_num": "4.2"
},
{
"text": "PutValency(\u2199l): (\u03c3, wi | \u03b2, \u03b8, A) \u21d2 (\u03c3 | wi, \u03b2, \u03b8 \u2199li, A), if \u2199li \u2209 \u03b8 ; Dist-Left(\u2196l): (\u03c3, wj | \u03b2, \u03b81 \u2199li \u03b82, A) \u21d2 (\u03c3, wj | \u03b2, \u03b81 \u03b82, A \u222a {(j, l, i)}), if \u2199l \u2209 \u03b82 \u2227 \u2200k \u2199ki \u2209 \u03b81 \u03b82 ; Shift: (\u03c3, wi | \u03b2, \u03b8, A) \u21d2 (\u03c3 | wi, \u03b2, \u03b8, A)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adding Non-Projective Dependencies",
"sec_num": "4.2"
},
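A minimal sketch of this left-to-right pass follows. It is our own simplification: at most one transition decision per word, with the classifier abstracted as an `oracle` callback, and all names are ours:

```python
def left_nonprojective_step(n, arcs, oracle):
    """Second step (sketch): scan words 1..n left to right, collecting ↙l
    valencies and discharging each Dist-Left(↖l) against the closest pending
    ↙l in θ, following the FA principle."""
    theta = []        # pending (label, position) southwest valencies, in order
    arcs = set(arcs)  # start from the projective structure A
    for j in range(1, n + 1):
        action, label = oracle(j, theta)
        if action == "put":       # PutValency(↙l): a left dependency may end on wj
            theta.append((label, j))
        elif action == "dist":    # Dist-Left(↖l): wj is the head; pair with the
            for k in range(len(theta) - 1, -1, -1):  # closest pending ↙l (FA)
                if theta[k][0] == label:
                    _, i = theta.pop(k)
                    arcs.add((j, label, i))
                    break
        # action == "shift": nothing to record for wj
    return arcs, theta
```

On the example of Table 5, an oracle that puts ↙clit-l-obj on word 2 and fires Dist-Left on word 4 adds the arc (4, clit-l-obj, 2).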
{
"text": "[w0], [w1, ..., wn], (), A)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adding Non-Projective Dependencies",
"sec_num": "4.2"
},
{
"text": "where A is the projective dependency structure predicted by the arc-eager method. The terminal configuration is a quadruplet of the form ([w0, ..., wn], [], \u03b8\u2032, A\u2032) where \u03b8\u2032 may contain southwest valencies that did not match their dual, and A\u2032 is a partially non-projective dependency structure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adding Non-Projective Dependencies",
"sec_num": "4.2"
},
{
"text": "The third step uses the method inverse to the previous one and predicts the right non-projective dependencies. In this method, the sentence is linearly covered from right to left. The initial configuration ([w0, ..., wn\u22121], [wn], (), A\u2032) contains the partial dependency structure A\u2032 produced by the previous method, and the terminal configuration ([w0], [w1, ..., wn], \u03b8\u2032\u2032, A\u2032\u2032) contains the fully non-projective dependency structure A\u2032\u2032. The transitions used here are presented in Table 4 . This time, the PutValency transition adds only southeast valencies (\u2198l) at the beginning of \u03b8 and pops the first word of \u03c3 to push it into \u03b2. The Dist-Right transition adds a right non-projective dependency to the set of arcs by predicting a dual valency of the form \u2197l. Finally, the RShift transition pops the first word of \u03c3 to push it into \u03b2. Splitting the prediction of the non-projective dependencies into two different methods is essential to find the right non-projective dependencies as well as the left ones. In practice, finding the head (i.e. the \u2196l and \u2197l valencies) of a non-projective dependency is easier once the dependent (i.e. the \u2199l and \u2198l valencies) has been previously predicted. Indeed, the prediction system benefits from information about the presence of the dependent's valency in \u03b8 to predict the dual valency. Moreover, the heads are predicted more efficiently when the projective dependency associated with the word was predicted with the right label during the first parsing step. The next section presents the prediction system and the features needed to produce good transition predictions.",
"cite_spans": [],
"ref_spans": [
{
"start": 490,
"end": 497,
"text": "Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Adding Non-Projective Dependencies",
"sec_num": "4.2"
},
{
"text": "PutValency(\u2198l): (\u03c3 | wi, \u03b2, \u03b8, A) \u21d2 (\u03c3, wi | \u03b2, \u2198li \u03b8, A), if \u2198li \u2209 \u03b8 ; Dist-Right(\u2197l): (\u03c3 | wj, \u03b2, \u03b81 \u2198li \u03b82, A) \u21d2 (\u03c3 | wj, \u03b2, \u03b81 \u03b82, A \u222a {(j, l, i)}), if \u2198l \u2209 \u03b81 \u2227 \u2200k \u2198ki \u2209 \u03b81 \u03b82 ; RShift: (\u03c3 | wi, \u03b2, \u03b8, A) \u21d2 (\u03c3, wi | \u03b2, \u03b8, A)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transition Application",
"sec_num": null
},
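Under the same simplifying assumptions as for the left pass (one oracle decision per word, function names are ours), the right-to-left pass is the mirror image:

```python
def right_nonprojective_step(n, arcs, oracle):
    """Third step (sketch): scan words n..1 right to left, collecting ↘l
    valencies and discharging each Dist-Right(↗l) against the closest
    pending ↘l in θ (FA principle). The head lies to the LEFT of the
    dependent, so it is reached after the dependent in this scan."""
    theta = []        # pending (label, position) southeast valencies
    arcs = set(arcs)  # start from the structure produced by the left pass
    for j in range(n, 0, -1):
        action, label = oracle(j, theta)
        if action == "put":       # PutValency(↘l): a right dependency may end on wj
            theta.append((label, j))
        elif action == "dist":    # Dist-Right(↗l): wj is the head of the closest ↘l
            for k in range(len(theta) - 1, -1, -1):
                if theta[k][0] == label:
                    _, i = theta.pop(k)
                    arcs.add((j, label, i))
                    break
    return arcs, theta
```

On the Table 5 example, putting ↘modif on "soutenu" (word 6) and firing Dist-Right on "Il" (word 1) adds the arc (1, modif, 6).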
{
"text": "The application of these two steps on the sentence seen in Figure 2 is shown in Table 5 . The",
"cite_spans": [],
"ref_spans": [
{
"start": 59,
"end": 67,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 81,
"end": 88,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Transition Application",
"sec_num": null
},
{
"text": "([w0], [Il,...,.], (), A) ; Shift \u21d2 ([w0,Il], [y,...,.], (), A) ; PutValency(\u2199clit-l-obj) \u21d2 ([w0,Il], [y,...,.], (\u2199clit-l-obj), A) ; Shift \u21d2 ([w0,...,y], [est,...,.], (\u2199clit-l-obj), A) ; Shift \u21d2 ([w0,...,est], [all\u00e9,...,.], (\u2199clit-l-obj), A) ; Dist-Left(\u2196clit-l-obj) \u21d2 ([w0,...,est], [all\u00e9,...,.], (), A1 = A \u222a {(4, clit-l-obj, 2)}) ; Shift (x6) \u21d2 ([w0,...,.], [], (), A1) ; ([w0,...,famille], [.], (), A1) ; RShift \u21d2 ([w0,...,], [famille,.], (), A1) ; RShift (x3) \u21d2 ([w0,...,,], [soutenu,...,.], (), A1) ; PutValency(\u2198modif) \u21d2 ([w0,...,,], [soutenu,...,.], (\u2198modif), A1) ; RShift (x5) \u21d2 ([w0], [Il,...,.], (\u2198modif), A1) ; Dist-Right(\u2197modif) \u21d2 ([w0], [Il,...",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transition Application",
"sec_num": null
},
{
"text": "A2 = A1 \u222a {(1, modif, 6)}) Table 5 : Transition sequences of the left and right non-projective steps on the sentence in Figure 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 28,
"end": 35,
"text": "Table 5",
"ref_id": null
},
{
"start": 121,
"end": 129,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": ",.], (),",
"sec_num": null
},
{
"text": "projective structure built during the first step ( Figure 2 ) is substituted for the set of arcs A in the initial configuration of the left non-projective step. The non-projective dependency structure A2 provided at the end of the right (final) non-projective step is presented in Figure 3 . ",
"cite_spans": [],
"ref_spans": [
{
"start": 51,
"end": 59,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 281,
"end": 289,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": ",.], (),",
"sec_num": null
},
{
"text": "Transition-based systems are particularly interesting for deterministic data-driven parsing. Associated with a statistical method, such as a probabilistic graphical model or a linear classifier, and suitable features, the prediction of the transitions is very efficient. It ensures deterministic parsing in linear time for both the projective arc-eager method and our two non-projective post-processing methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Oracle",
"sec_num": "4.3"
},
{
"text": "Previous work such as (Yamada and Matsumoto, 2003) shows that support vector machines (SVM) make it possible to achieve good scores on dependency parsing when associated with a transition-based system. Therefore, we chose this classifier to predict the transitions of our two post-processing methods. Moreover, the arc-eager method (i.e. nivreeager) being already successfully implemented and optimized, we decided to use the MaltParser (Nivre et al., 2007) to perform the projective dependency parsing.",
"cite_spans": [
{
"start": 22,
"end": 50,
"text": "(Yamada and Matsumoto, 2003)",
"ref_id": "BIBREF12"
},
{
"start": 433,
"end": 453,
"text": "(Nivre et al., 2007)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Oracle",
"sec_num": "4.3"
},
{
"text": "For this projective step, the features are composed of classical features such as the word forms, POS-tags and dependency labels of the current words (the first elements of the stack and the buffer), their neighbors and their attached dependents. For the two non-projective steps, the feature pattern additionally includes some features on the projective head of the first word of the buffer and the list of the valencies remaining in \u03b8. The feature pattern is presented in Table 6 . Nevertheless, the SVM model handles only numerical features, so each feature must be converted into a binary feature determining its absence or presence. For the valencies, the features denote the absence or presence of each possible valency label in \u03b8.",
"cite_spans": [],
"ref_spans": [
{
"start": 471,
"end": 478,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Oracle",
"sec_num": "4.3"
},
{
"text": "\u2022 Word forms: w{i\u22121,i+1}, wj",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Pattern",
"sec_num": null
},
{
"text": "\u2022 POS-tags: t{i\u22122,i+2}, tj \u2022 Labels: lj (the projective dependency label), (lj1, ..., ljn) (the list of dependency labels) \u2022 Valencies: (v0, ..., vk) (the list of valencies in \u03b8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Pattern",
"sec_num": null
},
{
"text": "Table 6 : Features for the prediction of transitions in the two inverse methods. i is the position of the first word in \u03b2, j is the position of the head of wi, and the list of dependency labels is the list of labels of the right or left dependents of the head (depending on the right or left method).",
"cite_spans": [],
"ref_spans": [
{
"start": 43,
"end": 50,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Feature Pattern",
"sec_num": null
},
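The conversion to binary features described above amounts to one-hot encoding over a fixed vocabulary. A minimal sketch (illustrative only, not the actual feature extractor; the function name is ours):

```python
def binarize(feature_values, vocab):
    """One-hot encode a set of categorical feature values over a fixed
    vocabulary, as required by an SVM that handles only numerical features.
    Each output position is 1 if that vocabulary item is present, else 0."""
    index = {v: k for k, v in enumerate(vocab)}
    vec = [0] * len(vocab)
    for v in feature_values:
        if v in index:
            vec[index[v]] = 1
    return vec
```

For instance, the valency feature for a \u03b8 containing only a clit-l-obj valency yields a 1 at the clit-l-obj position and 0 elsewhere in the label vocabulary.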
{
"text": "In order to evaluate the efficiency of our approach, we experiment on a dependency treebank for which the data were annotated following the formalism of the categorial dependency grammars 3 . We call this treebank the CDG Treebank 1. Moreover, in order to evaluate the adaptation of our method to standard treebanks, we also want to apply the method to data for which the anchors have been artificially created. Therefore, we build a second treebank from the first one, which we call the CDG Treebank 2, in which the original anchors are replaced by artificial anchors generated by the projectivization step of the pseudo-projective method of Nivre and Nilsson (2005) .",
"cite_spans": [
{
"start": 662,
"end": 686,
"text": "Nivre and Nilsson (2005)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
{
"text": "The CDG Treebank 1 contains 3030 French sentences, each paired with a dependency structure. The dependency structures are composed of both projective and non-projective dependencies. Out of the 37580 dependencies (excluding the anchor ones), 3.8% are non-projective. Overall, 41% of the dependency structures of the treebank contain at least one non-projective dependency.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Non-Projective Dependency Treebank",
"sec_num": "5.1"
},
{
"text": "The data were annotated semi-automatically using the CDG Lab (Alfared et al., 2011) , a development environment dedicated to large scale grammar and treebank development. Thus, the annotations followed the formalism proposed by the categorial dependency grammar of French (Dikovsky, 2011) . The labels of the dependencies are the 117 categories used by the grammar. Most of the dependency labels are exclusively associated with projective dependencies. 23 labels can be associated with both projective and non-projective dependencies. Among these, the most frequent are clitics, negatives, objects, reflexives and copredicates. In most of the cases, clitics, negatives and reflexives are associated with short dependencies (generally, one or two words separate the head from the dependent) whereas copredicates or appositions are often associated with distant dependencies (the heads and dependents can be located at opposite ends of the sentence). Four dependency labels are exclusively associated with non-projective dependencies; they are particular cases of aggregation, copula, comparison and negation.",
"cite_spans": [
{
"start": 61,
"end": 83,
"text": "(Alfared et al., 2011)",
"ref_id": "BIBREF0"
},
{
"start": 272,
"end": 288,
"text": "(Dikovsky, 2011)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Non-Projective Dependency Treebank",
"sec_num": "5.1"
},
{
"text": "The grammar and the treebank were developed simultaneously. Consequently, a large part of the sentences were used to develop the grammar and were chosen to cover as much as possible the syntactic phenomena of French. The treebank contains sentences from newspapers, 19th and 20th century literary works and plain language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Non-Projective Dependency Treebank",
"sec_num": "5.1"
},
{
"text": "To build the CDG Treebank 2, we removed the anchors of the dependency structures of the CDG Treebank 1 and added the projective dependencies generated by projectivization 4 . Note that 90.9% of the anchors are the same between the two CDG treebanks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Non-Projective Dependency Treebank",
"sec_num": "5.1"
},
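The projectivization used to produce the artificial anchors can be sketched as an arc-lifting transformation in the spirit of Nivre and Nilsson (2005). The sketch below is illustrative only and is not the paper's implementation; the function names and the 0-as-root encoding are assumptions.

```python
# Illustrative sketch of projectivization by arc lifting: each non-projective
# arc is re-attached to an ancestor of its head until the tree is projective.
# heads[i] is the head of word i; index 0 is the artificial root.

def is_projective_arc(heads, d):
    """An arc head(d) -> d is projective iff every word strictly
    between them is dominated by head(d)."""
    h = heads[d]
    for w in range(min(h, d) + 1, max(h, d)):
        a = w
        while a != 0 and a != h:
            a = heads[a]  # climb toward the root
        if a != h:
            return False
    return True

def projectivize(heads):
    heads = list(heads)
    changed = True
    while changed:
        changed = False
        for d in range(1, len(heads)):
            # root attachments are left untouched in this simple sketch
            if heads[d] != 0 and not is_projective_arc(heads, d):
                heads[d] = heads[heads[d]]  # lift the arc one step
                changed = True
    return heads

# heads = [0, 3, 3, 0, 2]: the arc 2 -> 4 crosses the arc 3 -> 1,
# so word 4 is lifted and re-attached to word 3
projective_heads = projectivize([0, 3, 3, 0, 2])
```

In the paper's setting the lifted arc plays the role of the anchor, while the original non-projective dependency is kept aside to be recovered in the later steps.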
{
"text": "We evaluate our method through a 10-fold cross-validation on the non-projective dependency treebank. First, we train the prediction models (the MaltParser training model and the SVM model) on each training set containing 90% sentences of the treebank. Second, each fold of our testing data sets is tagged with Part-Of-Speech tags using Melt (Denis and Sagot, 2009) , a POS-tagger that achieves high score on French. Then the sentences are parsed.",
"cite_spans": [
{
"start": 341,
"end": 364,
"text": "(Denis and Sagot, 2009)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "5.2"
},
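The 10-fold protocol above can be sketched as follows: each fold holds out 10% of the sentences for testing and the models are trained on the remaining 90%. This is illustrative only (the paper trains MaltParser and SVM models on each fold); the function name is an assumption.

```python
# Minimal sketch of 10-fold cross-validation over a list of sentences.

def ten_fold_splits(sentences, k=10):
    folds = [sentences[i::k] for i in range(k)]  # round-robin partition
    for i in range(k):
        test = folds[i]
        train = [s for j, fold in enumerate(folds) if j != i for s in fold]
        yield train, test

data = list(range(100))  # stand-in for 100 treebank sentences
for train, test in ten_fold_splits(data):
    assert len(train) == 90 and len(test) == 10
    assert sorted(train + test) == data  # every sentence used exactly once
```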
{
"text": "In order to estimate the benefit of our method, our results are compared with those obtained by the methods proposed by the MaltParser. The table shows the results of the methods that give the best results among the non-projective ones and the best results among the projective ones (associated with the pseudo-projective method (Nivre and Nilsson, 2005) ):",
"cite_spans": [
{
"start": 329,
"end": 354,
"text": "(Nivre and Nilsson, 2005)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "5.2"
},
{
"text": "\u2022 the covnonproj (non-projective) method inspired by Covington (2001) ;",
"cite_spans": [
{
"start": 53,
"end": 69,
"text": "Covington (2001)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "5.2"
},
{
"text": "\u2022 the nivreeager (projective) method associated with the pseudo-projective method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "5.2"
},
{
"text": "For a fair comparison, the scores are computed on the same data for each experiments, i.e. on the nonprojective structures minus the anchors and the dependencies combined with punctuations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "5.2"
},
{
"text": "Moreover, in order to demonstrate that our method can be applied successfully on standard treebanks, the experiments are performed on the CDG Treebank 1 an 2. The comparison scores that are used in these experiments are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "5.2"
},
{
"text": "\u2022 the label accuracy (LA), i.e. the percentage of words for which the correct label is assigned;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "5.2"
},
{
"text": "\u2022 the unlabelled attachment score (UAS), i.e. the percentage of words for which the correct dependency is assigned;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "5.2"
},
{
"text": "\u2022 the labelled attachment score (LAS), i.e. the percentage of words for which the correct labelled dependency is assigned.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "5.2"
},
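The three comparison scores above can be sketched as follows, with each word represented by a (head index, label) pair. Function and variable names are illustrative, not taken from the paper's evaluation code.

```python
# LA: label correct; UAS: head correct; LAS: both head and label correct.

def scores(gold, pred):
    """Return (LA, UAS, LAS) over parallel lists of (head, label) pairs."""
    assert len(gold) == len(pred)
    n = len(gold)
    la = sum(g[1] == p[1] for g, p in zip(gold, pred)) / n   # label accuracy
    uas = sum(g[0] == p[0] for g, p in zip(gold, pred)) / n  # unlabelled attachment
    las = sum(g == p for g, p in zip(gold, pred)) / n        # labelled attachment
    return la, uas, las

gold = [(2, "subj"), (0, "root"), (2, "obj")]
pred = [(2, "subj"), (0, "root"), (1, "obj")]  # wrong head on the last word
la, uas, las = scores(gold, pred)  # la = 1.0, uas = las = 2/3
```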
{
"text": "The results of the experiments are presented in Table 7 . First, we notice that the scores relating to projective dependencies of our method, both for CDG Treebank 1 (3) and CDG Treebank 2 (4), are better than those obtained by the covnonproj method (1) and equivalent to the pseudo-projective method (2). We assume that finding non-projective dependencies at the same time as the projective ones is more difficult than finding projective dependencies only. Moreover, the scores on non-projective dependencies (2) with ours (3).",
"cite_spans": [],
"ref_spans": [
{
"start": 48,
"end": 55,
"text": "Table 7",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "5.3"
},
{
"text": "are particularly interesting. Our method achieves far better scores on non-projective dependencies than the other two. The label accuracy (LA) achieves significantly better scores (+6.8) than the covnonproj method. Indeed, the projective step allows to find the anchors which are a kind of projective dependencies, so there are easier to predict than the non-projective dependencies. Thus, the label accuracy of the non-projective dependencies takes advantage of the good results of the anchors which were not paired with a non-projective dependency during the second and third parsing steps. Concerning the attachment scores, our method still outperforms the two others. Globally, our method allows to recover the head of the non-projective dependencies more successfully. The non-projective dependencies can be also compared depending on their direction. The left nonprojective dependencies achieve far better scores (75.0% LAS) than the right non-projective dependencies (42.7% LAS). We know that the non-projective step performed from right to left is essential to recover the right non-projective dependencies. In fact, finding the right non-projective dependencies by performing the non-projective step from left to right seems almost infeasible because it is essential to find the dependent first. Therefore, the problem comes essentially from the bad prediction of the anchors during the projective step. Indeed, only 51.4% of the words associated with a right non-projective dependency receive the correct label (LA), compared with 84.2% for those associated with left nonprojective dependencies. The under-representation of the right non-projective dependencies (25% of the non-projective dependencies) in the treebank is a first explanation. But, even the more frequent labels (associated with right non-projective dependencies) achieve low scores. 
Moreover, we noticed that even the right projective dependencies always achieve lower scores than the left projective dependencies. This problem may suggest that the use of a left-to-right projective method is not appropriate to predict the right dependencies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "5.3"
},
{
"text": "Furthermore, we note that our method achieve equivalent scores on CDG Treebank 1 and CDG Treebank 2, and even slightly better for non-projective dependencies with the use of artificial anchors. This suggest that our method could be succesfully applied to standard treebanks in which artificial anchors would have been added.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "5.3"
},
{
"text": "We propose a three-step method retrieving separately the projective dependencies and anchors, the left non-projective dependencies and the right non-projective dependencies through the use of a mixed dependency representation. The projective step and the two non-projective steps are performed in linear time and allow to outperform state-of-the-art transition-based scores on non-projective dependencies. The method needs a learning corpus that associate to each non-projective dependency a projective anchor. Thus the method is well adapted to CDG treebanks. But we showed that the method can be applied to standard treebanks by adding artificial anchors with the use of a method of projectivization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "One of the advantages of our method is a significant improvement on the label accuracy for the nonprojective dependencies. The efficiency of the two non-projective methods depends on the good results of the projective parsing. Moreover, performing the non-projective parsing from left-to-right and from rightto-left raises interesting questions on how to recover the right and left dependencies for both projective and non-projective methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "This work is licensed under a Creative Commons Attribution 4.0 International Licence. Page numbers and proceedings footer are added by the organisers. Licence details: http://creativecommons.org/licenses/by/4.0/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Composed of projective dependencies and anchors of non-projective dependencies, see Section 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The treebank is not yet publicly available. But the authors have made it available to us.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The labels of the artificial anchors do not contain additional encoded information. They are identical to the labels of the non-projective dependencies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The pseudo-projective method were applied with the option \"path\" for projectivization and deprojectivization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We want to thank Dani\u00e8le Beauquier and Alexander Dikovsky for giving us the CDG Treebank on which we experimented our system. Moreover, we want to thank all our reviewers : the anonymous reviewers of Coling for their accurate reviews, and the members of the team TALN of the University of Nantes (Colin de la Higuera, Florian Boudin and the master students) who reviewed our work with a fresh eye.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "CDG Lab: a Toolbox for Dependency Grammars and Dependency Treebanks Development",
"authors": [
{
"first": "Ramadan",
"middle": [],
"last": "Alfared",
"suffix": ""
},
{
"first": "Denis",
"middle": [],
"last": "B\u00e9chet",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Dikovsky",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the International Conference on Dependency Linguistics",
"volume": "",
"issue": "",
"pages": "272--281",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ramadan Alfared, Denis B\u00e9chet, and Alexander Dikovsky. 2011. CDG Lab: a Toolbox for Dependency Gram- mars and Dependency Treebanks Development. In Proceedings of the International Conference on Dependency Linguistics, DEPLING 2011, pages 272-281, Barcelona, Spain, September.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A fundamental algorithm for dependency parsing",
"authors": [
{
"first": "Michael",
"middle": [
"A"
],
"last": "Covington",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 39th Annual ACM Southeast Conference",
"volume": "",
"issue": "",
"pages": "95--102",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael A. Covington. 2001. A fundamental algorithm for dependency parsing. In Proceedings of the 39th Annual ACM Southeast Conference, pages 95-102.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Generalized categorial dependency grammars",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Dekhtyar",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Dikovsky",
"suffix": ""
}
],
"year": 2008,
"venue": "Trakhtenbrot/Festschrift",
"volume": "4800",
"issue": "",
"pages": "230--255",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Dekhtyar and Alexander Dikovsky. 2008. Generalized categorial dependency grammars. In Trakhten- brot/Festschrift, LNCS 4800, pages 230-255. Springer.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Coupling an Annotated Corpus and a Morphosyntactic Lexicon for Stateof-the-Art POS Tagging with Less Human Effort",
"authors": [
{
"first": "Pascal",
"middle": [],
"last": "Denis",
"suffix": ""
},
{
"first": "Beno\u00eet",
"middle": [],
"last": "Sagot",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Pacific Asia Conference on Language, Information and Computation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pascal Denis and Beno\u00eet Sagot. 2009. Coupling an Annotated Corpus and a Morphosyntactic Lexicon for State- of-the-Art POS Tagging with Less Human Effort. In Proceedings of the Pacific Asia Conference on Language, Information and Computation, PACLIC 2009, Hong Kong, China.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Categorial Dependency Grammars: from Theory to Large Scale Grammars",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Dikovsky",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the International Conference on Dependency Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander Dikovsky. 2011. Categorial Dependency Grammars: from Theory to Large Scale Grammars. In Proceedings of the International Conference on Dependency Linguistics, DEPLING 2011, September.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Corrective modeling for non-projective dependency parsing",
"authors": [
{
"first": "Keith",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "V\u00e1clav",
"middle": [],
"last": "Nov\u00e1k",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Ninth International Workshop on Parsing Technology, IWPT 2005",
"volume": "",
"issue": "",
"pages": "42--52",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Keith Hall and V\u00e1clav Nov\u00e1k. 2005. Corrective modeling for non-projective dependency parsing. In Proceedings of the Ninth International Workshop on Parsing Technology, IWPT 2005, pages 42-52.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Dependency Parsing",
"authors": [
{
"first": "Sandra",
"middle": [],
"last": "K\u00fcbler",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2009,
"venue": "Synthesis Lectures on Human Language Technologies",
"volume": "1",
"issue": "1",
"pages": "1--127",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sandra K\u00fcbler, Ryan McDonald, and Joakim Nivre. 2009. Dependency Parsing. Synthesis Lectures on Human Language Technologies, 1(1):1-127.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Dependency syntax : Theory and Practice",
"authors": [
{
"first": "Igor",
"middle": [],
"last": "Melcuk",
"suffix": ""
}
],
"year": 1988,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Igor Melcuk. 1988. Dependency syntax : Theory and Practice. State University of New York Press.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Pseudo-projective Dependency Parsing",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Jens",
"middle": [],
"last": "Nilsson",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, ACL '05",
"volume": "",
"issue": "",
"pages": "99--106",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre and Jens Nilsson. 2005. Pseudo-projective Dependency Parsing. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, ACL '05, pages 99-106, Ann Arbor, Michigan.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "MaltParser: A Language-Independent System for Data-Driven Dependency Parsing",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Johan",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Jens",
"middle": [],
"last": "Nilsson",
"suffix": ""
},
{
"first": "Atanas",
"middle": [],
"last": "Chanev",
"suffix": ""
},
{
"first": "Glsen",
"middle": [],
"last": "Eryigit",
"suffix": ""
},
{
"first": "Sandra",
"middle": [],
"last": "Kbler",
"suffix": ""
},
{
"first": "Svetoslav",
"middle": [],
"last": "Marinov",
"suffix": ""
},
{
"first": "Erwin",
"middle": [],
"last": "Marsi",
"suffix": ""
}
],
"year": 2007,
"venue": "Natural Language Engineering",
"volume": "13",
"issue": "",
"pages": "95--135",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre, Johan Hall, Jens Nilsson, Atanas Chanev, Glsen Eryigit, Sandra Kbler, Svetoslav Marinov, and Er- win Marsi. 2007. MaltParser: A Language-Independent System for Data-Driven Dependency Parsing. Natural Language Engineering, 13:95-135, 6.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Algorithms for Deterministic Incremental Dependency Parsing",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2008,
"venue": "Comput. Linguist",
"volume": "34",
"issue": "4",
"pages": "513--553",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre. 2008. Algorithms for Deterministic Incremental Dependency Parsing. Comput. Linguist., 34(4):513-553, December.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "\u00c9l\u00e9ments de syntaxe structurale",
"authors": [
{
"first": "Lucien",
"middle": [],
"last": "Tesni\u00e8re",
"suffix": ""
}
],
"year": 1959,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lucien Tesni\u00e8re. 1959.\u00c9l\u00e9ments de syntaxe structurale. Klincksieck.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Statistical Dependency Analysis with Support Vector Machines",
"authors": [
{
"first": "Hiroyasu",
"middle": [],
"last": "Yamada",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the International Conference on Parsing Technologies, IWPT 2003",
"volume": "",
"issue": "",
"pages": "195--206",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hiroyasu Yamada and Yuji Matsumoto. 2003. Statistical Dependency Analysis with Support Vector Machines. In Proceedings of the International Conference on Parsing Technologies, IWPT 2003, pages 195-206.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"num": null,
"text": "Projective dependency structure of the sentence \"Il y est all\u00e9, soutenu par sa famille\".",
"uris": null,
"type_str": "figure"
},
"FIGREF2": {
"num": null,
"text": "Non-projective dependency structure of the sentence inFigure 2",
"uris": null,
"type_str": "figure"
},
"FIGREF3": {
"num": null,
"text": ".",
"uris": null,
"type_str": "figure"
},
"TABREF0": {
"content": "<table/>",
"html": null,
"text": "",
"num": null,
"type_str": "table"
},
"TABREF1": {
"content": "<table/>",
"html": null,
"text": "Transitions of the arc-eager method.",
"num": null,
"type_str": "table"
},
"TABREF2": {
"content": "<table/>",
"html": null,
"text": "",
"num": null,
"type_str": "table"
},
"TABREF3": {
"content": "<table/>",
"html": null,
"text": "",
"num": null,
"type_str": "table"
},
"TABREF5": {
"content": "<table/>",
"html": null,
"text": "Results of the non-projective dependency parsing comparing the MaltParser methods (1) and",
"num": null,
"type_str": "table"
}
}
}
}