|
{ |
|
"paper_id": "S13-1044", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:42:57.184615Z" |
|
}, |
|
"title": "Bootstrapping Semantic Role Labelers from Parallel Data", |
|
"authors": [ |
|
{ |
|
"first": "Mikhail", |
|
"middle": [], |
|
"last": "Kozhevnikov", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Ivan Titov Saarland University", |
|
"location": { |
|
"postCode": "15 11 50 66041", |
|
"settlement": "Postfach, Saarbr\u00fccken", |
|
"country": "Germany" |
|
} |
|
}, |
|
"email": "mkozhevn|[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We present an approach which uses the similarity in semantic structure of bilingual parallel sentences to bootstrap a pair of semantic role labeling (SRL) models. The setting is similar to co-training, except for the intermediate model required to convert the SRL structure between the two annotation schemes used for different languages. Our approach can facilitate the construction of SRL models for resource-poor languages, while preserving the annotation schemes designed for the target language and making use of the limited resources available for it. We evaluate the model on four language pairs, English vs German, Spanish, Czech and Chinese. Consistent improvements are observed over the self-training baseline.", |
|
"pdf_parse": { |
|
"paper_id": "S13-1044", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We present an approach which uses the similarity in semantic structure of bilingual parallel sentences to bootstrap a pair of semantic role labeling (SRL) models. The setting is similar to co-training, except for the intermediate model required to convert the SRL structure between the two annotation schemes used for different languages. Our approach can facilitate the construction of SRL models for resource-poor languages, while preserving the annotation schemes designed for the target language and making use of the limited resources available for it. We evaluate the model on four language pairs, English vs German, Spanish, Czech and Chinese. Consistent improvements are observed over the self-training baseline.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "The success of statistical modeling methods in a variety of natural language processing (NLP) tasks in the last decade depended crucially on the availability of annotated resources for their training. And while sizable resources for most standard tasks are only available for a few languages, the human effort required to achieve reasonable performance on such tasks for other languages may be significantly reduced by leveraging existing resources and the similarities between languages.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "This idea has lead to the development of crosslingual annotation projection approaches, which make use of parallel corpora (Pad\u00f3 and Lapata, 2009) , as well as attempts to adapt models directly to other languages (McDonald et al., 2011) . In this paper we consider correspondences between SRL structures in translated sentences from a different perspective. Most cross-lingual annotation projection approaches transfer the source language annotation scheme to the target language without modification, which makes it hard to combine their output with existing target language resources, as annotation schemes may vary significantly. We instead address the problem of information transfer between two existing annotation schemes (figure 1) for a pair of languages using an intermediate model of role correspondence (RCM). An RCM models the probability of a pair of corresponding arguments being assigned a certain pair of roles. We then use it to guide a pair of monolingual models toward compatible predictions on parallel data in order to extend the coverage and/or accuracy of one or both models.", |
|
"cite_spans": [ |
|
{ |
|
"start": 123, |
|
"end": 146, |
|
"text": "(Pad\u00f3 and Lapata, 2009)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 213, |
|
"end": 236, |
|
"text": "(McDonald et al., 2011)", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Romanian is not taught in their schools .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Ve \u0161kol\u00e1ch se neu\u010d\u00ed rumunsky . The notion of compatibility here is highly nontrivial, even for sentences translated as close to the original as possible. Zhuang and Zong (2010) , for example, observe that in the English-Chinese parallel PropBank (Palmer et al., 2005b) corresponding arguments often bear different labels, even though the same inventory of semantic roles is used for both languages and the annotation guidelines are similar. When different annotation schemes are considered, the problem is further complicated by the difference in the granularity of semantic roles used and varying notions of what is an argument and what is not.", |
|
"cite_spans": [ |
|
{ |
|
"start": 154, |
|
"end": 176, |
|
"text": "Zhuang and Zong (2010)", |
|
"ref_id": "BIBREF42" |
|
}, |
|
{ |
|
"start": 246, |
|
"end": 268, |
|
"text": "(Palmer et al., 2005b)", |
|
"ref_id": "BIBREF34" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Manually annotated training data for such a model is hard to come by. Instead, we propose an iterative procedure similar to bootstrapping, where the parameters of the RCM are initially estimated from a parallel corpus automatically annotated with semantic roles using the monolingual models independently, and then the RCM is used to refine these annotations via a joint inference procedure, serving to enforce consistency on the predictions of monolingual models on parallel sentences. The obtained annotations on the parallel corpus are expected to be of higher quality than the independent predictions of the models, so they can be used to improve the SRL models' performance and/or coverage. We evaluate this approach by augmenting the original training data with the annotations obtained on parallel data and observing the change in the model's performance. This is especially useful if one of the languages is relatively poor in resources, in which case the proposed procedure will help propagate information from the stronger model to the weaker one. Even if the two models are comparable in their predictive power, we may be able to benefit from the fact that certain semantic roles are realized less ambiguously in one language than in another. We will henceforth refer to these two alternatives as the projection and symmetric setups.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The paper is structured as follows. In the next section we present our approach and discuss the issues of role correspondence modeling, then describe the implementation and datasets used in evaluation in section 3, present the evaluation and results in section 4 and conclude with the discussion of related work in section 5.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We consider bootstrapping a pair of SRL models on a parallel corpus, using the correspondence between their predictions on parallel sentences to guide the learning. The models are forced toward compatible predictions, where the notion of compatibility is defined by a (statistical) role correspondence model. Let us consider a pair of languages, \u03b1 and \u03b2, and their corresponding datasets T 0 \u03b1 and T 0 \u03b2 , annotated with semantic roles (the upper indices here denote the iteration number). We will refer to these as the initial training sets. We also assume that a word-aligned parallel corpus is available for the pair of languages, which we denote P , with the predicates and their respective arguments identified on both sides.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Approach", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The procedure is then as follows: we train monolingual models M 0 \u03b1 and M 0 \u03b2 on T 0 \u03b1 and T 0 \u03b2 , respectively, apply them to the two sides of the parallel corpus, resulting in a labeling P 0 . We collect the semantic role co-occurrence information and train the role correspondence model C 0 on it, then proceed to the joint inference step involving M 0 \u03b1 , M 0 \u03b2 and C 0 , resulting in a refined labeling P 1 of the parallel corpus. The two sides of the P 1 are then used to augment the initial training sets, yielding T 1 \u03b1 and T 1 \u03b2 , and new models M 1 \u03b2 and M 1 \u03b2 are trained on these. The process can then be repeated using M 1 \u03b1 and M 1 \u03b2 instead of the initial models. We report the model's performance on a held-out test set, drawn from the same corpus as the corresponding initial training set.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Approach", |
|
"sec_num": "2" |
|
}, |
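
{

"text": "The P 0 -> C 0 -> P 1 refinement step can be sketched as follows (a minimal illustration under toy assumptions, not the authors' implementation: the RCM is reduced to a role-pair count table, and joint inference to picking the RCM's preferred target role for each predicted source role).

```python
# Toy sketch of the P^0 -> C^0 -> P^1 refinement (illustrative stand-in,
# not the authors' implementation).
from collections import Counter

def train_rcm(labeled_pairs):
    # C^0: count how often source role a co-occurs with target role b
    return Counter(labeled_pairs)

def project(rcm, source_role, fallback):
    # pick the target role the RCM most often pairs with source_role
    cand = {b: n for (a, b), n in rcm.items() if a == source_role}
    return max(cand, key=cand.get) if cand else fallback

# P^0: independent (noisy) role predictions on aligned argument pairs
p0 = [('A0', 'A0'), ('A0', 'A0'), ('A0', 'A2'), ('A1', 'A1')]
rcm = train_rcm(p0)
# P^1: bias the target side toward the RCM's preferred mapping
p1 = [(s, project(rcm, s, t)) for s, t in p0]
```

In the full procedure, P 1 would then augment the training sets before retraining the monolingual models.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Approach",

"sec_num": "2"

},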
|
{ |
|
"text": "The procedure can be seen as a form of cotraining (Blum and Mitchell, 1998) of a pair of monolingual SRL models. In our case, however, the question of the models' agreement is not as trivial as in most applications of co-training, requiring a statistical model of its own (C i ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 50, |
|
"end": 75, |
|
"text": "(Blum and Mitchell, 1998)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Approach", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In the low-resource (projection) setup our approach is also similar to self-training with weak supervision coming from the stronger model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Approach", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Note that although the approach is iterative, we have observed no significant improvements from repeating the procedure, possibly owing to the noise introduced by the errors in preprocessing. In the evaluation we run only one iteration. In the notation introduced above, the self-training baseline model (SELF) is trained on P 0 \u03b2 , the joint model (JOINT)on P 1 \u03b2 and the combined model (COMB) -on T 1 \u03b2 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Approach", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "It is necessary to distinguish between semantic roles and their interpretation in a particular context. The former can be defined in a variety of ways, depending on the formalism used. In case of FrameNet (Baker et al., 1998) , for example, the interpretation of a semantic role (frame element) is explicitly provided for each separate frame, so a frame and a frame element label together describe the semantics of an argument. PropBank (Palmer et al., 2005a ) follows a mixed strategy -the labels for a relatively small set of core roles are numbered and their interpretations are provided separately for each predicate (although those of the first two roles, A0 and A1, consistently denote what is known as Proto-Agent and Proto-Patient), while modifiers (Merlo and Leybold, 2001) bear labels that are interpreted consistently across all predicates. Other resources, such as Prague Dependency Treebank (Haji\u010d et al., 2006) , use a single set of semantic roles (functors), which are interpretable across different predicates. From the standpoint of defining the semantic similarity of parallel sentences, the important implication is that we cannot assume that the corresponding arguments should bear the same label, even if the annotation schemes used are compatible (Zhuang and Zong, 2010) . Nor can we write down a single mapping between the roles that will be valid across different predicates (figure 2), which motivates the need for a statistical model of semantic role correspondence. We assume the existence of a one-to-one map-ping between semantic roles for a given predicate pair. As the mappings are not completely independent -at least some roles have the same interpretation across different predicate pairs, -we choose to build a single model, which relies on features derived from the pair of predicates in question, rather than create a separate model for each predicate pair. 
The model can then make decisions specific to particular predicates or predicate pairs, where sufficient data has been observed or back off to a generic mapping where there is not enough data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 205, |
|
"end": 225, |
|
"text": "(Baker et al., 1998)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 437, |
|
"end": 458, |
|
"text": "(Palmer et al., 2005a", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 757, |
|
"end": 782, |
|
"text": "(Merlo and Leybold, 2001)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 904, |
|
"end": 924, |
|
"text": "(Haji\u010d et al., 2006)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1269, |
|
"end": 1292, |
|
"text": "(Zhuang and Zong, 2010)", |
|
"ref_id": "BIBREF42" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Modeling Role Correspondence", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "For the purpose of this study, we choose to separately model the probability of a target role, given the source one and the necessary contextual information and vice versa. These two components are referred to as projection models and realized as a pair of linear classifiers.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Modeling Role Correspondence", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Training such a model in a conventional fashion would require a rather specific kind of dataset, namely a parallel corpus annotated with semantic roles, and assuming the availability of such data would severely limit the applicability of the approach proposed, as, to our knowledge, it is currently only available for two language pairs, namely English-Chinese (Palmer et al., 2005b) and English-Czech (Haji\u010d et al., 2012) . We instead use the automatically produced annotations on a parallel corpus, effectively enforcing consistency on the role correspondence in the monolingual models' predictions.", |
|
"cite_spans": [ |
|
{ |
|
"start": 361, |
|
"end": 383, |
|
"text": "(Palmer et al., 2005b)", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 402, |
|
"end": 422, |
|
"text": "(Haji\u010d et al., 2012)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Modeling Role Correspondence", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "The joint inference would have been simplest if the arguments were classified independently. This assumption is too restrictive, though, since the interdependencies between the arguments can be used to improve the accuracy of semantic role labeling (Roth and Yih, 2005 ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 249, |
|
"end": 268, |
|
"text": "(Roth and Yih, 2005", |
|
"ref_id": "BIBREF36" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Joint Inference", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "In the projection setup we assume that the model for one of the languages, which we will henceforth refer to as source, is much better informed than the one for the other language, referred to as target, so we only have to propagate the information one way. The scoring functions of these two models will be denoted f s and f t , respectively, and that of the projection model from source to target -f st . Source and target sentences are denoted S s and S t , and aligned predicates in these sentences -p s and p t . The task is then to identify the target language role assignment r t that would maximize the objec-", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Projection Setup", |
|
"sec_num": "2.2.1" |
|
}, |
|
{ |
|
"text": "tive L(r t ) = \u03bb t f t (r t , S t , p t ) + \u03bb st f st (r t , r s , p s , p t ), where r s = argmax r f s (r s , S s , p s )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Projection Setup", |
|
"sec_num": "2.2.1" |
|
}, |
|
{ |
|
"text": "is the role assignment of the source-side arguments as predicted by the monolingual model and \u03bb are the weights associated with the models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Projection Setup", |
|
"sec_num": "2.2.1" |
|
}, |
|
{ |
|
"text": "The exact maximization of this objective is computationally expensive, so we resort to an approximation. We chose to use the dual decomposition method primarily because it fits the structure of the objective particularly well (in that it is a sum of the objectives of two independent models) and since it allows a wide range of monolingual models to be used in this setup. The only requirement here is that the monolingual model must be able to incorporate a bias toward or away from a certain prediction.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Projection Setup", |
|
"sec_num": "2.2.1" |
|
}, |
|
{ |
|
"text": "To apply this approximation, we decouple the r t variables into r t and r st and get", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Projection Setup", |
|
"sec_num": "2.2.1" |
|
}, |
|
{ |
|
"text": "L 1 (r t , r st ) = \u03bb t f t (r t , S t , p t ) + \u03bb st f st (r st , r s , p s , p t )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Projection Setup", |
|
"sec_num": "2.2.1" |
|
}, |
|
{ |
|
"text": "under the condition that r t = r st . Applying the Lagrangian relaxation, we replace the hard equality constraint on r t and r st with a soft one, using slack variables \u03b4, which results in the following objective:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Projection Setup", |
|
"sec_num": "2.2.1" |
|
}, |
|
{ |
|
"text": "min \u03b4 max rt,rst L 1 (r t , r st , \u03b4) = \u03bb t f t (r t , S t , p t ) + \u03bb st f st (r st , r s , p s , p t )+ (1) i r\u2208Rt \u03b4 i,r I(r i t = r) \u2212 I(r i st = r) ,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Projection Setup", |
|
"sec_num": "2.2.1" |
|
}, |
|
{ |
|
"text": "where i indexes aligned argument pairs and I is an indicator function. This is equivalent to", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Projection Setup", |
|
"sec_num": "2.2.1" |
|
}, |
|
{ |
|
"text": "min \u03b4 max rt,rst L 1 (r t , r st , \u03b4) = min \u03b4 max rt g t (r t , S t , p t , \u03b4)+ (2) max rst g st (r st , r s , p s , p t , \u03b4) , where g t (r t , S t , p t , \u03b4) = \u03bb t f t (r t , S t , p t ) + i r\u2208Rt \u03b4 i,r I(r i t = r) g st p(r st , r s , p s , p t , \u03b4) = (3) \u03bb st f st (r st , r s , p s , p t ) \u2212 i r\u2208Rt \u03b4 i,r I(r i st = r)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Projection Setup", |
|
"sec_num": "2.2.1" |
|
}, |
|
{ |
|
"text": "are the augmented objectives of the two component models, incorporating bias factors on various possible predictions. The minimization with respect to \u03b4 is performed using a subgradient descent algorithm following Sontag et al. (2011) . Whenever the method converges, it converges to the global maximum of the sum of the objectives. We found that in our case it reaches a solution within the first 1000 iterations over 99% of the time.", |
|
"cite_spans": [ |
|
{ |
|
"start": 214, |
|
"end": 234, |
|
"text": "Sontag et al. (2011)", |
|
"ref_id": "BIBREF38" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Projection Setup", |
|
"sec_num": "2.2.1" |
|
}, |
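
{

"text": "The subgradient scheme can be illustrated on a single aligned argument (a toy sketch with made-up scores; the f_t and f_st tables are hypothetical stand-ins for the monolingual and projection models, which in reality score full role assignments).

```python
# Toy dual decomposition for one argument whose role the monolingual
# target model (f_t) and the projection model (f_st) must agree on.
# Scores are made up for illustration.
ROLES = ['A0', 'A1', 'A2']
f_t = {'A0': 0.2, 'A1': 0.5, 'A2': 0.1}    # monolingual target scores
f_st = {'A0': 0.7, 'A1': 0.2, 'A2': 0.1}   # projection (RCM) scores

delta = {r: 0.0 for r in ROLES}            # Lagrange multipliers
for n in range(1000):
    # each component maximizes its augmented objective independently
    r_t = max(ROLES, key=lambda r: f_t[r] + delta[r])
    r_st = max(ROLES, key=lambda r: f_st[r] - delta[r])
    if r_t == r_st:                        # agreement = convergence
        break
    step = 1.0 / (n + 1)                   # decaying subgradient step
    delta[r_t] -= step                     # penalize the disagreeing
    delta[r_st] += step                    # predictions
```

Here agreement is reached after a handful of iterations; in the paper's setting the same loop runs over all aligned argument pairs of a sentence pair.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Projection Setup",

"sec_num": "2.2.1"

},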
|
{ |
|
"text": "If the models have comparable accuracy, the above inference procedure can be extended to perform projection both ways. Formulating this as a dual decomposition problem would require using three separate components, two for the monolingual models and one for the RCM, which would have to make its own predictions for the semantic roles on both sides without conditioning on the predictions of the monolingual models. This calls for a different kind of model than the one we use -a model that will rely on a (possibly simplified) feature representation of the source and target arguments to jointly predict their labels. Instead, we perform the projection setup inference procedure in both directions simultaneously, interleaving gradient descent steps and allowing the projection models to access the updated predictions of the monolingual models. This results in a block gradient descent algorithm with the following updates:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Symmetric Setup", |
|
"sec_num": "2.2.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "r n+1 t = argmax rt g t (r t , S t , p t , \u03b4 n t ) r n+1 s = argmax rt g s (r s , S s , p s , \u03b4 n s ) r n+1 st = argmax rst g st (r st , r n s , p s , p t , \u03b4 n t ) r n+1 ts = argmax rts g ts (r ts , r n t , p t , p s , \u03b4 n s )", |
|
"eq_num": "(4)" |
|
} |
|
], |
|
"section": "Symmetric Setup", |
|
"sec_num": "2.2.2" |
|
}, |
|
{ |
|
"text": "\u2200 i \u2200 r\u2208Rs \u03b4 n+1,i,r s = \u03b4 n,i,r s + \u03b3 s (n)(I(r n,i ts = r) \u2212 I(r n,i s = r)) \u2200 i \u2200 r\u2208Rt \u03b4 n+1,i,r t = \u03b4 n,i,r t + \u03b3 t (n)(I(r n,i st = r) \u2212 I(r n,i t = r)), where \u03b3 s (n) = \u03b3 t (n) = \u03b3 0", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Symmetric Setup", |
|
"sec_num": "2.2.2" |
|
}, |
|
{ |
|
"text": "n+1 is the update rate function for step n, and g s and g ts are defined as in (3), but with the direction reversed.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Symmetric Setup", |
|
"sec_num": "2.2.2" |
|
}, |
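
{

"text": "One round of the multiplier updates can be sketched as follows (toy data; update_deltas, the role names and the predictions are illustrative assumptions, not the authors' code).

```python
# One round of the symmetric-setup multiplier updates. delta[i][r] is
# nudged with rate gamma_0 / (n + 1) toward agreement between the
# projection prediction r_proj[i] and the monolingual prediction
# r_mono[i] for each aligned argument i.
def update_deltas(delta, r_proj, r_mono, roles, n, gamma0=1.0):
    rate = gamma0 / (n + 1)
    for i, (rp, rm) in enumerate(zip(r_proj, r_mono)):
        for r in roles:
            # I(r == r_proj) - I(r == r_mono), as in the update rule
            delta[i][r] += rate * ((r == rp) - (r == rm))
    return delta

roles = ['A0', 'A1']
delta = [{r: 0.0 for r in roles} for _ in range(2)]
# the two models disagree on the first argument only
delta = update_deltas(delta, ['A0', 'A1'], ['A1', 'A1'], roles, n=0)
```

The symmetric setup applies this update in both directions, once for the source-side multipliers and once for the target-side ones.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Symmetric Setup",

"sec_num": "2.2.2"

},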
|
{ |
|
"text": "This procedure allows us to use the same RCM implementation as in the projection setup. Moreover, the inference procedure for projection setup is a special case of this one with \u03b3 s (n) set to 0. The algorithm also demonstrates convergence similar to that of the projection version, although it lacks the optimality guarantees.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Symmetric Setup", |
|
"sec_num": "2.2.2" |
|
}, |
|
{ |
|
"text": "We evaluate our approach on four language pairs, namely English vs German, Spanish, Czech and Chinese, which we will denote en-de, en-es, en-cz and en-zh respectively.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The parallel data for the first three language pairs is drawn from Europarl v6 (Koehn, 2005) and from MultiUN (Eisele and Chen, 2010) for English-Chinese. We applied Stanford Tokenizer for English, tokenizer scripts (Koehn, 2005) provided with the Europarl corpus to German, Spanish and Czech, and Stanford Chinese Segmenter to Chinese, then performed POS-tagging, morphology tagging (where applicable) and dependency parsing using MATE-tools (Bohnet, 2010) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 79, |
|
"end": 92, |
|
"text": "(Koehn, 2005)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 110, |
|
"end": 133, |
|
"text": "(Eisele and Chen, 2010)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 216, |
|
"end": 229, |
|
"text": "(Koehn, 2005)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 443, |
|
"end": 457, |
|
"text": "(Bohnet, 2010)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parallel Data", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Word alignments were acquired using GIZA++ (Och and Ney, 2003) with its standard settings. Predicate identification on the parallel data was done using the supervised classifiers of the monolingual SRL systems, except for German, where a simple heuristic had to be used instead, as only some of the predicates are marked in the training data, which makes it hard to train a supervised classifier. Following van der Plas et al. (2011), we then retain only those sentences where all identified predicates were aligned.", |
|
"cite_spans": [ |
|
{ |
|
"start": 43, |
|
"end": 62, |
|
"text": "(Och and Ney, 2003)", |
|
"ref_id": "BIBREF30" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parallel Data", |
|
"sec_num": "3.1" |
|
}, |
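
{

"text": "The sentence filter can be sketched as follows (a minimal illustration with hypothetical data structures: predicates given as token positions and the word alignment as a list of (source, target) index pairs).

```python
# Keep a sentence pair only if every identified predicate on either
# side participates in some word-alignment link (toy data structures).
def keep_sentence(src_preds, tgt_preds, alignment):
    aligned_src = {s for s, t in alignment}
    aligned_tgt = {t for s, t in alignment}
    return all(p in aligned_src for p in src_preds) and \
           all(p in aligned_tgt for p in tgt_preds)

kept = keep_sentence([2], [3], [(2, 3)])        # predicate 2 aligns to 3
dropped = keep_sentence([2, 5], [3], [(2, 3)])  # predicate 5 is unaligned
```",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Parallel Data",

"sec_num": "3.1"

},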
|
{ |
|
"text": "In the experiments we used 50 thousand predicate pairs in each case, as increasing the amount further did not yield noticeable benefits, while increasing the running time.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parallel Data", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The CoNLL'09 (Haji\u010d et al., 2009) datasets were used as a source of annotated data for all languages. Only verbal predicates were considered and predicted syntax was used in evaluation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 13, |
|
"end": 33, |
|
"text": "(Haji\u010d et al., 2009)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotated Data", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "We consider subsets of the training data in order to emulate the scenario with a resource-poor language. Due to the different sources the datasets were derived from, sentences contain different proportions of annotated predicates depending on the language. The German corpus, for example, contains about 6 times fewer argument labels per sentence than the English one. We will therefore indicate the sizes of the datasets used in the number of argument labels they contain, referred to as instances, rather than the number of predicates or sentences. The corpus for English, for example, contains 6.2 such instances per sentence on average.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotated Data", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "We use the 20 thousand instances of the available data as the training corpus for each language and split the rest equally between the development and the test set. The secondary (\"out-of-domain\") test sets are preserved as they are.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotated Data", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "In dependency-based SRL, only heads of syntactic constituents are marked with semantic roles. The heads of corresponding arguments may or may not align, however, even if the arguments are lexically very similar, because their syntactic structure may differ. In general, one would have to identify the whole phrase for each argument and take into account the links between constituents, rather than single words (Pad\u00f3 and Lapata, 2005) . As reconstructing the constituents from the dependency tree is nontrivial (Hwang et al., 2010) , we are using a heuristic to address the most common version of this problem, i.e. a preposition or an auxiliary verb being an argument head. In such a case we also take into account any alignment links involving the head's immediate descendants.", |
|
"cite_spans": [ |
|
{ |
|
"start": 411, |
|
"end": 434, |
|
"text": "(Pad\u00f3 and Lapata, 2005)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 511, |
|
"end": 531, |
|
"text": "(Hwang et al., 2010)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotated Data", |
|
"sec_num": "3.2" |
|
}, |
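
{

"text": "The head-alignment heuristic can be sketched as follows (toy dependency representation; the POS tag set and the helper aligned_targets are illustrative assumptions, not the authors' code).

```python
# If an argument head is a preposition or auxiliary, also consider the
# alignment links of its immediate descendants (toy data structures).
FUNCTION_POS = {'IN', 'TO', 'MD', 'AUX'}   # illustrative POS tag set

def aligned_targets(head, alignments, pos, children):
    nodes = [head]
    if pos[head] in FUNCTION_POS:
        nodes += children.get(head, [])
    return sorted({t for n in nodes for t in alignments.get(n, [])})

# head 3 is a preposition with child 5; only the child aligns (to 7)
pos = {3: 'IN', 5: 'NN'}
children = {3: [5]}
alignments = {5: [7]}
targets = aligned_targets(3, alignments, pos, children)
```",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Annotated Data",

"sec_num": "3.2"

},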
|
{ |
|
"text": "Our system is based on that of Bj\u00f6rkelund et al. (2009) . It is a pipeline system comprised of a set of binary or multiclass linear classifiers. Both here and in the projection model, the classifiers are trained using Liblinear (Fan et al., 2008) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 31, |
|
"end": 55, |
|
"text": "Bj\u00f6rkelund et al. (2009)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 218, |
|
"end": 246, |
|
"text": "Liblinear (Fan et al., 2008)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Implementation", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "We employed a uniqueness constraint on role labels (Chang et al., 2007) , preventing some of them from being assigned to more than one argument in the same predicate, which appears to be more reliable in a low-resource setting we consider than the reranker the original system employed. The constraint is enforced in the monolingual model inference using a beam-search approximation with the beam size of 10. The label uniqueness information was derived from the training sets.", |
|
"cite_spans": [ |
|
{ |
|
"start": 51, |
|
"end": 71, |
|
"text": "(Chang et al., 2007)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Implementation", |
|
"sec_num": "3.3" |
|
}, |
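
{

"text": "The constrained decoding can be sketched as follows (a toy beam-search decoder with made-up scores; beam_decode is an illustrative stand-in for the system's inference, not its actual implementation).

```python
# Beam search over per-argument role scores, forbidding roles in
# unique_roles from labeling more than one argument of a predicate.
import heapq

def beam_decode(scores, unique_roles, beam_size=10):
    # scores: one {role: score} dict per argument, in sentence order
    beam = [(0.0, [])]                      # (total score, partial labels)
    for arg_scores in scores:
        expanded = []
        for total, assign in beam:
            for role, s in arg_scores.items():
                if role in unique_roles and role in assign:
                    continue                # would violate uniqueness
                expanded.append((total + s, assign + [role]))
        beam = heapq.nlargest(beam_size, expanded)
    return max(beam)[1]

scores = [{'A0': 0.9, 'A1': 0.4}, {'A0': 0.8, 'A1': 0.6}]
# unconstrained argmax would label both arguments A0
best = beam_decode(scores, unique_roles={'A0'})
```",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Implementation",

"sec_num": "3.3"

},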
|
{ |
|
"text": "Each projection model is realized by a single linear classifier applied to each argument pair independently. It relies on features derived from the source semantic role and source and target predicates, and predicts the semantic role for the argument in the target sentence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Projection Model", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "The features include the source semantic role and its conjunctions with (lowercased) forms and lemmata of the source and target predicates. For example, assuming the source semantic role is A3 and the source and target predicates are went and ging (past tense of \"gehen\", German), the features would be as shown in figure 3. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Projection Model", |
|
"sec_num": "3.4" |
|
}, |
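
{

"text": "The templates can be sketched as follows (the feature-name strings and the helper projection_features are hypothetical; only the combinations mirror the description above: the source role alone and conjoined with forms and lemmata of the two predicates).

```python
# Illustrative feature templates for the projection model.
def projection_features(src_role, src_form, tgt_form, src_lemma, tgt_lemma):
    role = 'role=' + src_role
    return [
        role,                                       # source role alone
        role + '|sform=' + src_form.lower(),        # + source pred. form
        role + '|tform=' + tgt_form.lower(),        # + target pred. form
        role + '|slemma=' + src_lemma,              # + source pred. lemma
        role + '|tlemma=' + tgt_lemma,              # + target pred. lemma
        role + '|sform=' + src_form.lower() + '|tform=' + tgt_form.lower(),
    ]

# source role A3, predicates went / ging (lemma gehen)
feats = projection_features('A3', 'went', 'ging', 'go', 'gehen')
```",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "The Projection Model",

"sec_num": "3.4"

},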
|
{ |
|
"text": "In case of projection there are two parameters, \u03bb st and \u03bb t , -the weights of the component models in the objective. Only their relative values matter (except in the choice of \u03b3 0 ), so we set \u03bb t to 1 and only tune the weight of the role correspondence model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parameters", |
|
"sec_num": "3.5" |
|
}, |
|
{ |
|
"text": "In the symmetric setup, the objective takes the form", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parameters", |
|
"sec_num": "3.5" |
|
}, |
|
{ |
|
"text": "L(r t , r s ) = \u03bb t f t (r t , S t , p t ) + \u03bb st f st (r t , r s , p s , p t ) + \u03bb s f s (r s , S s , p s ) + \u03bb ts f ts (r s , r t , p t , p s ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parameters", |
|
"sec_num": "3.5" |
|
}, |
|
{ |
|
"text": "Since we assume that the two monolingual models here have comparable performance, we do not tune their relative weights, setting both \u03bb s and \u03bb t to 1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parameters", |
|
"sec_num": "3.5" |
|
}, |
|
{ |
|
"text": "We also use the same weight for both projection models, \u03bb st = \u03bb ts , and this value plays an important role -it basically indicates how strongly we insist on the role correspondence models' correctness. If this weight is set to 0, the RCM will accept the initial predictions the monolingual models make, and if it is set to a sufficiently large value, the predictions of the monolingual models will be biased until they match the mapping suggested by the RCM. The optimal weight will therefore depend on the language pair, the sizes of the initial training sets and the RCM used. We use the value of 0.7 in all projection experiments and 0.5 in the symmetric setup, however, as excessive tuning may be undesirable in the lowresource setting.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parameters", |
|
"sec_num": "3.5" |
|
}, |
|
{ |
|
"text": "One important factor in interpreting the evaluation figures presented is that the sources of annotated and parallel data belong to different domains. The former usually contain some sort of newswire text: the Wall Street Journal in the case of English; Xinhua newswire, Hong Kong news and the Sinorama news magazine for Chinese; etc. Parallel data, on the other hand, comes from the proceedings of the European Parliament and the United Nations, which are quite different. For example, sentences in the latter domain often begin with someone being addressed, either by name or by title, which can hardly be expected to occur as often in a newspaper or a magazine article.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Domains", |
|
"sec_num": "3.6" |
|
}, |
|
{ |
|
"text": "As is well known, the performance of many statistical tools drops significantly outside the domain they were trained on (Pradhan et al., 2008), and the preprocessing and SRL models used here are no exception, which results in relatively low quality of the initial predictions on the parallel text. The low argument identification performance, in particular, is presumably due to inaccurate dependency parses, on which it heavily relies. Several approaches have been proposed to improve the accuracy of dependency parsers and other tools on out-of-domain data, but this is beyond the scope of this paper. In some cases (though seldom), sources of parallel data belonging to the same domain as the annotated training data can be obtained.", |
|
"cite_spans": [ |
|
{ |
|
"start": 120, |
|
"end": 142, |
|
"text": "(Pradhan et al., 2008)", |
|
"ref_id": "BIBREF35" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Domains", |
|
"sec_num": "3.6" |
|
}, |
|
{ |
|
"text": "Another concern is that the performance of a model trained on automatically labeled parallel data, as measured on the test sets we use, may not reflect the quality of these annotations. To assess the resulting model's coverage, it would be interesting to evaluate it on data outside the original domain, so we also consider the out-of-domain (OOD) test sets provided for the CoNLL Shared Task 2009 where available.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Domains", |
|
"sec_num": "3.6" |
|
}, |
|
{ |
|
"text": "Perhaps the most interesting of these is the German OOD test set, which is drawn from Europarl (as is the parallel data we use). It was originally annotated with syntactic dependency trees and semantic structure in the SALSA format (Burchardt et al., 2006) for Pad\u00f3 and Lapata (2005), and then converted into a PropBank-like form for the CoNLL Shared Task 2009 (Haji\u010d et al., 2009). The OOD test set for English is drawn from the Brown corpus (Francis and Kucera, 1967), and the one for Czech from a Czech translation of Wall Street Journal articles (Haji\u010d et al., 2012).", |
|
"cite_spans": [ |
|
{ |
|
"start": 237, |
|
"end": 261, |
|
"text": "(Burchardt et al., 2006)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 266, |
|
"end": 288, |
|
"text": "Pad\u00f3 and Lapata (2005)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 367, |
|
"end": 387, |
|
"text": "(Haji\u010d et al., 2009)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 450, |
|
"end": 476, |
|
"text": "(Francis and Kucera, 1967)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 557, |
|
"end": 577, |
|
"text": "(Haji\u010d et al., 2012)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Domains", |
|
"sec_num": "3.6" |
|
}, |
|
{ |
|
"text": "The first question we are interested in is how the joint inference affects the quality of the automatically obtained annotations on the parallel data. To answer this, we will run the monolingual models independently and jointly, then train models on the output of these two procedures and compare their performance on a test set. Note that we do not add the initial training data at this point, so the initial model scores are provided for reference, rather than as a baseline.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "A small initial training set of 600 instances was used for the target language and the full training set (20000 instances) for the source one. \u03bb_st was set to 0.7 in all experiments in this section. In table 1, we present the accuracy of the model trained on the output of the joint inference (JOINT) against that of the self-training baseline (SELF). The \u2206_SELF column contains the difference between the two. Note that the SELF model is trained on the parallel data automatically annotated using monolingual SRL models (not mixed with the initial training set), since we are interested in the effect of joint inference on the quality of the annotations obtained. Improvements that are positive and statistically significant with p < 0.005 according to the permutation test (Good, 2000) are highlighted in bold.", |
|
"cite_spans": [ |
|
{ |
|
"start": 303, |
|
"end": 310, |
|
"text": "(JOINT)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 787, |
|
"end": 799, |
|
"text": "(Good, 2000)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Projection Setup", |
|
"sec_num": "4.1" |
|
}, |
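The significance test referred to above is a paired permutation test (Good, 2000). A minimal generic sketch, assuming per-instance 0/1 correctness indicators for the two systems being compared, might look as follows; this is an illustrative implementation, not the authors' code.

```python
import random


def paired_permutation_test(scores_a, scores_b, trials=10000, seed=0):
    """Two-sided paired permutation test on per-instance scores
    (e.g. 1/0 correctness indicators for two SRL systems).
    Returns an estimated p-value."""
    rng = random.Random(seed)
    observed = abs(sum(scores_a) - sum(scores_b))
    count = 0
    for _ in range(trials):
        diff = 0.0
        for a, b in zip(scores_a, scores_b):
            # Under the null hypothesis the labels are exchangeable,
            # so randomly swap each paired outcome.
            if rng.random() < 0.5:
                a, b = b, a
            diff += a - b
        if abs(diff) >= observed:
            count += 1
    return count / trials
```

When the two systems agree on every instance the observed difference is 0 and the estimated p-value is 1; a large systematic difference drives it toward 0.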
|
{ |
|
"text": "We can see that the refined model (JOINT) outperforms the self-training baseline in most cases by a moderate but statistically significant margin, indicating that the joint inference does improve the quality of annotations on the parallel corpus.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Projection Setup", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The slightly higher improvement on the German OOD test set supports our hypothesis that the procedure enhances the performance of the model on parallel data, as the data for this test set is also drawn from the Europarl corpus. The improvement over the initial model (\u2206_INIT) in this case is statistically significant with p < 0.05; the higher p-value may be attributed to the smaller size of this test set. Figure 4 shows how the performance of the JOINT model changes with the size of the initial training set. The improvements are smaller for en-cz, en-de and en-zh, but they are also statistically significant for initial training sets of up to 2000 instances. Projection from other languages to English performs worse.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 397, |
|
"end": 405, |
|
"text": "Figure 4", |
|
"ref_id": "FIGREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Projection Setup", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "In practice, automatically obtained annotations are usually combined with the existing labeled data. For this purpose, the initial training set is replicated so that it constitutes 0.3 of the size of the automatically labeled dataset (an empirically chosen value that appears to work well in most experiments). We compare the performance of the model trained on the resulting dataset (COMB) with that of the JOINT model and the initial models. The results are presented in table 2. We omit projection from other languages to English, since the JOINT model there fails to outperform the initial model and we do not expect to benefit from adding the automatically annotated data to the initial training set in this case.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combining", |
|
"sec_num": "4.2" |
|
}, |
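The combination step described above can be sketched as follows. `combine_training_data` is a hypothetical helper, and the rounding behaviour is an assumption, since the text only states the 0.3 replication ratio.

```python
def combine_training_data(initial, auto_labeled, ratio=0.3):
    """Replicate the initial (gold) training set so that its replicated
    size is roughly `ratio` times the size of the automatically labeled
    data, then concatenate the two. Illustrative sketch; the exact
    rounding used by the authors is not specified."""
    if not initial:
        return list(auto_labeled)
    target = int(ratio * len(auto_labeled))
    # Whole-copy replication (an assumption): at least one copy is kept.
    copies = max(1, round(target / len(initial)))
    return list(auto_labeled) + list(initial) * copies
```

For example, with 3 gold instances and 20 automatically labeled ones, the gold set is replicated twice (6 instances, about 0.3 of 20), giving 26 training instances in total.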
|
{ |
|
"text": "In the symmetric setup evaluation, we use a slightly larger initial training set of 1400 instances for both source and target language. The projection model weight is set to 0.5. Table 3 shows the accuracy of the JOINT model and the SELF baseline. Note that here, unlike in section 4.1, the joint inference is run once, and a model is then trained for each language and evaluated on the corresponding test set(s).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 179, |
|
"end": 186, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Symmetric Setup", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "The results support our intuition that joint inference helps improve the quality of the resulting annotations, at least in some cases.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Symmetric Setup", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "It would be useful to know to what extent the performance of the role correspondence model affects the quality of the output (and thus the performance of the resulting model). The RCM we use is rather simplistic, and we believe it can be substantially improved for any given language pair by incorporating prior knowledge and/or using external sources of information. In order to estimate the potential impact of such improvements, we simulate a better-informed projection model, giving it access to the predictions of more accurate monolingual models on the parallel data, namely those trained on the full training set rather than on the initial training set used in this particular experiment. We refer to the resulting RCM as oracle and assess the difference it makes compared to a regular one (table 4).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 789, |
|
"end": 798, |
|
"text": "(table 4)", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Oracle RCM", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "There are a number of approaches to semi-supervised semantic role labeling, and most suggest that some external supervision is required for such approaches to work (He and Gildea, 2006), such as measures of syntactic and semantic similarity (F\u00fcrstenau and Lapata, 2009) or external confidence measures (Goldwasser et al., 2011). The alternative we propose is primarily motivated by the research on annotation projection (Pad\u00f3 and Lapata, 2009; van der Plas et al., 2011; Annesi and Basili, 2010; Naseem et al., 2012) and direct transfer (Durrett et al., 2012; S\u00f8gaard, 2011; Lopez et al., 2008; McDonald et al., 2011). The key difference of the present approach from annotation projection is that we assume the availability of some amount of training data for the target language, possibly using a different inventory of semantic roles. As mentioned previously, from the training point of view this approach can be seen as similar to co-training (Blum and Mitchell, 1998), other applications of which to NLP are too numerous to list here.", |
|
"cite_spans": [ |
|
{ |
|
"start": 163, |
|
"end": 184, |
|
"text": "(He and Gildea, 2006)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 241, |
|
"end": 269, |
|
"text": "(F\u00fcrstenau and Lapata, 2009)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 302, |
|
"end": 327, |
|
"text": "(Goldwasser et al., 2011)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 421, |
|
"end": 444, |
|
"text": "(Pad\u00f3 and Lapata, 2009;", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 445, |
|
"end": 471, |
|
"text": "van der Plas et al., 2011;", |
|
"ref_id": "BIBREF41" |
|
}, |
|
{ |
|
"start": 472, |
|
"end": 496, |
|
"text": "Annesi and Basili, 2010;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 497, |
|
"end": 517, |
|
"text": "Naseem et al., 2012)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 538, |
|
"end": 560, |
|
"text": "(Durrett et al., 2012;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 561, |
|
"end": 575, |
|
"text": "S\u00f8gaard, 2011;", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 576, |
|
"end": 595, |
|
"text": "Lopez et al., 2008;", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 596, |
|
"end": 618, |
|
"text": "McDonald et al., 2011)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 954, |
|
"end": 979, |
|
"text": "(Blum and Mitchell, 1998)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Most closely related is the joint inference in Zhuang and Zong (2010), the main difference being that it relies on a manually annotated parallel corpus, aligned on the argument level, and evaluates only the inference procedure and only on in-domain data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Other related approaches include Kim et al. (2010), where a cross-lingual transfer of relations (which essentially represent parts of the predicate-argument structure considered by SRL methods) is performed, and Frermann and Bond (2012), where semantic structure matching is used to rank HPSG parses for parallel sentences.", |
|
"cite_spans": [ |
|
{ |
|
"start": 33, |
|
"end": 50, |
|
"text": "Kim et al. (2010)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 211, |
|
"end": 235, |
|
"text": "Frermann and Bond (2012)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Unsupervised semantic role labeling methods (Lang and Lapata, 2010; Lang and Lapata, 2011; Titov and Klementiev, 2012a; Lorenzo and Cerisara, 2012) present an alternative to cross-lingual information propagation approaches such as ours, and at least one of the methods in this area also makes use of parallel data (Titov and Klementiev, 2012b).", |
|
"cite_spans": [ |
|
{ |
|
"start": 44, |
|
"end": 67, |
|
"text": "(Lang and Lapata, 2010;", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 68, |
|
"end": 90, |
|
"text": "Lang and Lapata, 2011;", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 91, |
|
"end": 119, |
|
"text": "Titov and Klementiev, 2012a;", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 120, |
|
"end": 147, |
|
"text": "Lorenzo and Cerisara, 2012)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 314, |
|
"end": 343, |
|
"text": "(Titov and Klementiev, 2012b)", |
|
"ref_id": "BIBREF40" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We have presented an approach to transferring information between SRL systems for different languages using parallel data. The task proves challenging due to the non-trivial mapping between the role labels used in different SRL annotation schemes and due to the nature of parallel data: the difference in domains and the limited accuracy of the preprocessing tools. We observe consistent improvements over the self-training baseline from using joint inference, and the experiments suggest that improving the role correspondence model, for example using language-specific prior knowledge or external data sources, may dramatically increase the performance of the resulting system.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "The authors acknowledge the support of the MMCI Cluster of Excellence and thank Alexandre Klementiev and Manfred Pinkal for valuable suggestions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Cross-lingual alignment of framenet annotations through hidden markov models", |
|
"authors": [ |
|
{ |
|
"first": "Paolo", |
|
"middle": [], |
|
"last": "Annesi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roberto", |
|
"middle": [], |
|
"last": "Basili", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 11 th international conference on Computational Linguistics and Intelligent Text Processing, CICLing'10", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "12--25", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Paolo Annesi and Roberto Basili. 2010. Cross-lingual alignment of framenet annotations through hidden markov models. In Proceedings of the 11 th interna- tional conference on Computational Linguistics and Intelligent Text Processing, CICLing'10, pages 12-25, Berlin, Heidelberg. Springer-Verlag.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "The Berkeley FrameNet project", |
|
"authors": [ |
|
{ |
|
"first": "Collin", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Baker", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Charles", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Fillmore", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "Lowe", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Proceedings of the Thirty-Sixth Annual Meeting of the Association for Computational Linguistics and Seventeenth International Conference on Computational Linguistics (ACL-COLING'98)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "86--90", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The Berkeley FrameNet project. In Pro- ceedings of the Thirty-Sixth Annual Meeting of the Association for Computational Linguistics and Sev- enteenth International Conference on Computational Linguistics (ACL-COLING'98), pages 86-90, Mon- treal, Canada.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Multilingual semantic role labeling", |
|
"authors": [ |
|
{ |
|
"first": "Anders", |
|
"middle": [], |
|
"last": "Bj\u00f6rkelund", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Love", |
|
"middle": [], |
|
"last": "Hafdell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pierre", |
|
"middle": [], |
|
"last": "Nugues", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the Thirteenth Conference on Computational Natural Language Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "43--48", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Anders Bj\u00f6rkelund, Love Hafdell, and Pierre Nugues. 2009. Multilingual semantic role labeling. In Pro- ceedings of the Thirteenth Conference on Computa- tional Natural Language Learning (CoNLL 2009): Shared Task, pages 43-48, Boulder, Colorado, June. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Combining labeled and unlabeled data with co-training", |
|
"authors": [ |
|
{ |
|
"first": "Avrim", |
|
"middle": [], |
|
"last": "Blum", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tom", |
|
"middle": [], |
|
"last": "Mitchell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Proceedings of the Workshop on Computational Learning Theory (COLT 98", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Avrim Blum and Tom Mitchell. 1998. Combining la- beled and unlabeled data with co-training. In Proceed- ings of the Workshop on Computational Learning The- ory (COLT 98).", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Top accuracy and fast dependency parsing is not a contradiction", |
|
"authors": [ |
|
{ |
|
"first": "Bernd", |
|
"middle": [], |
|
"last": "Bohnet", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 23 rd International Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "89--97", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bernd Bohnet. 2010. Top accuracy and fast dependency parsing is not a contradiction. In Proceedings of the 23 rd International Conference on Computational Lin- guistics (Coling 2010), pages 89-97, Beijing, China, August. Coling 2010 Organizing Committee.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "The SALSA corpus: a German corpus resource for lexical semantics", |
|
"authors": [ |
|
{ |
|
"first": "Aljoscha", |
|
"middle": [], |
|
"last": "Burchardt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Katrin", |
|
"middle": [], |
|
"last": "Erk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anette", |
|
"middle": [], |
|
"last": "Frank", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrea", |
|
"middle": [], |
|
"last": "Kowalski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Pado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Manfred", |
|
"middle": [], |
|
"last": "Pinkal", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of LREC 2006", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Aljoscha Burchardt, Katrin Erk, Anette Frank, Andrea Kowalski, Sebastian Pado, and Manfred Pinkal. 2006. The SALSA corpus: a German corpus resource for lexical semantics. In Proceedings of LREC 2006, Genoa, Italy.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Guiding semi-supervision with constraint-driven learning", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Ratinov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "", |
|
"volume": "51", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M.W. Chang, L. Ratinov, and D. Roth. 2007. Guiding semi-supervision with constraint-driven learning. Ur- bana, 51:61801.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Optimizing chinese word segmentation for machine translation performance", |
|
"authors": [ |
|
{ |
|
"first": "Pi-Chuan", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michel", |
|
"middle": [], |
|
"last": "Galley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the Third Workshop on Statistical Machine Translation, StatMT '08", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "224--232", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pi-Chuan Chang, Michel Galley, and Christopher D. Manning. 2008. Optimizing chinese word segmen- tation for machine translation performance. In Pro- ceedings of the Third Workshop on Statistical Machine Translation, StatMT '08, pages 224-232, Stroudsburg, PA, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Syntactic transfer using a bilingual lexicon", |
|
"authors": [ |
|
{ |
|
"first": "Greg", |
|
"middle": [], |
|
"last": "Durrett", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Pauls", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--11", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Greg Durrett, Adam Pauls, and Dan Klein. 2012. Syntac- tic transfer using a bilingual lexicon. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1-11, Jeju Island, Korea, July. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "MultiUN: A multilingual corpus from united nation documents", |
|
"authors": [ |
|
{ |
|
"first": "Andreas", |
|
"middle": [], |
|
"last": "Eisele", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yu", |
|
"middle": [ |
|
"Chen" |
|
], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": ".", |
|
"middle": [ |
|
";" |
|
], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Khalid", |
|
"middle": [], |
|
"last": "Choukri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bente", |
|
"middle": [], |
|
"last": "Maegaard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joseph", |
|
"middle": [], |
|
"last": "Mariani", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andreas Eisele and Yu Chen. 2010. MultiUN: A multi- lingual corpus from united nation documents. In Nico- letta Calzolari (Conference Chair), Khalid Choukri, Bente Maegaard, Joseph Mariani, Jan Odijk, Stelios Piperidis, Mike Rosner, and Daniel Tapias, editors, Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10), Valletta, Malta, May. European Language Resources Association (ELRA).", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "LIBLINEAR: A library for large linear classification", |
|
"authors": [ |
|
{ |
|
"first": "Kai-Wei", |
|
"middle": [], |
|
"last": "Rong-En Fan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cho-Jui", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiang-Rui", |
|
"middle": [], |
|
"last": "Hsieh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chih-Jen", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Journal of Machine Learning Research", |
|
"volume": "9", |
|
"issue": "", |
|
"pages": "1871--1874", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang-Rui Wang, and Chih-Jen Lin. 2008. LIBLINEAR: A li- brary for large linear classification. Journal of Ma- chine Learning Research, 9:1871-1874.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Computing Analysis of Present-day American English", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Francis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Kucera", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1967, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S. Francis and H. Kucera. 1967. Computing Analysis of Present-day American English. Brown University Press, Providence, RI.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Cross-lingual parse disambiguation based on semantic correspondence", |
|
"authors": [ |
|
{ |
|
"first": "Lea", |
|
"middle": [], |
|
"last": "Frermann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Francis", |
|
"middle": [], |
|
"last": "Bond", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 50 th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "125--129", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lea Frermann and Francis Bond. 2012. Cross-lingual parse disambiguation based on semantic correspon- dence. In Proceedings of the 50 th Annual Meeting of the Association for Computational Linguistics (Vol- ume 2: Short Papers), pages 125-129, Jeju Island, Ko- rea, July. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Graph alignment for semi-supervised semantic role labeling", |
|
"authors": [ |
|
{ |
|
"first": "Hagen", |
|
"middle": [], |
|
"last": "F\u00fcrstenau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mirella", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "11--20", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hagen F\u00fcrstenau and Mirella Lapata. 2009. Graph alignment for semi-supervised semantic role labeling. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 11- 20, Singapore.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Confidence driven unsupervised semantic parsing", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Goldwasser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Reichart", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Clarke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D. Goldwasser, R. Reichart, J. Clarke, and D. Roth. 2011. Confidence driven unsupervised semantic pars- ing. In ACL.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Permutation Tests: A Practical Guide to Resampling Methods for Testing Hypotheses", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Good", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "P. Good. 2000. Permutation Tests: A Practical Guide to Resampling Methods for Testing Hypotheses. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "The CoNLL-2009 shared task: Syntactic and semantic dependencies in multiple languages", |
|
"authors": [ |
|
{ |
|
"first": "Jan", |
|
"middle": [], |
|
"last": "Haji\u010d", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Massimiliano", |
|
"middle": [], |
|
"last": "Ciaramita", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Johansson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daisuke", |
|
"middle": [], |
|
"last": "Kawahara", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maria", |
|
"middle": [ |
|
"Ant\u00f2nia" |
|
], |
|
"last": "Mart\u00ed", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Llu\u00eds", |
|
"middle": [], |
|
"last": "M\u00e0rquez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Meyers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joakim", |
|
"middle": [], |
|
"last": "Nivre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Pad\u00f3", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pavel", |
|
"middle": [], |
|
"last": "Jan\u0161t\u011bp\u00e1nek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mihai", |
|
"middle": [], |
|
"last": "Stra\u0148\u00e1k", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nianwen", |
|
"middle": [], |
|
"last": "Surdeanu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yi", |
|
"middle": [], |
|
"last": "Xue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the Thirteenth Conference on Computational Natural Language Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--18", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jan Haji\u010d, Massimiliano Ciaramita, Richard Johans- son, Daisuke Kawahara, Maria Ant\u00f2nia Mart\u00ed, Llu\u00eds M\u00e0rquez, Adam Meyers, Joakim Nivre, Sebastian Pad\u00f3, Jan\u0160t\u011bp\u00e1nek, Pavel Stra\u0148\u00e1k, Mihai Surdeanu, Nianwen Xue, and Yi Zhang. 2009. The CoNLL- 2009 shared task: Syntactic and semantic dependen- cies in multiple languages. In Proceedings of the Thir- teenth Conference on Computational Natural Lan- guage Learning (CoNLL 2009): Shared Task, pages 1-18, Boulder, Colorado.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Announcing Prague Czech-English Dependency Treebank 2.0", |
|
"authors": [ |
|
{ |
|
"first": "Jan", |
|
"middle": [], |
|
"last": "Haji\u010d", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eva", |
|
"middle": [], |
|
"last": "Haji\u010dov\u00e1", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jarmila", |
|
"middle": [], |
|
"last": "Panevov\u00e1", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Petr", |
|
"middle": [], |
|
"last": "Sgall", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ond\u0159ej", |
|
"middle": [], |
|
"last": "Bojar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Silvie", |
|
"middle": [], |
|
"last": "Cinkov\u00e1", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eva", |
|
"middle": [], |
|
"last": "Fu\u010d\u00edkov\u00e1", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marie", |
|
"middle": [], |
|
"last": "Mikulov\u00e1", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Petr", |
|
"middle": [], |
|
"last": "Pajas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jan", |
|
"middle": [], |
|
"last": "Popelka", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ji\u0159\u00ed", |
|
"middle": [], |
|
"last": "Semeck\u00fd", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jana", |
|
"middle": [], |
|
"last": "\u0160indlerov\u00e1", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jan", |
|
"middle": [], |
|
"last": "\u0160t\u011bp\u00e1nek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Josef", |
|
"middle": [], |
|
"last": "Toman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zde\u0148ka", |
|
"middle": [], |
|
"last": "Ure\u0161ov\u00e1", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zden\u011bk", |
|
"middle": [], |
|
"last": "\u017dabokrtsk\u00fd", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jan Haji\u010d, Eva Haji\u010dov\u00e1, Jarmila Panevov\u00e1, Petr Sgall, Ond\u0159ej Bojar, Silvie Cinkov\u00e1, Eva Fu\u010d\u00edkov\u00e1, Marie Mikulov\u00e1, Petr Pajas, Jan Popelka, Ji\u0159\u00ed Semeck\u00fd, Jana \u0160indlerov\u00e1, Jan \u0160t\u011bp\u00e1nek, Josef Toman, Zde\u0148ka Ure\u0161ov\u00e1, and Zden\u011bk \u017dabokrtsk\u00fd. 2012. Announcing Prague Czech-English Dependency Treebank 2.0. In Nicoletta Calzolari (Conference Chair), Khalid Choukri, Thierry Declerck, Mehmet U\u011fur Do\u011fan, Bente Maegaard, Joseph Mariani, Jan Odijk, and Stelios Piperidis, editors, Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12), Istanbul, Turkey, May. European Language Resources Association (ELRA).", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Self-training and co-training for semantic role labeling: Primary report", |
|
"authors": [ |
|
{ |
|
"first": "Shan", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Gildea", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shan He and Daniel Gildea. 2006. Self-training and co-training for semantic role labeling: Primary report. Technical report, University of Rochester.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Towards a domain independent semantics: Enhancing semantic representation with construction grammar", |
|
"authors": [ |
|
{ |
|
"first": "Jena", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Hwang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rodney", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Nielsen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martha", |
|
"middle": [], |
|
"last": "Palmer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the NAACL HLT Workshop on Extracting and Using Constructions in Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--8", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jena D. Hwang, Rodney D. Nielsen, and Martha Palmer. 2010. Towards a domain independent semantics: Enhancing semantic representation with construction grammar. In Proceedings of the NAACL HLT Workshop on Extracting and Using Constructions in Computational Linguistics, pages 1-8, Los Angeles, California, June. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "A cross-lingual annotation projection approach for relation detection", |
|
"authors": [ |
|
{ |
|
"first": "Seokhwan", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Minwoo", |
|
"middle": [], |
|
"last": "Jeong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonghoon", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gary Geunbae", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics, COLING '10", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "564--571", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Seokhwan Kim, Minwoo Jeong, Jonghoon Lee, and Gary Geunbae Lee. 2010. A cross-lingual annotation projection approach for relation detection. In Proceedings of the 23rd International Conference on Computational Linguistics, COLING '10, pages 564-571, Stroudsburg, PA, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Europarl: A Parallel Corpus for Statistical Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Conference Proceedings: the tenth Machine Translation Summit", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "79--86", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Philipp Koehn. 2005. Europarl: A Parallel Corpus for Statistical Machine Translation. In Conference Proceedings: the tenth Machine Translation Summit, pages 79-86, Phuket, Thailand. AAMT.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Unsupervised induction of semantic roles", |
|
"authors": [ |
|
{ |
|
"first": "Joel", |
|
"middle": [], |
|
"last": "Lang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mirella", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "939--947", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joel Lang and Mirella Lapata. 2010. Unsupervised induction of semantic roles. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 939-947, Los Angeles, California, June. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Unsupervised semantic role induction via split-merge clustering", |
|
"authors": [ |
|
{ |
|
"first": "Joel", |
|
"middle": [], |
|
"last": "Lang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mirella", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proc. of Annual Meeting of the Association for Computational Linguistics (ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joel Lang and Mirella Lapata. 2011. Unsupervised semantic role induction via split-merge clustering. In Proc. of Annual Meeting of the Association for Computational Linguistics (ACL).", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Cross-Language Parser Adaptation between Related Languages", |
|
"authors": [ |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Lopez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Zeman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Nossal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philip", |
|
"middle": [], |
|
"last": "Resnik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rebecca", |
|
"middle": [], |
|
"last": "Hwa", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "IJCNLP-08 Workshop on NLP for Less Privileged Languages", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "35--42", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Adam Lopez, Daniel Zeman, Michael Nossal, Philip Resnik, and Rebecca Hwa. 2008. Cross-Language Parser Adaptation between Related Languages. In IJCNLP-08 Workshop on NLP for Less Privileged Languages, pages 35-42, Hyderabad, India, January.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Unsupervised frame based semantic role induction: application to French and English", |
|
"authors": [ |
|
{ |
|
"first": "Alejandra", |
|
"middle": [], |
|
"last": "Lorenzo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christophe", |
|
"middle": [], |
|
"last": "Cerisara", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the ACL 2012 Joint Workshop on Statistical Parsing and Semantic Processing of Morphologically Rich Languages", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "30--35", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alejandra Lorenzo and Christophe Cerisara. 2012. Unsupervised frame based semantic role induction: application to French and English. In Proceedings of the ACL 2012 Joint Workshop on Statistical Parsing and Semantic Processing of Morphologically Rich Languages, pages 30-35, Jeju, Republic of Korea, July 12. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Multi-source transfer of delexicalized dependency parsers", |
|
"authors": [ |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Mcdonald", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Slav", |
|
"middle": [], |
|
"last": "Petrov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Keith", |
|
"middle": [], |
|
"last": "Hall", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "62--72", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ryan McDonald, Slav Petrov, and Keith Hall. 2011. Multi-source transfer of delexicalized dependency parsers. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP '11, pages 62-72, Stroudsburg, PA, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Automatic distinction of arguments and modifiers: the case of prepositional phrases", |
|
"authors": [ |
|
{ |
|
"first": "Paola", |
|
"middle": [], |
|
"last": "Merlo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthias", |
|
"middle": [], |
|
"last": "Leybold", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proceedings of the Fifth Computational Natural Language Learning Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "121--128", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Paola Merlo and Matthias Leybold. 2001. Automatic distinction of arguments and modifiers: the case of prepositional phrases. In Proceedings of the Fifth Computational Natural Language Learning Workshop (CoNLL-2001), pages 121-128, Toulouse, France.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Selective sharing for multilingual dependency parsing", |
|
"authors": [ |
|
{ |
|
"first": "Tahira", |
|
"middle": [], |
|
"last": "Naseem", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Regina", |
|
"middle": [], |
|
"last": "Barzilay", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amir", |
|
"middle": [], |
|
"last": "Globerson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "629--637", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tahira Naseem, Regina Barzilay, and Amir Globerson. 2012. Selective sharing for multilingual dependency parsing. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 629-637, Jeju Island, Korea, July. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "A systematic comparison of various statistical alignment models", |
|
"authors": [ |
|
{ |
|
"first": "Franz", |
|
"middle": [ |
|
"Josef" |
|
], |
|
"last": "Och", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hermann", |
|
"middle": [], |
|
"last": "Ney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Computational Linguistics", |
|
"volume": "", |
|
"issue": "1", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1).", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Crosslinguistic projection of role-semantic information", |
|
"authors": [ |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Pad\u00f3", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mirella", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "859--866", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sebastian Pad\u00f3 and Mirella Lapata. 2005. Cross-linguistic projection of role-semantic information. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 859-866, Vancouver, British Columbia, Canada.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Cross-lingual annotation projection for semantic roles", |
|
"authors": [ |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Pad\u00f3", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mirella", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Journal of Artificial Intelligence Research", |
|
"volume": "36", |
|
"issue": "", |
|
"pages": "307--340", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sebastian Pad\u00f3 and Mirella Lapata. 2009. Cross-lingual annotation projection for semantic roles. Journal of Artificial Intelligence Research, 36:307-340.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "The Proposition Bank: An annotated corpus of semantic roles", |
|
"authors": [ |
|
{ |
|
"first": "Martha", |
|
"middle": [], |
|
"last": "Palmer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Gildea", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Kingsbury", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Computational Linguistics", |
|
"volume": "31", |
|
"issue": "", |
|
"pages": "71--105", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005a. The Proposition Bank: An annotated corpus of semantic roles. Computational Linguistics, 31:71-105.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "A parallel Proposition Bank II for Chinese and English", |
|
"authors": [ |
|
{ |
|
"first": "Martha", |
|
"middle": [], |
|
"last": "Palmer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nianwen", |
|
"middle": [], |
|
"last": "Xue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Olga", |
|
"middle": [], |
|
"last": "Babko-Malaya", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jinying", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Snyder", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the Workshop on Frontiers in Corpus Annotations II: Pie in the Sky, CorpusAnno '05", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "61--67", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Martha Palmer, Nianwen Xue, Olga Babko-Malaya, Jinying Chen, and Benjamin Snyder. 2005b. A parallel Proposition Bank II for Chinese and English. In Proceedings of the Workshop on Frontiers in Corpus Annotations II: Pie in the Sky, CorpusAnno '05, pages 61-67, Stroudsburg, PA, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Towards robust semantic role labeling", |
|
"authors": [ |
|
{ |
|
"first": "Sameer", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Pradhan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wayne", |
|
"middle": [], |
|
"last": "Ward", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Martin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Computational Linguistics", |
|
"volume": "34", |
|
"issue": "2", |
|
"pages": "289--310", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sameer S. Pradhan, Wayne Ward, and James H. Martin. 2008. Towards robust semantic role labeling. Computational Linguistics, 34(2):289-310.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Integer linear programming inference for conditional random fields", |
|
"authors": [ |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wen-Tau", |
|
"middle": [], |
|
"last": "Yih", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "ICML", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "736--743", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dan Roth and Wen-tau Yih. 2005. Integer linear programming inference for conditional random fields. In ICML, pages 736-743.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Data point selection for crosslanguage adaptation of dependency parsers", |
|
"authors": [ |
|
{ |
|
"first": "Anders", |
|
"middle": [], |
|
"last": "S\u00f8gaard", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "682--686", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Anders S\u00f8gaard. 2011. Data point selection for cross-language adaptation of dependency parsers. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers - Volume 2, HLT '11, pages 682-686, Stroudsburg, PA, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "Introduction to dual decomposition for inference", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Sontag", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amir", |
|
"middle": [], |
|
"last": "Globerson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tommi", |
|
"middle": [], |
|
"last": "Jaakkola", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Optimization for Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Sontag, Amir Globerson, and Tommi Jaakkola. 2011. Introduction to dual decomposition for inference. In Suvrit Sra, Sebastian Nowozin, and Stephen J. Wright, editors, Optimization for Machine Learning. MIT Press.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "A Bayesian approach to unsupervised semantic role induction", |
|
"authors": [ |
|
{ |
|
"first": "Ivan", |
|
"middle": [], |
|
"last": "Titov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandre", |
|
"middle": [], |
|
"last": "Klementiev", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proc. of European Chapter of the Association for Computational Linguistics (EACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ivan Titov and Alexandre Klementiev. 2012a. A Bayesian approach to unsupervised semantic role induction. In Proc. of European Chapter of the Association for Computational Linguistics (EACL).", |
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "Crosslingual induction of semantic roles", |
|
"authors": [ |
|
{ |
|
"first": "Ivan", |
|
"middle": [], |
|
"last": "Titov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandre", |
|
"middle": [], |
|
"last": "Klementiev", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ivan Titov and Alexandre Klementiev. 2012b. Crosslingual induction of semantic roles. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, Jeju Island, South Korea, July. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF41": { |
|
"ref_id": "b41", |
|
"title": "Scaling up automatic cross-lingual semantic role annotation", |
|
"authors": [ |
|
{ |
|
"first": "Lonneke", |
|
"middle": [], |
|
"last": "van der Plas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paola", |
|
"middle": [], |
|
"last": "Merlo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Henderson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "299--304", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lonneke van der Plas, Paola Merlo, and James Henderson. 2011. Scaling up automatic cross-lingual semantic role annotation. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers - Volume 2, HLT '11, pages 299-304, Stroudsburg, PA, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF42": { |
|
"ref_id": "b42", |
|
"title": "Joint inference for bilingual semantic role labeling", |
|
"authors": [ |
|
{ |
|
"first": "Tao", |
|
"middle": [], |
|
"last": "Zhuang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chengqing", |
|
"middle": [], |
|
"last": "Zong", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, EMNLP '10", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "304--314", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tao Zhuang and Chengqing Zong. 2010. Joint inference for bilingual semantic role labeling. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, EMNLP '10, pages 304-314, Stroudsburg, PA, USA. Association for Computational Linguistics.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "Role correspondence in parallel sentences, an example.", |
|
"uris": null |
|
}, |
|
"FIGREF2": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "Predicate-specific role mapping. Note that A0 corresponds to art0-agt, art1-tem or art2-ben, depending on the predicate.", |
|
"uris": null |
|
}, |
|
"FIGREF3": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "", |
|
"uris": null |
|
}, |
|
"FIGREF4": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "Projection model features example.", |
|
"uris": null |
|
}, |
|
"FIGREF5": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "Projection setup, English-Spanish, model performance as a function of the size of the initial training set.", |
|
"uris": null |
|
}, |
|
"TABREF2": { |
|
"num": null, |
|
"html": null, |
|
"content": "<table><tr><td>: The effect of adding automatically obtained an-</td></tr><tr><td>notation to the initial training set. Asterisk indicates out-</td></tr><tr><td>of-domain test set, statistically significant improvements</td></tr><tr><td>are highlighted in bold.</td></tr></table>", |
|
"type_str": "table", |
|
"text": "" |
|
}, |
|
"TABREF3": { |
|
"num": null, |
|
"html": null, |
|
"content": "<table><tr><td/><td>INIT SELF JOINT</td><td>\u2206 SELF</td></tr><tr><td>en-cz*</td><td>67.07 66.15 68.18</td><td>2.02</td></tr><tr><td>en-cz</td><td>67.56 66.42 66.72</td><td>0.30</td></tr><tr><td>en-de*</td><td>67.64 66.72 68.57</td><td>1.84</td></tr><tr><td>en-de</td><td>75.13 71.97 73.57</td><td>1.60</td></tr><tr><td>en-es</td><td>68.14 67.80 69.04</td><td>1.24</td></tr><tr><td>en-zh</td><td>76.28 72.96 75.22</td><td>2.26</td></tr><tr><td>cz-en*</td><td>69.37 66.45 66.22</td><td>-0.23</td></tr><tr><td>cz-en</td><td>77.32 74.72 75.02</td><td>0.31</td></tr><tr><td>de-en*</td><td>69.37 66.45 66.68</td><td>0.23</td></tr><tr><td>de-en</td><td>77.32 73.56 73.72</td><td>0.17</td></tr><tr><td>es-en*</td><td>69.37 66.64 66.40</td><td>-0.23</td></tr><tr><td>es-en</td><td>77.32 74.05 74.89</td><td>0.84</td></tr><tr><td>zh-en*</td><td>69.37 66.08 65.53</td><td>-0.56</td></tr><tr><td>zh-en</td><td>77.32 74.48 74.25</td><td>-0.24</td></tr></table>", |
|
"type_str": "table", |
|
"text": "" |
|
}, |
|
"TABREF4": { |
|
"num": null, |
|
"html": null, |
|
"content": "<table><tr><td>: Comparing JOINT model against the self-</td></tr><tr><td>training baseline in symmetric setup. Asterisk indicates</td></tr><tr><td>out-of-domain test set, statistically significant improve-</td></tr><tr><td>ments are highlighted in bold.</td></tr></table>", |
|
"type_str": "table", |
|
"text": "" |
|
}, |
|
"TABREF5": { |
|
"num": null, |
|
"html": null, |
|
"content": "<table><tr><td/><td>INIT SELF JOINT</td><td>\u2206 SELF</td><td>\u2206 INIT</td></tr><tr><td>en-cz*</td><td>61.11 60.68 72.49</td><td>11.81</td><td>11.38</td></tr><tr><td>en-cz</td><td>62.45 62.15 70.19</td><td>8.04</td><td>7.74</td></tr><tr><td>en-de*</td><td>66.81 63.96 76.78</td><td>12.82</td><td>9.97</td></tr><tr><td>en-de</td><td>70.39 68.34 79.22</td><td>10.88</td><td>8.84</td></tr><tr><td>en-es</td><td>64.20 64.51 75.43</td><td>10.92</td><td>11.23</td></tr><tr><td>en-zh</td><td>75.80 73.52 76.75</td><td>3.22</td><td>0.94</td></tr><tr><td>cz-en*</td><td>66.82 63.95 70.75</td><td>6.80</td><td>3.93</td></tr><tr><td>cz-en</td><td>74.93 71.60 79.70</td><td>8.10</td><td>4.76</td></tr><tr><td>de-en*</td><td>66.82 63.58 69.46</td><td>5.88</td><td>2.64</td></tr><tr><td>de-en</td><td>74.93 71.31 77.34</td><td>6.03</td><td>2.41</td></tr><tr><td>es-en*</td><td>66.82 63.95 69.92</td><td>5.97</td><td>3.10</td></tr><tr><td>es-en</td><td>74.93 71.47 79.55</td><td>8.08</td><td>4.62</td></tr><tr><td>zh-en*</td><td>66.82 64.51 67.19</td><td>2.68</td><td>0.37</td></tr><tr><td>zh-en</td><td>74.93 72.26 76.51</td><td>4.26</td><td>1.58</td></tr></table>", |
|
"type_str": "table", |
|
"text": "" |
|
}, |
|
"TABREF6": { |
|
"num": null, |
|
"html": null, |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"text": "Oracle RCM performance, projection setup: initial model, self-training baseline, refined model and its improvement over the other two. Asterisk indicates outof-domain test set, statistically significant improvements are highlighted in bold." |
|
} |
|
} |
|
} |
|
} |