{ "paper_id": "2022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:34:52.264640Z" }, "title": "Multilingual Syntax-aware Language Modeling through Dependency Tree Conversion", "authors": [ { "first": "Shunsuke", "middle": [], "last": "Kando", "suffix": "", "affiliation": { "laboratory": "", "institution": "The University of Tokyo", "location": {} }, "email": "kando-shunsuke@alumni.u-tokyo.ac.jp" }, { "first": "Hiroshi", "middle": [], "last": "Noji", "suffix": "", "affiliation": { "laboratory": "", "institution": "AIST", "location": {} }, "email": "noji@leapmind.io" }, { "first": "Yusuke", "middle": [], "last": "Miyao", "suffix": "", "affiliation": { "laboratory": "", "institution": "The University of Tokyo", "location": {} }, "email": "yusuke@is.s.u-tokyo.ac.jp" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Incorporating stronger syntactic biases into neural language models (LMs) is a long-standing goal, but research in this area often focuses on modeling English text, where constituent treebanks are readily available. Extending constituent tree-based LMs to the multilingual setting, where dependency treebanks are more common, is possible via dependency-toconstituency conversion methods. However, this raises the question of which tree formats are best for learning the model, and for which languages. We investigate this question by training recurrent neural network grammars (RNNGs) using various conversion methods, and evaluating them empirically in a multilingual setting. We examine the effect on LM performance across nine conversion methods and five languages through seven types of syntactic tests. On average, the performance of our best model represents a 19 % increase in accuracy over the worst choice across all languages. Our best model shows the advantage over sequential/overparameterized LMs, suggesting the positive effect of syntax injection in a multilingual setting. 
Our experiments highlight the importance of choosing the right tree formalism, and provide insights into making an informed decision.", "pdf_parse": { "paper_id": "2022", "_pdf_hash": "", "abstract": [ { "text": "Incorporating stronger syntactic biases into neural language models (LMs) is a long-standing goal, but research in this area often focuses on modeling English text, where constituent treebanks are readily available. Extending constituent tree-based LMs to the multilingual setting, where dependency treebanks are more common, is possible via dependency-toconstituency conversion methods. However, this raises the question of which tree formats are best for learning the model, and for which languages. We investigate this question by training recurrent neural network grammars (RNNGs) using various conversion methods, and evaluating them empirically in a multilingual setting. We examine the effect on LM performance across nine conversion methods and five languages through seven types of syntactic tests. On average, the performance of our best model represents a 19 % increase in accuracy over the worst choice across all languages. Our best model shows the advantage over sequential/overparameterized LMs, suggesting the positive effect of syntax injection in a multilingual setting. Our experiments highlight the importance of choosing the right tree formalism, and provide insights into making an informed decision.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The importance of language modeling in recent years has grown considerably, as methods based on large pre-trained neural language models (LMs) have become the state-of-the-art for many problems (Devlin et al., 2019; Radford et al., 2019) . 
However, these neural LMs are based on general architectures and therefore do not explicitly model linguistic constraints, and have been shown to capture only a subset of the syntactic representations typically found in constituency treebanks (Warstadt et al., 2020) . An alternative line of LM research aims to explicitly model the parse tree in order to make the LM syntax-aware. A representative example of this paradigm, the recurrent neural network grammar (RNNG, Dyer et al., 2016) , is reported to perform better than sequential LMs on tasks that require complex syntactic analysis (Kuncoro et al., 2019; Hu et al., 2020; Noji and Oseki, 2021) .", "cite_spans": [ { "start": 194, "end": 215, "text": "(Devlin et al., 2019;", "ref_id": "BIBREF3" }, { "start": 216, "end": 237, "text": "Radford et al., 2019)", "ref_id": "BIBREF20" }, { "start": 483, "end": 506, "text": "(Warstadt et al., 2020)", "ref_id": "BIBREF25" }, { "start": 698, "end": 723, "text": "(RNNG, Dyer et al., 2016)", "ref_id": null }, { "start": 825, "end": 847, "text": "(Kuncoro et al., 2019;", "ref_id": "BIBREF10" }, { "start": 848, "end": 864, "text": "Hu et al., 2020;", "ref_id": "BIBREF7" }, { "start": 865, "end": 886, "text": "Noji and Oseki, 2021)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The aim of this paper is to extend LMs that inject syntax to the multilingual setting. This attempt is important in two main ways. Firstly, English has been dominant in research on syntax-aware LMs. While multilingual LMs have received increasing attention in recent years, most approaches do not explicitly model syntax, such as multilingual BERT (mBERT, Devlin et al., 2019) or XLM-R (Conneau et al., 2020) . Although these models have shown high performance on some cross-lingual tasks (Conneau et al., 2018) , they perform poorly on syntactic tasks (Mueller et al., 2020) . 
Secondly, syntax-aware LMs have interesting features beyond their high syntactic ability. One example is the validity of RNNG as a cognitive model in an English-based setting, as demonstrated in Hale et al. (2018) . Since human cognitive functions are universal while natural languages are diverse, it would be ideal to conduct this experiment on multiple languages.", "cite_spans": [ { "start": 368, "end": 388, "text": "Devlin et al., 2019)", "ref_id": "BIBREF3" }, { "start": 398, "end": 420, "text": "(Conneau et al., 2020)", "ref_id": "BIBREF1" }, { "start": 501, "end": 523, "text": "(Conneau et al., 2018)", "ref_id": "BIBREF2" }, { "start": 566, "end": 588, "text": "(Mueller et al., 2020)", "ref_id": "BIBREF14" }, { "start": 793, "end": 811, "text": "Hale et al. (2018)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The main obstacle for multilingual syntax-aware modeling is that it is unclear how to inject syntactic information during training. A straightforward approach is to make use of a multilingual treebank, such as Universal Dependencies (UD, Nivre et al., 2016; Nivre et al., 2020) , where trees are represented in a dependency tree (DTree) formalism. Matthews et al. (2019) evaluated parsing and language modeling performance on three typologically different languages, using a generative dependency model. Unfortunately, they revealed that dependency-based models are less suited to language modeling than comparable constituency-based models, highlighting the apparent difficulty of extending syntax-aware LMs to other languages using existing resources. This paper revisits the difficulty of constructing multilingual syntax-aware LMs by exploring the performance of multilingual language modeling using constituency-based models. 
Since our domain is a multilingual setting, our focus turns to how dependency-to-constituency conversion techniques result in different trees, and how these trees affect the model's performance. We obtain constituency treebanks from UD-formatted dependency treebanks of five languages using nine tree conversion methods. These treebanks are in turn used to train an RNNG, which we evaluate on perplexity and CLAMS (Mueller et al., 2020) .", "cite_spans": [ { "start": 232, "end": 256, "text": "(UD, Nivre et al., 2016;", "ref_id": null }, { "start": 257, "end": 276, "text": "Nivre et al., 2020)", "ref_id": "BIBREF16" }, { "start": 347, "end": 369, "text": "Matthews et al. (2019)", "ref_id": "BIBREF13" }, { "start": 1357, "end": 1379, "text": "(Mueller et al., 2020)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our contributions are: (1) We propose a methodology for training multilingual syntax-aware LMs through dependency tree conversion. (2) We found an optimal structure that brings out the potential of RNNG across five languages. (3) We demonstrated the advantage of our multilingual RNNG over sequential/overparameterized LMs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "RNNGs are generative models that estimate the joint probability of a sentence x and a constituency tree (CTree) y. The probability p(x, y) is estimated with top-down constituency parsing actions a = (a_1, a_2, \u22ef, a_n) that produce y: Kuncoro et al. (2017) proposed a stack-only RNNG that computes the next action probability based on the current partial tree. Figure 1 illustrates its behavior. The model represents the current partial tree with a stack-LSTM, which consists of three types of embeddings: nonterminal, word, and closed-nonterminal. The next action is estimated with the last hidden state of the stack-LSTM. 
There are three types of actions as follows:", "cite_spans": [ { "start": 233, "end": 254, "text": "Kuncoro et al. (2017)", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 359, "end": 367, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Recurrent Neural Network Grammars", "sec_num": "2.1" }, { "text": "p(x, y) = \u220f_{t=1}^{n} p(a_t | a_1, \u22ef, a_{t\u22121})", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recurrent Neural Network Grammars", "sec_num": "2.1" }, { "text": "\u2022 NT(X): Push the nonterminal embedding of X (e_X) onto the stack.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recurrent Neural Network Grammars", "sec_num": "2.1" }, { "text": "\u2022 GEN(w): Push the word embedding of w (e_w) onto the stack.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recurrent Neural Network Grammars", "sec_num": "2.1" }, { "text": "\u2022 REDUCE: Pop elements from the stack until a nonterminal embedding appears. Using all the popped embeddings, compute the closed-nonterminal embedding e_{X\u2032} with the composition function COMP:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recurrent Neural Network Grammars", "sec_num": "2.1" }, { "text": "e_{X\u2032} = COMP(e_X, e_{w_1}, \u22ef, e_{w_m})", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recurrent Neural Network Grammars", "sec_num": "2.1" }, { "text": "RNNG can be regarded as a language model that injects syntactic knowledge explicitly, and various appealing features have been reported (Kuncoro et al., 2017; Kuncoro et al., 2017; Hale et al., 2018) . 
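To make the action semantics above concrete, here is a minimal sketch (our own illustration, not the authors' implementation): embedding composition is replaced by tuple construction so that the effect of each action is inspectable.

```python
# Sketch of the RNNG transition system. NT pushes an open nonterminal,
# GEN pushes a word, and REDUCE pops back to the nearest open nonterminal
# and "composes" the popped elements (here: into a labeled tuple).

def run_actions(actions):
    stack = []
    for act in actions:
        if act[0] == "NT":
            stack.append(("OPEN", act[1]))          # open nonterminal
        elif act[0] == "GEN":
            stack.append(act[1])                    # word
        elif act[0] == "REDUCE":
            children = []
            while not (isinstance(stack[-1], tuple) and stack[-1][0] == "OPEN"):
                children.append(stack.pop())
            label = stack.pop()[1]
            # closed nonterminal: stands in for COMP(e_X, e_w1, ..., e_wm)
            stack.append((label, tuple(reversed(children))))
    return stack

# "(S the pilot laughs)" as a top-down action sequence
tree = run_actions([("NT", "S"), ("GEN", "the"), ("GEN", "pilot"),
                    ("GEN", "laughs"), ("REDUCE",)])
```

After the final REDUCE, the stack holds a single closed nonterminal covering the whole sentence, mirroring how a complete parse is built.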
We focus on its high performance on syntactic evaluation, which is described below.", "cite_spans": [ { "start": 136, "end": 158, "text": "(Kuncoro et al., 2017;", "ref_id": "BIBREF9" }, { "start": 159, "end": 180, "text": "Kuncoro et al., 2017;", "ref_id": "BIBREF9" }, { "start": 181, "end": 199, "text": "Hale et al., 2018)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Recurrent Neural Network Grammars", "sec_num": "2.1" }, { "text": "Difficulty in extending to other languages In principle, an RNNG can be learned from any corpus as long as it contains CTree annotation. However, it is not evident which tree formats are best in a multilingual setting. Using the same technique as for English can be inappropriate because each language has its own characteristics, which can differ from those of English. This question is the fundamental motivation of this research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recurrent Neural Network Grammars", "sec_num": "2.1" }, { "text": "To investigate the capability of LMs to capture syntax, previous work has attempted to create evaluation sets that require analysis of sentence structure (Linzen et al., 2016) . One typical example is subject-verb agreement, the rule that the form of a verb is determined by the grammatical category of the subject, such as person or number:", "cite_spans": [ { "start": 161, "end": 182, "text": "(Linzen et al., 2016)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Cross-linguistic Syntactic Evaluation", "sec_num": "2.2" }, { "text": "The pilot that the guards love laughs/*laugh. (1) In (1), the form of laugh is determined by the subject pilot, not guards. This judgment requires Algorithm 1: lf is short for left-first conversion. 
We omit right-first conversion because it can be defined simply by swapping code blocks 6-9 and 10-13 of the left-first conversion.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-linguistic Syntactic Evaluation", "sec_num": "2.2" }, { "text": "syntactic analysis; guards is not a subject of the target verb laugh because it is in the relative clause of the real subject pilot.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-linguistic Syntactic Evaluation", "sec_num": "2.2" }, { "text": "Marvin and Linzen (2018) designed the English evaluation set using a grammatical framework. Mueller et al. (2020) extended this framework to other languages (French, German, Hebrew, and Russian) and created an evaluation set named CLAMS (Cross-Linguistic Assessment of Models on Syntax). CLAMS covers seven categories of agreement tasks, including local agreement (e.g. The author laughs/*laugh) and non-local agreement that contains an intervening phrase between subject and verb as in (1). They evaluated LMs on CLAMS and demonstrated that sequential LMs often fail to assign a higher probability to the grammatical sentence in cases that involve a non-local dependency.", "cite_spans": [ { "start": 92, "end": 113, "text": "Mueller et al. (2020)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Cross-linguistic Syntactic Evaluation", "sec_num": "2.2" }, { "text": "Previous work has attempted to explore the syntactic capabilities of LMs with these evaluation sets. Kuncoro et al. (2019) compared the performance of an LSTM LM and an RNNG using the evaluation set proposed in Marvin and Linzen (2018) , demonstrating the superiority of RNNG in predicting agreement. 
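The CLAMS scoring protocol described here can be sketched as follows; `toy_score` is a hypothetical stand-in for a trained LM's sentence log-probability, used only to exercise the function.

```python
# Minimal-pair accuracy: the model is credited when it assigns higher
# probability to the grammatical member of each (grammatical, ungrammatical) pair.

def clams_accuracy(pairs, score):
    correct = sum(1 for good, bad in pairs if score(good) > score(bad))
    return correct / len(pairs)

def toy_score(sentence):
    # hypothetical scorer: prefers the agreeing verb form "laughs"
    return 0.0 if sentence.endswith("laughs.") else -1.0

pairs = [("The pilot that the guards love laughs.",
          "The pilot that the guards love laugh.")]
```

Any sentence-probability model can be plugged in as `score`; chance accuracy under this protocol is 0.5.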
Noji and Takamura (2020) suggested that LSTM LMs potentially have a limitation in handling object relative clauses. Since these analyses were performed on English text, it is unclear whether they hold in a multilingual setting. In this paper, we attempt to investigate this point by learning RNNGs in other languages and evaluating them on CLAMS.", "cite_spans": [ { "start": 101, "end": 122, "text": "Kuncoro et al. (2019)", "ref_id": "BIBREF10" }, { "start": 205, "end": 229, "text": "Marvin and Linzen (2018)", "ref_id": "BIBREF12" }, { "start": 299, "end": 323, "text": "Noji and Takamura (2020)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Cross-linguistic Syntactic Evaluation", "sec_num": "2.2" }, { "text": "As a source of multilingual syntactic information, we use Universal Dependencies (UD), a collection of cross-linguistic dependency treebanks with a consistent annotation scheme. Since RNNG requires a CTree-formatted dataset for training, we perform DTree-to-CTree conversions, which are fully algorithmic so that they work regardless of language. Our method consists of two procedures: structural conversion and nonterminal labeling; we obtain a CTree skeleton with unlabeled nonterminal nodes, then assign labels by leveraging syntactic information contained in the dependency annotations. While our structural conversion is identical to the baseline approach of Collins et al. (1999) , we include a novel labeling method that relies on dependency relations, not POS tags.", "cite_spans": [ { "start": 670, "end": 691, "text": "Collins et al. (1999)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Method: Dependency Tree Conversion", "sec_num": "3" }, { "text": "We performed three types of structural conversion: flat, left-first, and right-first. Algorithm 1 shows the pseudocode and Figure 2 illustrates the actual conversions. 
These approaches construct the CTree in a top-down manner following this procedure: 1) Introduce the root nonterminal of the head of the sentence (NT_give). 2) For each NT_w, introduce new nonterminals according to the dependent(s) of w. Repeat this procedure recursively until w has no dependents.", "cite_spans": [], "ref_spans": [ { "start": 123, "end": 129, "text": "Figure", "ref_id": null } ], "eq_spans": [], "section": "Structural conversion", "sec_num": null }, { "text": "The difference between the three approaches is the order in which nonterminals are introduced. We describe their behaviors based on the example in Figure 2. (a) The flat approach makes w and its dependents children in the CTree simultaneously. For example, NT_give has four children, NT_man, NT_give, NT_him, and NT_box, because they are dependents of the head word give. As the name suggests, this approach tends to produce a flat-structured CTree because each nonterminal can have multiple children. (b) The left-first approach introduces nonterminals starting from the left-most dependent. If there is no left dependent, the right-most dependent is introduced. In the example of Figure 2 , the root NT_give has a left child NT_man because man is the left-most dependent of the head give. (c) The right-first approach is the inverse of left-first, handling the right-most dependent first. For methods (b) and (c), the resulting CTree is always a binary tree.", "cite_spans": [], "ref_spans": [ { "start": 141, "end": 147, "text": "Figure", "ref_id": null }, { "start": 661, "end": 669, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Structural conversion", "sec_num": null }, { "text": "Nonterminal labeling We define three types of labeling methods for each NT_w: 1) X-label: Assign \"X\" to all the nonterminals. 2) POS-label: Assign the POS tag of w. 3) DEP-label: Assign the dependency relation between w and its head. (In Figure 2, NT_w denotes a temporary nonterminal label that is replaced in the nonterminal labeling phase.) Table 1 shows the actual labels that are assigned to the CTrees in Figure 2 . Each method has its own intent. X-label drops the syntactic category of each phrase, which minimizes the structural information of the sentence. POS-label would produce the most common CTree structure because traditionally nonterminals are labeled based on the POS tag of the head word. DEP-label is a more fine-grained method than POS-label because words in a sentence can have the same POS tag but different dependency relations, as in man and box in Figure 2 .", "cite_spans": [], "ref_spans": [ { "start": 365, "end": 372, "text": "Table 1", "ref_id": "TABREF3" }, { "start": 428, "end": 436, "text": "Figure 2", "ref_id": null }, { "start": 886, "end": 894, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Structural conversion", "sec_num": null }, { "text": "(X-label / POS-label / DEP-label) NT_The: X / DETP / det; NT_man: X / NOUNP / nsubj; NT_give: X / VERBP / root; NT_him: X / PRONP / iobj; NT_a: X / DETP / det; NT_box: X / NOUNP / obj", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Structural conversion", "sec_num": null }, { "text": "Finally, we performed a total of nine types of conversions (three structures \u00d7 three labelings). Although they have distinct characteristics, all of them embody reasonable phrase structures that are useful for capturing syntax. Figure 3 shows the converted structure of an actual instance from CLAMS. 
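As an illustration of the flat structural conversion with DEP-labeling, the sketch below assumes the dependency tree is given as (word, head, deprel) triples with 1-based head indices (0 for the root), as in CoNLL-U; unlike the paper's trees, leaf dependents are rendered as bare words rather than unary nonterminals, to keep the output short.

```python
# Flat conversion: a head word and all of its dependents become siblings
# under one nonterminal, labeled with the head's dependency relation (DEP-label).

def flat_dep_convert(tokens):
    children = {i: [] for i in range(len(tokens) + 1)}
    for i, (_, head, _) in enumerate(tokens, start=1):
        children[head].append(i)

    def build(i):
        word, _, deprel = tokens[i - 1]
        if not children[i]:
            return word                     # leaf: no dependents
        kids = sorted(children[i] + [i])    # dependents + head, in surface order
        parts = [word if k == i else build(k) for k in kids]
        return "(" + deprel + " " + " ".join(parts) + ")"

    return build(children[0][0])            # start from the sentence head

# "The man give him a box" with heads/relations as in Figure 2
sent = [("The", 2, "det"), ("man", 3, "nsubj"), ("give", 0, "root"),
        ("him", 3, "iobj"), ("a", 6, "det"), ("box", 3, "obj")]
```

Swapping the labeling line to emit "X" everywhere, or the head's POS tag, would yield the X-label and POS-label variants.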
In all settings, the main subject phrase is correctly dominated by NT_pilot, which should contribute to solving the task.", "cite_spans": [], "ref_spans": [ { "start": 238, "end": 246, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Structural conversion", "sec_num": null }, { "text": "Works Well in Every Language?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "What Is the Robust Conversion Which", "sec_num": "4" }, { "text": "In Section 3, we proposed multiple language-independent conversions from DTree to CTree. The intriguing question is: is there a robust conversion that brings out the potential of RNNG in every language? To answer this question, we conducted a thorough experiment to compare the performance of RNNGs trained in each setting.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "What Is the Robust Conversion Which", "sec_num": "4" }, { "text": "Treebank preparation Following Mueller et al. 2020, we extracted Wikipedia articles of the target languages using WikiExtractor 1 to create corpora 2 . We fed the corpora to UDify (Kondratyuk and Straka, 2019) , a multilingual neural dependency parser trained on the entire UD treebanks, to generate a CoNLL-U formatted dependency treebank. Sentences are tokenized beforehand using Stanza (Qi et al., 2020) because UDify requires tokenized text for prediction. The resulting dependency treebank is converted into a constituency treebank using the methods proposed in Section 3. Our treebank contains around 10% non-projective DTrees for every language (between 9% in Russian and 14% in Hebrew), and we omit them in the conversion phase because we cannot obtain valid CTrees from them 3 . As a training set, we picked sentences with 10M tokens at random for each language. 
For the validation and test sets, we picked 5,000 sentences each.", "cite_spans": [ { "start": 167, "end": 196, "text": "(Kondratyuk and Straka, 2019)", "ref_id": "BIBREF8" }, { "start": 376, "end": 393, "text": "(Qi et al., 2020)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4.1" }, { "text": "Training details We used batched RNNG (Noji and Oseki, 2021) to speed up our training. Following Noji and Oseki (2021), we used subword units (Sennrich et al., 2016) with a vocabulary size of 30K. (Figure 3 : Examples of converted CTrees. The sentence is taken from CLAMS and requires recognizing a long-distance dependency intervened by an object relative clause (sentence (1)). For simplicity, we omit the corresponding word of each nonterminal except for pilot, the main subject of the sentence.)", "cite_spans": [ { "start": 38, "end": 60, "text": "(Noji and Oseki, 2021)", "ref_id": "BIBREF17" }, { "start": 142, "end": 165, "text": "(Sennrich et al., 2016)", "ref_id": "BIBREF22" } ], "ref_spans": [ { "start": 166, "end": 174, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4.1" }, { "text": "We set the hyperparameters so that the model size is 35M parameters. We trained each model for 24 hours on a single GPU.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "30K", "sec_num": null }, { "text": "Evaluation metrics To compare the performance among conversions, we evaluated the model trained on each dataset in two aspects: perplexity and syntactic ability based on CLAMS.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "30K", "sec_num": null }, { "text": "Perplexity is a standard metric for assessing the quality of an LM. Since we adopt subword units, we regard a word's probability as the product of its subwords' probabilities. To compute it on RNNG, we performed word-synchronous beam search (Stern et al., 2017) , the default approach implemented in batched RNNG. 
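The subword-to-word perplexity computation can be sketched as follows (a simplification: in the paper the log-probabilities come from word-synchronous beam search, which is not reproduced here).

```python
import math

# A word's probability is the product of its subword pieces' probabilities,
# so its log-probability is their sum; perplexity is normalized by the
# number of words, not the number of subword pieces.

def word_perplexity(subword_logprobs_per_word):
    # subword_logprobs_per_word: one list of natural-log probabilities per word
    total = sum(sum(pieces) for pieces in subword_logprobs_per_word)
    return math.exp(-total / len(subword_logprobs_per_word))

# one word split into two pieces, each with probability 0.5
ppl = word_perplexity([[math.log(0.5), math.log(0.5)]])
```

Normalizing by word count rather than subword count keeps perplexities comparable across models with different vocabularies.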
Following Noji and Oseki (2021), we set the beam size k to 100, the word beam size k_w to 10, and the number of fast-track candidates k_s to 1. Syntactic ability is assessed by accuracy on CLAMS, which is calculated by comparing the probabilities assigned to a grammatical and an ungrammatical sentence. If the model assigns a higher probability to the grammatical sentence, we regard it as correct. Chance accuracy is 0.5.", "cite_spans": [ { "start": 234, "end": 254, "text": "(Stern et al., 2017)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "30K", "sec_num": null }, { "text": "We ran the experiment three times with different random seeds for the initialization of the model, and report the average score with its standard deviation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "30K", "sec_num": null }, { "text": "From now on, we refer to each conversion method by the names of its procedures, such as \"left-first structure\" or \"flat-POS conversion\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Result", "sec_num": "4.2" }, { "text": "Perplexity Table 2 shows the perplexities in each setting. As a whole, flat structures show the lowest perplexity, followed by left-first and right-first, which is consistent across languages. While the flat structure produces stable and relatively low perplexity regardless of labeling methods and languages, left-first and right-first structures perform very poorly with X-label. Syntactic ability Figure 4 shows the CLAMS accuracies in each setting, and Table 3 shows the average scores. From Table 3 , we observe clear distinctions across methods; the best model (shown in bold) is 19% more accurate on average than the worst one (shown in italics) across all languages, indicating the model's clear preference for certain structures. Similar to perplexity, the flat structure performs better and more stably than the others, regardless of labels and languages. While Mueller et al. 
(2020) reported a high variability in scores across languages when an LSTM LM is used, flat structure-based RNNGs do not show such a tendency; almost all the accuracies are above 90%.", "cite_spans": [ { "start": 856, "end": 883, "text": "While Mueller et al. (2020)", "ref_id": null } ], "ref_spans": [ { "start": 11, "end": 18, "text": "Table 2", "ref_id": "TABREF5" }, { "start": 394, "end": 402, "text": "Figure 4", "ref_id": "FIGREF2" }, { "start": 454, "end": 461, "text": "Table 3", "ref_id": "TABREF7" }, { "start": 493, "end": 500, "text": "Table 3", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Result", "sec_num": "4.2" }, { "text": "Looking closely at Figure 4 , we can see that left-first and right-first structures exhibit unstable behavior depending on the labeling; the accuracy with X-label tends to be lower, especially for the categories that require resolving a long-distance dependency, such as 'VP coord (long)', 'Across subj. rel.', 'Across obj. rel.', and 'Across prep'. Discussion Basically, we observed a similar tendency in perplexity and CLAMS score: (1) flat structures show the highest scores; (2) left-first and right-first structures perform poorly with X-label. We conjecture that these tendencies are due to the resulting structure of each conversion; while the flat structure is non-binary, the other two are binary. Since nonterminals in a non-binary tree can have multiple words as children, parsing actions obtained from it contain longer runs of consecutive GEN actions than those from a binary tree. This nature helps the model predict the next word by considering lexical relations, which would contribute to its lower perplexity. Although binary trees get better with the hint of informative labels (POS/DEP), it is difficult to reach the performance of flat structures because their action sequences are fragmented; GEN actions tend to be interrupted by other actions. 
Besides, there are too many NT actions in a binary tree, which can hurt prediction because the information of an important nonterminal (e.g. NT_pilot in Figure 3 ) can be diluted over the actions. The situation becomes worse with X-label; the model cannot distinguish the nonterminal of the main subject from the others, and loses track of what the subject is.", "cite_spans": [], "ref_spans": [ { "start": 23, "end": 31, "text": "Figure 4", "ref_id": "FIGREF2" }, { "start": 1392, "end": 1400, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Result", "sec_num": "4.2" }, { "text": "It is worth noting that perplexity does not always reflect CLAMS accuracy. For example, while right-X conversion produces the worst perplexity for all the languages, it achieves better CLAMS accuracy than left-X conversion in almost all cases. This observation is in line with Hu et al. (2020) , who report a dissociation between perplexity and syntactic performance for English.", "cite_spans": [ { "start": 286, "end": 302, "text": "Hu et al. (2020)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Result", "sec_num": "4.2" }, { "text": "As one possible reason why the flat structure is optimal among the three structures presented, we conjecture that the parseability of the structure is involved. 
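Parseability can be quantified as labeled bracket F1 between two trees. Below is a minimal sketch (our own illustration with trees as nested tuples, not the evaluation script used in the paper):

```python
# Labeled bracket F1: collect (label, start, end) spans from each tree,
# then take the harmonic mean of precision and recall on the span overlap.

def spans(tree, start=0):
    if isinstance(tree, str):                 # leaf: covers one word
        return set(), start + 1
    label, *kids = tree
    out, pos = set(), start
    for k in kids:
        s, pos = spans(k, pos)
        out |= s
    out.add((label, start, pos))              # span covered by this node
    return out, pos

def bracket_f1(gold, pred):
    g, _ = spans(gold)
    p, _ = spans(pred)
    tp = len(g & p)
    if tp == 0:
        return 0.0
    prec, rec = tp / len(p), tp / len(g)
    return 2 * prec * rec / (prec + rec)

gold = ("S", ("NP", "the", "pilot"), "laughs")
pred = ("S", "the", ("VP", "pilot", "laughs"))  # wrong attachment
```

A structure that the model can reproduce accurately under this metric is, in this sense, more parseable.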
To test this hypothesis, we calculated the F1 score between the gold CTrees of the test set and the structures predicted by RNNG for each setting; we regard the resulting CTree of each conversion as the gold tree. Table 4 shows the result (F1 scores, flat / left / right): English: X 0.80\u00b1.00 / 0.34\u00b1.00 / 0.48\u00b1.00; POS 0.79\u00b1.00 / 0.57\u00b1.00 / 0.70\u00b1.00; DEP 0.82\u00b1.01 / 0.59\u00b1.01 / 0.70\u00b1.00. French: X 0.79\u00b1.00 / 0.37\u00b1.00 / 0.58\u00b1.00; POS 0.86\u00b1.00 / 0.63\u00b1.00 / 0.74\u00b1.00; DEP 0.86\u00b1.01 / 0.65\u00b1.01 / 0.75\u00b1.00. German: X 0.90\u00b1.00 / 0.44\u00b1.00 / 0.59\u00b1.00; POS 0.85\u00b1.00 / 0.74\u00b1.00 / 0.76\u00b1.00; DEP 0.91\u00b1.08 / 0.76\u00b1.00 / 0.77\u00b1.00. Hebrew: X 0.81\u00b1.01 / 0.41\u00b1.00 / 0.58\u00b1.00; POS 0.83\u00b1.00 / 0.65\u00b1.00 / 0.73\u00b1.00; DEP 0.83\u00b1.00 / 0.65\u00b1.00 / 0.72\u00b1.00. Russian: X 0.80\u00b1.00 / 0.41\u00b1.00 / 0.59\u00b1.00; POS 0.83\u00b1.00 / 0.62\u00b1.00 / 0.73\u00b1.00; DEP 0.82\u00b1.01 / 0.58\u00b1.00 / 0.68\u00b1.00. The tendencies of the F1 scores are consistent across languages: 1) flat structures show the highest F1 scores; 2) while the scores of flat structures are stable regardless of their labelings, the other two structures exhibit lower scores with X-label. As a whole, the result reflects the tendency discussed in Section 4.2, which supports our hypothesis.", "cite_spans": [], "ref_spans": [ { "start": 717, "end": 724, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Why Does Flat Structure Perform Well?", "sec_num": "4.3" }, { "text": "To further investigate the link between parseability and the capability of solving the task, we obtained parse trees of the CLAMS examples that are solvable only by the flat RNNG across all seeds. We found that only the flat RNNG produces a correct constituency tree, and the structures obtained from left-first and right-first RNNGs are incorrect at a critical point. 
For example, in Figure 5 , while the relation between the subject \"author\" and the target verb \"laughs\" is analyzed clearly in the flat structure, it is ambiguous in the others, possibly causing the misinterpretation that the subject is \"guards\".", "cite_spans": [], "ref_spans": [ { "start": 368, "end": 376, "text": "Figure 5", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Why Does Flat Structure Perform Well?", "sec_num": "4.3" }, { "text": "These findings indicate the importance of choosing the correct tree structure for syntax-aware language modeling; it should be not only hierarchical, but also as parseable as possible.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Why Does Flat Structure Perform Well?", "sec_num": "4.3" }, { "text": "Through analysis of the conversions, we found that (1) the flat structure performs stably well in every setting, and (2) while the CLAMS accuracy of the flat structure does not differ significantly depending on its labeling, for perplexity flat-DEP performs best for more than half of the languages and is not inferior for the others. Therefore, we conclude that flat-DEP conversion is the most robust conversion across languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Why Does Flat Structure Perform Well?", "sec_num": "4.3" }, { "text": "In this section, we demonstrate the benefits of injecting syntactic biases into the model in a multilingual setting. We obtained the CLAMS score of an RNNG trained on the flat-DEP treebank (flat-DEP RNNG for short) and compared it against baselines.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Advantage of Syntax Injection to LMs in a Multilingual Setting", "sec_num": "5" }, { "text": "Experimental setup The experiment was conducted in a setting as close to the previous work as possible. Following Mueller et al. (2020), we extracted Wikipedia articles of 80M tokens as the training set. 
The hyperparameters of the LSTM LM are set following Noji and Takamura (2020) because that configuration performs best on the dataset of Marvin and Linzen (2018) 4 . We used subword units with a vocabulary size of 30K, and the sizes of the RNNG and the LSTM LM are set to be the same (35M).", "cite_spans": [ { "start": 247, "end": 271, "text": "Noji and Takamura (2020)", "ref_id": "BIBREF18" }, { "start": 320, "end": 346, "text": "Marvin and Linzen (2018) 4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Advantage of Syntax Injection to LMs in a Multilingual Setting", "sec_num": "5" }, { "text": "Result Table 5 shows the result. In addition to the scores of the models we trained (flat-DEP RNNG, LSTM (N20)), we display the scores of the LSTM LM and mBERT reported in the original paper (LSTM (M20) and mBERT (M20), Mueller et al., 2020) . Overall, we can see the superiority of the RNNG across languages, especially on tasks that require the analysis of long-distance dependencies: 'VP coord (long)', 'Across subj. rel.', 'Across obj. rel.', and 'Across prep'. While previous work suggested that LSTM LMs potentially have a limitation in handling object relative clauses (Noji and Takamura, 2020) , our result suggests that the RNNG does not have such a limitation, thanks to its explicitly injected syntactic biases. LSTM (N20) follows the setting of Noji and Takamura (2020) . LSTM (M20) and mBERT (M20) scores are quoted from Table 1, 2 and 5 in Mueller et al.
(2020)", "ref_id": "BIBREF14" } ], "ref_spans": [ { "start": 7, "end": 14, "text": "Table 5", "ref_id": "TABREF8" }, { "start": 776, "end": 783, "text": "Table 1", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Advantage of Syntax Injection to LMs in a Multilingual Setting", "sec_num": "5" }, { "text": "We discussed the CTree structure that works robustly regardless of the language, and the superiority of injecting syntactic bias into the model. Our claim is that we can construct language-independent syntax-aware LMs by seeking the best structure for learning RNNGs, which is backed up by our experiments on five languages. To make this claim firm, more investigation is needed in two respects: fine-grained syntactic evaluation and experiments on typologically diverse languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "Fine-grained syntactic evaluation The only linguistic phenomenon covered in CLAMS is agreement. However, previous work has created evaluation sets that examine more diverse syntactic phenomena for English (Hu et al., 2020; Warstadt et al., 2020). We need such fine-grained evaluation in the multilingual setting as well, since superiority in agreement does not imply superiority in every aspect of syntactic knowledge; Kuncoro et al. (2019) suggested that the RNNG performs worse than an LSTM LM in capturing sentential complements or simple negative polarity items. It is challenging to design a multilingual syntactic test set because even agreement based on grammatical categories is not a universal phenomenon. Reasonable metrics are needed that cover broad syntactic phenomena and are applicable to many languages.", "cite_spans": [ { "start": 211, "end": 227, "text": "(Hu et al., 2020", "ref_id": "BIBREF7" }, { "start": 228, "end": 251, "text": ", Warstadt et al., 2020", "ref_id": "BIBREF25" }, { "start": 410, "end": 431, "text": "Kuncoro et al.
(2019)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "Languages included in CLAMS (English, French, German, Hebrew, and Russian) are actually not typologically diverse. Apart from language-specific features, all of them share the same ordering of (1) subject, verb, and object (SVO), (2) relative clause and noun (noun-relative clause), (3) adposition and noun phrase (preposition), and so on 5 . If we ran the same experiment on a typologically different language, the result could be somewhat different.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment on typologically diverse languages", "sec_num": null }, { "text": "Although some previous work focused on the syntactic assessment of other languages (Ravfogel et al., 2018; Gulordava et al., 2018) , such attempts are scarce. Future work should design evaluation sets for other languages and explore the extendability to more diverse languages.", "cite_spans": [ { "start": 79, "end": 102, "text": "(Ravfogel et al., 2018;", "ref_id": "BIBREF21" }, { "start": 103, "end": 126, "text": "Gulordava et al., 2018)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Experiment on typologically diverse languages", "sec_num": null }, { "text": "In this paper, we proposed a methodology for learning multilingual RNNGs through dependency tree conversion. We performed multiple conversions to seek a robust structure that works well multilingually, and discussed the effects of the different structures. We demonstrated the superiority of our model over baselines in capturing syntax in a multilingual setting. Since our research is a first step toward multilingual syntax-aware LMs, it is necessary to conduct experiments on more diverse languages to seek a better structure.
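The flat conversion recommended here builds one constituent per head word, spanning the subtrees of its left dependents, the head itself, and the subtrees of its right dependents, labeled with the head's POS tag or dependency relation. The following is a minimal sketch consistent with the paper's recursive pseudocode fragment (`flat(lw, lw.ldeps, lw.rdeps)`); the `DepNode` data structure and the POS-style labels are our own illustrative assumptions.

```python
class DepNode:
    """A dependency-tree node: a word, a label (POS or dependency relation),
    and ordered lists of left and right dependents."""
    def __init__(self, word, label, ldeps=(), rdeps=()):
        self.word, self.label = word, label
        self.ldeps, self.rdeps = list(ldeps), list(rdeps)

def flat(node):
    """Flat conversion: one constituent per head, containing the converted
    left-dependent subtrees, the head word, then the right-dependent subtrees."""
    children = [flat(dep) for dep in node.ldeps]
    children.append(node.word)
    children.extend(flat(dep) for dep in node.rdeps)
    return (node.label, *children)

# "the author laughs": 'laughs' heads 'author', which heads 'the'.
the = DepNode("the", "DET")
author = DepNode("author", "NOUN", ldeps=[the])
laughs = DepNode("laughs", "VERB", ldeps=[author])
print(flat(laughs))  # ('VERB', ('NOUN', ('DET', 'the'), 'author'), 'laughs')
```

Left-first and right-first conversions differ only in how the dependents are nested around the head, which is what produces the deeper, harder-to-parse structures compared here.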
We believe that this research would also contribute to theoretical and cognitive linguistics, because an ultimate goal of linguistics is finding the universal rules of natural language; finding a reasonable structure in engineering would yield useful knowledge for that purpose.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "https://github.com/attardi/wikiextractor 2 Although Mueller et al. (2020) publish the corpora they used, we extracted the dataset ourselves because their corpora contain tokens which would affect parsing.3 Since other languages can contain more non-projective DTrees, we have to consider how to handle them in the future.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Since the English set of CLAMS is a subset of Marvin and Linzen (2018), it is reasonable to choose this model to validate the multilingual extendability.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Typological information is obtained from WALS: https://wals.info/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This paper is based on results obtained from project JPNP20006, commissioned by the New Energy and Industrial Technology Development Organization (NEDO).
For the experiments, we used the computational resources of the AI Bridging Cloud Infrastructure (ABCI) provided by the National Institute of Advanced Industrial Science and Technology (AIST).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A statistical parser for Czech", "authors": [ { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Hajic", "suffix": "" }, { "first": "Lance", "middle": [], "last": "Ramshaw", "suffix": "" }, { "first": "Christoph", "middle": [], "last": "Tillmann", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "505--512", "other_ids": { "DOI": [ "10.3115/1034678.1034754" ] }, "num": null, "urls": [], "raw_text": "Michael Collins, Jan Hajic, Lance Ramshaw, and Christoph Tillmann. 1999. A statistical parser for Czech. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics, pages 505-512, College Park, Maryland, USA.
Association for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Unsupervised cross-lingual representation learning at scale", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Kartikay", "middle": [], "last": "Khandelwal", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Vishrav", "middle": [], "last": "Chaudhary", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Wenzek", "suffix": "" }, { "first": "Francisco", "middle": [], "last": "Guzm\u00e1n", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "8440--8451", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.747" ] }, "num": null, "urls": [], "raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Pro- ceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 8440- 8451, Online. 
Association for Computational Lin- guistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "XNLI: Evaluating crosslingual sentence representations", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Ruty", "middle": [], "last": "Rinott", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Adina", "middle": [], "last": "Williams", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Bowman", "suffix": "" }, { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2475--2485", "other_ids": { "DOI": [ "10.18653/v1/D18-1269" ] }, "num": null, "urls": [], "raw_text": "Alexis Conneau, Ruty Rinott, Guillaume Lample, Adina Williams, Samuel Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. XNLI: Evaluating cross- lingual sentence representations. In Proceedings of the 2018 Conference on Empirical Methods in Nat- ural Language Processing, pages 2475-2485, Brus- sels, Belgium. 
Association for Computational Lin- guistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Recurrent neural network grammars", "authors": [ { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Adhiguna", "middle": [], "last": "Kuncoro", "suffix": "" }, { "first": "Miguel", "middle": [], "last": "Ballesteros", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "199--209", "other_ids": { "DOI": [ "10.18653/v1/N16-1024" ] }, "num": null, "urls": [], "raw_text": "Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. 2016. 
Recurrent neural network grammars. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 199-209, San Diego, California. Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Colorless green recurrent networks dream hierarchically", "authors": [ { "first": "Kristina", "middle": [], "last": "Gulordava", "suffix": "" }, { "first": "Tal", "middle": [], "last": "Linzen", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1195--1205", "other_ids": { "DOI": [ "10.18653/v1/N18-1108" ] }, "num": null, "urls": [], "raw_text": "Kristina Gulordava, Tal Linzen, and Marco Baroni. 2018. Colorless green recurrent networks dream hi- erarchically. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long Papers), pages 1195-1205, New Orleans, Louisiana. 
Association for Computa- tional Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Finding syntax in human encephalography with beam search", "authors": [ { "first": "John", "middle": [], "last": "Hale", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Adhiguna", "middle": [], "last": "Kuncoro", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Brennan", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "2727--2736", "other_ids": { "DOI": [ "10.18653/v1/P18-1254" ] }, "num": null, "urls": [], "raw_text": "John Hale, Chris Dyer, Adhiguna Kuncoro, and Jonathan Brennan. 2018. Finding syntax in human encephalography with beam search. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2727-2736, Melbourne, Australia. Association for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "A systematic assessment of syntactic generalization in neural language models", "authors": [ { "first": "Jennifer", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Jon", "middle": [], "last": "Gauthier", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Qian", "suffix": "" }, { "first": "Ethan", "middle": [], "last": "Wilcox", "suffix": "" }, { "first": "Roger", "middle": [], "last": "Levy", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1725--1744", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.158" ] }, "num": null, "urls": [], "raw_text": "Jennifer Hu, Jon Gauthier, Peng Qian, Ethan Wilcox, and Roger Levy. 2020. A systematic assessment of syntactic generalization in neural language mod- els. 
In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1725-1744, Online. Association for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "75 languages, 1 model: Parsing Universal Dependencies universally", "authors": [ { "first": "Dan", "middle": [], "last": "Kondratyuk", "suffix": "" }, { "first": "Milan", "middle": [], "last": "Straka", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "2779--2795", "other_ids": { "DOI": [ "10.18653/v1/D19-1279" ] }, "num": null, "urls": [], "raw_text": "Dan Kondratyuk and Milan Straka. 2019. 75 languages, 1 model: Parsing Universal Dependencies univer- sally. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 2779-2795, Hong Kong, China. 
Association for Com- putational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "What do recurrent neural network grammars learn about syntax?", "authors": [ { "first": "Adhiguna", "middle": [], "last": "Kuncoro", "suffix": "" }, { "first": "Miguel", "middle": [], "last": "Ballesteros", "suffix": "" }, { "first": "Lingpeng", "middle": [], "last": "Kong", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1249--1258", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adhiguna Kuncoro, Miguel Ballesteros, Lingpeng Kong, Chris Dyer, Graham Neubig, and Noah A. Smith. 2017. What do recurrent neural network gram- mars learn about syntax? In Proceedings of the 15th Conference of the European Chapter of the Associa- tion for Computational Linguistics: Volume 1, Long Papers, pages 1249-1258, Valencia, Spain. 
Associa- tion for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Scalable syntaxaware language models using knowledge distillation", "authors": [ { "first": "Adhiguna", "middle": [], "last": "Kuncoro", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Laura", "middle": [], "last": "Rimell", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Phil", "middle": [], "last": "Blunsom", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "3472--3484", "other_ids": { "DOI": [ "10.18653/v1/P19-1337" ] }, "num": null, "urls": [], "raw_text": "Adhiguna Kuncoro, Chris Dyer, Laura Rimell, Stephen Clark, and Phil Blunsom. 2019. Scalable syntax- aware language models using knowledge distillation. In Proceedings of the 57th Annual Meeting of the As- sociation for Computational Linguistics, pages 3472- 3484, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Assessing the ability of LSTMs to learn syntaxsensitive dependencies", "authors": [ { "first": "Tal", "middle": [], "last": "Linzen", "suffix": "" }, { "first": "Emmanuel", "middle": [], "last": "Dupoux", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2016, "venue": "Transactions of the Association for Computational Linguistics", "volume": "4", "issue": "", "pages": "521--535", "other_ids": { "DOI": [ "10.1162/tacl_a_00115" ] }, "num": null, "urls": [], "raw_text": "Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the ability of LSTMs to learn syntax- sensitive dependencies. 
Transactions of the Associa- tion for Computational Linguistics, 4:521-535.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Targeted syntactic evaluation of language models", "authors": [ { "first": "Rebecca", "middle": [], "last": "Marvin", "suffix": "" }, { "first": "Tal", "middle": [], "last": "Linzen", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1192--1202", "other_ids": { "DOI": [ "10.18653/v1/D18-1151" ] }, "num": null, "urls": [], "raw_text": "Rebecca Marvin and Tal Linzen. 2018. Targeted syn- tactic evaluation of language models. In Proceed- ings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1192-1202, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Comparing top-down and bottom-up neural generative dependency models", "authors": [ { "first": "Austin", "middle": [], "last": "Matthews", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)", "volume": "", "issue": "", "pages": "227--237", "other_ids": { "DOI": [ "10.18653/v1/K19-1022" ] }, "num": null, "urls": [], "raw_text": "Austin Matthews, Graham Neubig, and Chris Dyer. 2019. Comparing top-down and bottom-up neu- ral generative dependency models. In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 227-237, Hong Kong, China. 
Association for Computational Linguis- tics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Cross-linguistic syntactic evaluation of word prediction models", "authors": [ { "first": "Aaron", "middle": [], "last": "Mueller", "suffix": "" }, { "first": "Garrett", "middle": [], "last": "Nicolai", "suffix": "" }, { "first": "Panayiota", "middle": [], "last": "Petrou-Zeniou", "suffix": "" }, { "first": "Natalia", "middle": [], "last": "Talmina", "suffix": "" }, { "first": "Tal", "middle": [], "last": "Linzen", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "5523--5539", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.490" ] }, "num": null, "urls": [], "raw_text": "Aaron Mueller, Garrett Nicolai, Panayiota Petrou- Zeniou, Natalia Talmina, and Tal Linzen. 2020. Cross-linguistic syntactic evaluation of word predic- tion models. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 5523-5539, Online. 
Association for Computa- tional Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Universal Dependencies v1: A multilingual treebank collection", "authors": [ { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "Marie-Catherine", "middle": [], "last": "De Marneffe", "suffix": "" }, { "first": "Filip", "middle": [], "last": "Ginter", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Haji\u010d", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Mcdonald", "suffix": "" }, { "first": "Slav", "middle": [], "last": "Petrov", "suffix": "" }, { "first": "Sampo", "middle": [], "last": "Pyysalo", "suffix": "" }, { "first": "Natalia", "middle": [], "last": "Silveira", "suffix": "" }, { "first": "Reut", "middle": [], "last": "Tsarfaty", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Zeman", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)", "volume": "", "issue": "", "pages": "1659--1666", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joakim Nivre, Marie-Catherine de Marneffe, Filip Gin- ter, Yoav Goldberg, Jan Haji\u010d, Christopher D. Man- ning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, Reut Tsarfaty, and Daniel Zeman. 2016. Universal Dependencies v1: A multilingual treebank collection. In Proceedings of the Tenth In- ternational Conference on Language Resources and Evaluation (LREC'16), pages 1659-1666, Portoro\u017e, Slovenia. 
European Language Resources Association (ELRA).", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Universal Dependencies v2: An evergrowing multilingual treebank collection", "authors": [ { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "Marie-Catherine", "middle": [], "last": "De Marneffe", "suffix": "" }, { "first": "Filip", "middle": [], "last": "Ginter", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Haji\u010d", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Sampo", "middle": [], "last": "Pyysalo", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Schuster", "suffix": "" }, { "first": "Francis", "middle": [], "last": "Tyers", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Zeman", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 12th Language Resources and Evaluation Conference", "volume": "", "issue": "", "pages": "4034--4043", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joakim Nivre, Marie-Catherine de Marneffe, Filip Gin- ter, Jan Haji\u010d, Christopher D. Manning, Sampo Pyysalo, Sebastian Schuster, Francis Tyers, and Daniel Zeman. 2020. Universal Dependencies v2: An evergrowing multilingual treebank collection. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 4034-4043, Marseille, France. 
European Language Resources Association.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Effective batching for recurrent neural network grammars", "authors": [ { "first": "Hiroshi", "middle": [], "last": "Noji", "suffix": "" }, { "first": "Yohei", "middle": [], "last": "Oseki", "suffix": "" } ], "year": 2021, "venue": "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021", "volume": "", "issue": "", "pages": "4340--4352", "other_ids": { "DOI": [ "10.18653/v1/2021.findings-acl.380" ] }, "num": null, "urls": [], "raw_text": "Hiroshi Noji and Yohei Oseki. 2021. Effective batching for recurrent neural network grammars. In Find- ings of the Association for Computational Linguis- tics: ACL-IJCNLP 2021, pages 4340-4352, Online. Association for Computational Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "An analysis of the utility of explicit negative examples to improve the syntactic abilities of neural language models", "authors": [ { "first": "Hiroshi", "middle": [], "last": "Noji", "suffix": "" }, { "first": "Hiroya", "middle": [], "last": "Takamura", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "3375--3385", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.309" ] }, "num": null, "urls": [], "raw_text": "Hiroshi Noji and Hiroya Takamura. 2020. An analysis of the utility of explicit negative examples to im- prove the syntactic abilities of neural language mod- els. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3375-3385, Online. 
Association for Computational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Stanza: A python natural language processing toolkit for many human languages", "authors": [ { "first": "Peng", "middle": [], "last": "Qi", "suffix": "" }, { "first": "Yuhao", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yuhui", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Bolton", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations", "volume": "", "issue": "", "pages": "101--108", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-demos.14" ] }, "num": null, "urls": [], "raw_text": "Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D. Manning. 2020. Stanza: A python natural language processing toolkit for many human languages. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics: System Demonstrations, pages 101-108, Online. As- sociation for Computational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Language models are unsupervised multitask learners", "authors": [ { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Rewon", "middle": [], "last": "Child", "suffix": "" }, { "first": "David", "middle": [], "last": "Luan", "suffix": "" }, { "first": "Dario", "middle": [], "last": "Amodei", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. 
Language models are unsupervised multitask learners.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Can LSTM learn to capture agreement? The case of Basque", "authors": [ { "first": "Shauli", "middle": [], "last": "Ravfogel", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "Francis", "middle": [], "last": "Tyers", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP", "volume": "", "issue": "", "pages": "98--107", "other_ids": { "DOI": [ "10.18653/v1/W18-5412" ] }, "num": null, "urls": [], "raw_text": "Shauli Ravfogel, Yoav Goldberg, and Francis Tyers. 2018. Can LSTM learn to capture agreement? The case of Basque. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 98-107, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Neural machine translation of rare words with subword units", "authors": [ { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1715--1725", "other_ids": { "DOI": [ "10.18653/v1/P16-1162" ] }, "num": null, "urls": [], "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. 
In Proceedings of the 54th Annual Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers), pages 1715-1725,", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Association for Computational Linguistics", "authors": [ { "first": "Germany", "middle": [], "last": "Berlin", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Berlin, Germany. Association for Computational Lin- guistics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Effective inference for generative neural parsing", "authors": [ { "first": "Mitchell", "middle": [], "last": "Stern", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Fried", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1695--1700", "other_ids": { "DOI": [ "10.18653/v1/D17-1178" ] }, "num": null, "urls": [], "raw_text": "Mitchell Stern, Daniel Fried, and Dan Klein. 2017. Ef- fective inference for generative neural parsing. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1695-1700, Copenhagen, Denmark. 
Association for Computational Linguistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "BLiMP: A benchmark of linguistic minimal pairs for English", "authors": [ { "first": "Alex", "middle": [], "last": "Warstadt", "suffix": "" }, { "first": "Alicia", "middle": [], "last": "Parrish", "suffix": "" }, { "first": "Haokun", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Anhad", "middle": [], "last": "Mohananey", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Sheng-Fu", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Samuel", "middle": [ "R" ], "last": "Bowman", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Society for Computation in Linguistics 2020", "volume": "", "issue": "", "pages": "409--410", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alex Warstadt, Alicia Parrish, Haokun Liu, Anhad Mohananey, Wei Peng, Sheng-Fu Wang, and Samuel R. Bowman. 2020. BLiMP: A benchmark of linguistic minimal pairs for English. In Proceedings of the Society for Computation in Linguistics 2020, pages 409-410, New York, New York. Association for Computational Linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "uris": null, "text": "Illustration of stack-RNNG behavior. The stack-LSTM represents the current partial tree, in which adjacent vectors are connected in the network. At a REDUCE action, the corresponding vector is updated with the composition function (underlined).", "num": null }, "FIGREF2": { "type_str": "figure", "uris": null, "text": "Accuracies of CLAMS for RNNGs trained on each setting.", "num": null }, "FIGREF3": { "type_str": "figure", "uris": null, "text": "Structures of a CLAMS example predicted by {flat, left-first, right-first}-POS RNNG. 
This example is solvable only by the flat-POS RNNG across all seeds.", "num": null }, "TABREF1": { "type_str": "table", "html": null, "text": "flat(lw, lw.ldeps, lw.rdeps) for lw in ldeps];", "num": null, "content": "
Function flat(w, ldeps, rdeps):
    lNT \u2190 [flat(lw, lw.ldeps, lw.rdeps) for lw in ldeps];
    rNT \u2190 [flat(rw, rw.ldeps, rw.rdeps) for rw in rdeps];
    return [lNT [w] rNT].removeEmptyList;
Function lf(w, ldeps, rdeps):
    if ldeps is not empty then /* Pop left-most dependent */
        lw \u2190 ldeps.pop();
        lNT \u2190 [lf(lw, lw.ldeps, lw.rdeps)];
        rNT \u2190 [lf(w, ldeps, rdeps)];
    else if rdeps is not empty then /* Pop right-most dependent */
" }, "TABREF3": { "type_str": "table", "html": null, "text": "", "num": null, "content": "" }, "TABREF5": { "type_str": "table", "html": null, "text": "Test set perplexity of each setting. Lower is better. \"left\" and \"right\" in the table are abbreviations of \"left-first\" and \"right-first\", respectively.", "num": null, "content": "
" }, "TABREF7": { "type_str": "table", "html": null, "text": "CLAMS scores averaged by task category.", "num": null, "content": "
" }, "TABREF8": { "type_str": "table", "html": null, "text": "CLAMS scores for flat-DEP RNNG and baselines. LSTM (N20) is a model of which hyperparameters are set as with", "num": null, "content": "
" } } } }