{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:08:29.147649Z" }, "title": "This is a BERT. Now there are several of them. Can they generalize to novel words?", "authors": [ { "first": "Coleman", "middle": [], "last": "Haley", "suffix": "", "affiliation": { "laboratory": "", "institution": "Johns Hopkins University", "location": {} }, "email": "chaley7@jhu.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Recently, large-scale pre-trained neural network models such as BERT have achieved many state-of-the-art results in natural language processing. Recent work has explored the linguistic capacities of these models. However, no work has focused on the ability of these models to generalize these capacities to novel words. This type of generalization is exhibited by humans (Berko, 1958), and is intimately related to morphology-humans are in many cases able to identify inflections of novel words in the appropriate context. This type of morphological capacity has not been previously tested in BERT models, and is important for morphologically-rich languages, which are under-studied in the literature regarding BERT's linguistic capacities. In this work, we investigate this by considering monolingual and multilingual BERT models' abilities to agree in number with novel plural words in English, French, German, Spanish, and Dutch. We find that many models are not able to reliably determine plurality of novel words, suggesting potential deficiencies in the morphological capacities of BERT models.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Recently, large-scale pre-trained neural network models such as BERT have achieved many state-of-the-art results in natural language processing. Recent work has explored the linguistic capacities of these models. However, no work has focused on the ability of these models to generalize these capacities to novel words. This type of generalization is exhibited by humans (Berko, 1958), and is intimately related to morphology-humans are in many cases able to identify inflections of novel words in the appropriate context. This type of morphological capacity has not been previously tested in BERT models, and is important for morphologically-rich languages, which are under-studied in the literature regarding BERT's linguistic capacities. In this work, we investigate this by considering monolingual and multilingual BERT models' abilities to agree in number with novel plural words in English, French, German, Spanish, and Dutch. We find that many models are not able to reliably determine plurality of novel words, suggesting potential deficiencies in the morphological capacities of BERT models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In recent years, large-scale pre-trained neural network models have transformed the landscape of natural language processing (NLP) research. This approach to NLP became prominent after several models such as BERT (Devlin et al., 2019) achieved new state of the art performance on a wide range of NLP tasks such as natural language inference. The successful performance of BERT and other models like it on natural language understanding tasks suggests that they may be learning valuable general linguistic competencies. However, it is not clear whether these models are able to generalize these competencies to unseen words. 
With the large training sets of these models (3.3 billion tokens in Devlin et al. (2019)), their state-of-the-art-establishing performance may feasibly have been achieved without ever being tested on a word that was not in the training set.", "cite_spans": [ { "start": 213, "end": 234, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF3" }, { "start": 693, "end": 713, "text": "Devlin et al. (2019)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Nevertheless, BERT may need to be concerned about unseen words. Increasingly, there is interest in creating BERT and BERT-like models trained on large corpora of languages other than English. In comparison to English, many of the world's languages exhibit a much greater amount of inflectional morphology. However, most of the results motivating this explosion of BERT models come from English NLP. It is unclear, then, how well BERT will generalize to languages with complex morphology. While BERT models are being developed for other languages, many of these models have been less comprehensively evaluated than English BERT. For instance, the publicly available BERT model for Turkish (Schweter, 2020), one of the most morphologically complex languages for which a BERT model is available, has only been evaluated on named entity recognition and part-of-speech tagging. It is thus unclear how well the model would fare on more complex NLP tasks.", "cite_spans": [ { "start": 670, "end": 686, "text": "(Schweter, 2020)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this work, we investigate BERT's ability to capture this type of information by studying its ability to identify the correct plural form of novel words in English, French, Spanish, Dutch, and German. We find that BERT is able to distinguish plural and singular forms well enough to perform number agreement significantly above chance in all languages. However, many BERT models perform substantially worse on novel words than on words in the training set, even when prompted with an example that shows the singular form, a task which humans are known to be capable of (Berko, 1958). This indicates that even simple morphological capacities are not reliably acquired in a human-like way in the BERT training paradigm, showing room for improvement in future models.", "cite_spans": [ { "start": 559, "end": 572, "text": "(Berko, 1958)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "BERT is part of a growing research direction of pretraining deep learning models, often a variant of the \"Transformer\" architecture (Vaswani et al., 2017), on large amounts of natural language data using some variant of a language modelling objective. This line of research includes other successful models such as ELMo (Peters et al., 2018) and XLNet (Yang et al., 2019). All of these models are trained on very large corpora, with ELMo being the smallest (trained on 1 billion tokens in Peters et al. (2018), in contrast to BERT's 4 billion tokens in Devlin et al. (2019)).
All of these models are also highly computationally intensive to train, so it is desirable to avoid training new BERT models.", "cite_spans": [ { "start": 130, "end": 152, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF24" }, { "start": 319, "end": 340, "text": "(Peters et al., 2018)", "ref_id": "BIBREF19" }, { "start": 351, "end": 370, "text": "(Yang et al., 2019)", "ref_id": "BIBREF27" }, { "start": 489, "end": 509, "text": "Peters et al. (2018)", "ref_id": "BIBREF19" }, { "start": 553, "end": 573, "text": "Devlin et al. (2019)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "BERT uses a transformer-based architecture, making it bidirectionally sensitive. It is trained on a masked language modelling objective, meaning that it takes in as input a sequence with some words replaced with a [MASK] token, and is expected to output the original sequence. To enable this, a final fully connected layer and softmax is added after the transformer encoder to produce the desired output. This means BERT is \"out of the box\" capable of answering exactly those questions that can be posed as replacing [MASK] tokens.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "BERT-like models are also generally so-called open-vocabulary language models, meaning they can assign a probability to any string. This enables them to give probabilities to novel words and novel forms of known words, giving BERT the capacity to learn morphological generalizations. This is achieved through the use of subword segmentation, in which a strategy such as byte-pair encoding (BPE) (Sennrich et al., 2016) or Unigram LM segmentation (such as WordPiece (Kudo, 2018) and the related SentencePiece) is used to turn words into a sequence of multi-character tokens.", "cite_spans": [ { "start": 395, "end": 418, "text": "(Sennrich et al., 2016)", "ref_id": "BIBREF22" }, { "start": 465, "end": 477, "text": "(Kudo, 2018)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "These segmentation strategies use statistical methods to determine which multi-character tokens are added to their vocabularies, meaning that high-frequency sub-word strings will more likely be added as tokens. These tokens may or may not correspond to morpheme boundaries. If they do not, then models that rely on them will encounter the same morpheme expressed in many distinct tokens, requiring the model to learn agreement for all tokens which may contain, e.g., the plural affix. This may mean that uncommon segments containing inflectional affixes will be less reliable in agreement, since they have no relation in representation to frequently-occurring subwords containing the same inflection.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "Previous work has explored the types of generalizations predicted by linguistic and psycholinguistic theory that have been learned by the English BERT models. This work has focused primarily on syntactic generalizations. Initial work by Goldberg found that BERT models showed promise at modelling short-and long-distance subject-verb agreement as well as reflexive anaphora phenomena (Goldberg, 2019). van Schijndel et al. (2019) revisited these results without giving a bidirectional context to BERT and found it performed at best no better than existing LSTM models (contrasting with Goldberg's work). 
Ettinger (2020) differentiates her work from these works by noting their primarily syntactic focus, and promises to test more diverse linguistic capacities, but focuses on semantic and pragmatic capacities, showing among other things that BERT fails to fully model the meaning of negation. Recently, Mueller et al. (2020) presented cross-linguistic targeted syntactic evaluation of BERT, but only considered multilingual BERT. Most of the work on the formal linguistic capacities has not considered monolingual BERT models for languages other than English (one recent exception being Edmiston (2020)).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "BERT and linguistic competence", "sec_num": "2.1" }, { "text": "Very recently, a few works have considered the morphological aspects of BERT. Bostrom and Durrett (2020) argue that byte-pair encoding less faithfully expresses English morphology than Unigram segmentation, and show a performance improvement in downstream tasks with a unigramsegmentation-based BERT model. Hofmann et al. (2020) show that BERT can be fine-tuned with a classification layer to complete a derivational morphology cloze task, finding that imposing morpheme boundaries with hyphenation on the input side ultimately improved BERT's performance at this task. Finally, Edmiston (2020) investigates several monolingual BERT models for representations of morphological information. Edmiston shows that many morphological features can be extracted by training a simple classifier on a BERT layer. He also identifies a small number of attention heads in each model that seem to pay attention to the morphologically marked words in agreement phenomena over other words. However, this agreement experiment makes no attempt to isolate the mor-phological information from words which BERT has seen, allowing for the possibility of morphological \"memorization\" rather than true human-like generalization.", "cite_spans": [ { "start": 78, "end": 104, "text": "Bostrom and Durrett (2020)", "ref_id": "BIBREF1" }, { "start": 307, "end": 328, "text": "Hofmann et al. (2020)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "BERT and linguistic competence", "sec_num": "2.1" }, { "text": "Previous work in psycholinguistics has investigated the human capacity for morphological generalization, and it is this work we intend to build on to explore BERT's morphological capacity. Specifically, Berko (1958) presents the Wug test, a simple test for productive morphology in which speakers are prompted with a sentence containing one form of an unknown word and prompted to complete a sentence with another form. We present a task inspired by this one in which the ability to recognize an unseen form of a word is probed through the ability to correctly agree with that word's form. In this work, we specifically investigate subject-verb number agreement.", "cite_spans": [ { "start": 203, "end": 215, "text": "Berko (1958)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "BERT and linguistic competence", "sec_num": "2.1" }, { "text": "This work focuses on BERT's ability to recognize novel words as singular or plural. This construction was chosen for its testability (through number agreement on verbs) and its disparity in complexity between languages. In English, French, Dutch and Spanish, a large majority of plurals are derived according to rules that can be expressed simply in terms of adding a suffix corresponding to the suffix of the base noun. 
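To illustrate what it means for these plurals to be expressible as simple suffix rules, the sketch below spells out the three written-English strategies described in the next subsection as a short function. It is an illustration added here, using rough orthographic approximations and ignoring known irregulars; it is not the procedure used to construct the stimuli.

```python
# Illustrative sketch (not the paper's stimulus-construction code): the three
# written-English pluralization strategies of Section 3.1, approximated purely
# orthographically and ignoring known irregular nouns.
def pluralize(noun: str) -> str:
    sibilant_ending = noun.endswith(("s", "x", "z", "ch", "sh"))
    lax_vowel_before = len(noun) >= 2 and noun[-2] in "aeiou"
    if sibilant_ending and lax_vowel_before:
        # Strategy 3: copy the final letter and add -es (e.g. "quiz" -> "quizzes")
        return noun + noun[-1] + "es"
    if sibilant_ending:
        # Strategy 2: add -es (e.g. "dish" -> "dishes")
        return noun + "es"
    # Strategy 1: add -s (all other cases)
    return noun + "s"

assert pluralize("wug") == "wugs"
assert pluralize("dish") == "dishes"
assert pluralize("quiz") == "quizzes"
```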
Further, in French and Spanish, the plurality of a noun is unambiguous if it is preceded by a determiner.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methods", "sec_num": "3" }, { "text": "In written English, the plural of most nouns is formed by one of three strategies: either 1. -s is added to the end of the noun, 2. -es is added to the end, or 3. a copy of the final letter followed by -es is added to the end. Strategy 2 is used after sibilant sounds, and Strategy 3 is generally used after sibilant sounds which are preceded by a lax vowel. Strategy 1 is used in all other cases (except known irregulars). The words selected for this study were chosen such that their spelling indicates an obvious phonetic realization and such that they are distributed across these three strategies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Plural formations of the languages", "sec_num": "3.1" }, { "text": "The French and Spanish plural constructions are arguably simpler than in English. In French, plural nouns are generally formed by adding -s to the end, unless the noun ends in s, z, or x, in which case nothing is added; in -eau, in which case -x is added; or in -al or -ail, in which case the suffix may be removed and -aux added. In addition to inflecting the word, French marks plurality in its definite determiner, making it unambiguous from the determiner whether a noun is singular or plural.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Plural formations of the languages", "sec_num": "3.1" }, { "text": "On the other hand, the German plural construction is significantly more complex than in English. Like French and Spanish, German marks plurality in the determiner, but the determiner used to indicate plurality in the nominative case is shared with that used to mark feminine noun gender, meaning that neither gender nor number can be determined purely from the determiner. Consider, for example, the woman\u2192the women, which in Spanish is la mujer\u2192las mujeres, but in German is die Frau\u2192die Frauen. Further, German uses several different strategies to form the plural, including adding nothing to the word (-\u2205), adding -e, adding -(e)r, adding -(e)n, and adding -s. These strategies (with the exception of -(e)n) may also be combined with \"umlautification\" of the stressed vowel in the noun, yielding a total of 7 possible plural markers, none of which constitute a majority of examples (K\u00f6pcke, 1988; Wiese, 2000). Table 1: The German plural cannot be predicted from the form of the singular word. Here, we see similar singular words that form the plural in different ways.", "cite_spans": [ { "start": 875, "end": 889, "text": "(K\u00f6pcke, 1988;", "ref_id": "BIBREF11" }, { "start": 890, "end": 901, "text": "Wiese, 2000", "ref_id": "BIBREF26" } ], "ref_spans": [ { "start": 902, "end": 909, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Plural formations of the languages", "sec_num": "3.1" }, { "text": "The literature on the German plural generally considers it to be a phenomenon over lexical classes which are not phonologically predictable. Several tendencies can be observed in German plural formation, though few are universal. For example, nouns ending in -e typically form their plural by adding -n (Trommer, 2020). Nevertheless, even near-minimal pairs of nouns may form their plural in distinct ways (see Table 1).
Indeed, adult German speakers often vary widely in their choices for novel words (Zaretsky et al., 2013; McCurdy et al., 2020). Accordingly, substantial prior work has suggested the German plural may be a challenging pattern for neural networks to learn (Feldman, 2005; Marcus et al., 1995; McCurdy et al., 2020).", "cite_spans": [ { "start": 303, "end": 318, "text": "(Trommer, 2020)", "ref_id": "BIBREF23" }, { "start": 504, "end": 527, "text": "(Zaretsky et al., 2013;", "ref_id": "BIBREF28" }, { "start": 528, "end": 549, "text": "McCurdy et al., 2020)", "ref_id": "BIBREF17" }, { "start": 678, "end": 693, "text": "(Feldman, 2005;", "ref_id": "BIBREF6" }, { "start": 694, "end": 714, "text": "Marcus et al., 1995;", "ref_id": "BIBREF14" }, { "start": 715, "end": 736, "text": "McCurdy et al., 2020)", "ref_id": "BIBREF17" } ], "ref_spans": [ { "start": 412, "end": 419, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Plural formations of the languages", "sec_num": "3.1" }, { "text": "The Dutch plural represents an interesting intermediate case. As in German, the determiner gives only partial information about plurality, with the determiners het and de both being used for singular nouns, but only de used with plural nouns. The plural in Dutch is constructed using either the ending -en or -s. Generally, -en is used to form the plural of nouns ending with a stressed syllable, and -s is used with nouns ending in an unstressed syllable, although this generalization is not perfect (van der Hulst and Kooij, 1998).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Plural formations of the languages", "sec_num": "3.1" }, { "text": "This experiment probes the ability of BERT to recognize the plurals of novel words as such. We probe this indirectly, through a number agreement task following the setup in van Schijndel et al. (2019). As in that study, we use the challenge set from Marvin and Linzen (2018) as a starting point. Number agreement was chosen as a task because it is not fully understood how to treat BERT as a generative model. Therefore, we probe plural recognition through number agreement, an auxiliary task at which BERT has been shown to succeed (Goldberg, 2019). This task is formulated as a forced choice between a plural verb form and a singular verb form. The Marvin and Linzen (2018) challenge set was translated into English, German, Dutch, Spanish, and French by fluent speakers with an elementary background in formal linguistics. These languages each have a singular-plural distinction and subject-verb number agreement. Syntactic constructions not possible in all five languages were omitted. Some verbs in each dataset were changed to ensure each verb was a single token for all models in that language. The datasets in each language were then modified to replace the subject of the targeted verb with a non-word. For each language, 24 non-words were used. English, French, Spanish, and Dutch non-words were manually created by fluent speakers, while the 24 German non-words were taken from McCurdy et al. (2020), to account for the fact that the German plural of non-words is known to be inconsistent across speakers. The plural formation chosen by a plurality of German speakers in McCurdy et al. (2020) for each German non-word was used; genders were chosen to be distributed uniformly.", "cite_spans": [ { "start": 509, "end": 525, "text": "(Goldberg, 2019)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental setup", "sec_num": "3.2" }, { "text": "The BERT models were evaluated on number agreement on the original datasets and the non-word datasets. Models were evaluated bidirectionally, as in Goldberg (2019), to provide a maximally charitable estimate of BERT's morphological capacity in each language.", "cite_spans": [ { "start": 147, "end": 162, "text": "Goldberg (2019)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental setup", "sec_num": "3.2" }, { "text": "Finally, models were reevaluated on the non-word data with a \"prime\" for the non-word. In English, the prime takes the form of the sentence \"This is a \", where the blank is filled with the singular form of the novel noun from the target sentence and the appropriate determiner for the noun's gender is selected. This construction was translated into each language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental setup", "sec_num": "3.2" }, { "text": "While it may seem unintuitive that BERT could benefit from the use of this prime at test time, since it is unable to adjust its weights, with self-attention it is theoretically possible to encode a simple \"rule\" for using the number of a noun seen for the first time (as disambiguated via subject-verb agreement) to influence number agreement for a noun with a similar form. It is this possibility, as well as the human capacity for this type of generalization, that motivates this condition. Examples of stimuli in each condition for English are presented in Table 2.", "cite_spans": [], "ref_spans": [ { "start": 560, "end": 567, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Experimental setup", "sec_num": "3.2" }, { "text": "We consider several cased BERT models, both monolingual and multilingual. The BERT BASE size was used for all languages for comparability between models, as not all languages have a BERT LARGE model available. The models used are summarized in Table 3. Experiments were run on a single Nvidia GeForce GTX 1080 Ti and took under an hour to run. Table 4 presents the results for the simple agreement tests with bidirectional context. Here, \"simple\" refers to sentences consisting of a subject immediately followed by an intransitive verb (e.g., \"The man laughed.\"). The number of singular sentences ranged from 212 to 672 depending on language and non-word condition. As in Goldberg (2019), ceiling performance is found on the original dataset in English. CamemBERT also performed near ceiling. Since the task is a forced choice between two verb forms (singular or plural), and there are an equal number of singular and plural subjects, chance performance is 0.5. Agreement performance on the non-word sentences was much better than chance, even without the inclusion of a prime for the non-word (p < 0.001). This indicates the model is often able to guess whether a noun not seen in training is likely to be singular or plural.
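As a concrete picture of how such a forced-choice agreement probe can be scored with a masked language model, here is a minimal sketch. It assumes the Hugging Face transformers library and an English BERT BASE checkpoint, and it assumes (as in the paper's setup) that both candidate verb forms are single tokens; it is not the paper's released evaluation code.

```python
# Minimal sketch of a forced-choice number-agreement probe with a masked LM.
# Assumes the Hugging Face `transformers` library and that both candidate verb
# forms are single tokens in the model's vocabulary (as the paper's stimuli
# were constructed to guarantee). Not the paper's released code.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-cased")
model.eval()

def choose_verb(masked_sentence: str, singular: str, plural: str) -> str:
    """Return whichever verb form the model scores higher at the [MASK] slot."""
    inputs = tokenizer(masked_sentence, return_tensors="pt")
    mask_index = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero().item()
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_index]
    singular_id = tokenizer.convert_tokens_to_ids(singular)
    plural_id = tokenizer.convert_tokens_to_ids(plural)
    return singular if logits[singular_id] > logits[plural_id] else plural

# A primed non-word stimulus in the style of Table 2:
print(choose_verb("This is a bik. the biks [MASK].", "laughs", "laugh"))
```

Because only the verb position is masked, the rest of the sentence, including any prime, is visible to the model, which corresponds to the bidirectional evaluation setting used here.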
Notably, not all models across languages succeeded completely at subject-verb agreement even with real words on simple sentences-FlauBERT for French and mBERT for Dutch and Spanish achieved less than 0.95 accuracy in this simple task.", "cite_spans": [ { "start": 675, "end": 690, "text": "Goldberg (2019)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 244, "end": 251, "text": "Table 3", "ref_id": "TABREF3" }, { "start": 346, "end": 353, "text": "Table 4", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Experimental setup", "sec_num": "3.2" }, { "text": "Cross-linguistically, there is no consistent trend in whether the model is able to use the prime to achieve better performance. While FlauBERT was the only model to achieve statistically significant gains (p < 0.01) in the non-word case with the addition of the prime, this gain was also significant (p < 0.05) in the real word case, suggesting deeper issues with this model's agreement capabilities generally. Many models were slightly hurt by the inclusion of the prime, suggesting that they may be spuriously agreeing with the prime, even across a sentence boundary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4" }, { "text": "As one might expect, the German BERT models had the lowest average performance on the nonword conditions, with no model surpassing 0.80 accuracy. However, only French models achieved an accuracy of greater than 0.90 in any non-word case. Given that the correct form for French and Spanish agreement can be determined from the noun's article alone, it is surprising that the Spanish models do not fully utilize this heuristic; this heuristic may explain the high French performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4" }, { "text": "mBERT performs about as well as the monolingual BERT models in French and German, but performs worse in Dutch and Spanish. In no case did it significantly out-perform a monolingual model (p > 0.1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4" }, { "text": "To investigate whether the lower novel-word performance was related to the segmentations of the novel words, we measured how often each non-word was associated with an error. We found inconsistent results across models. BERTje was found to perform especially poorly on 4 out of 24 non-words, incorrectly choosing a singular verb for a plural form of the word 93% of the time. On investigating the segmentations, these words were found to be segmented to \"[UNK]\" by the tokenizer. This model uses the standard SentencePiece Unigram tokenizer 5 , ostensibly the same as many of these other models. Typically, this tokenizer is considered to be open-vocabulary, yet it fails to segment these subwords, indicating that this is not strictly true for this very popular implementation. If these 4 words are disregarded, in the no-prime case this model achieves an accuracy of 0.93, the highest of all non-French models across languages. While this error is of substantial concern, it affects only the Dutch BERTje results, as no other models were found to have this behavior.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "Other models were found to frequently fail on the plural or singular forms of some words, such as BETO, which 50% of the time identified \"comanas\" as a singular word form. 
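A cheap sanity check for the failure mode just described is to inspect the tokenizer's output for each non-word before running the probe. The sketch below is illustrative only: the model identifier is the one commonly used for BERTje on the Hugging Face hub, and the non-words are made-up placeholders rather than the paper's stimulus list.

```python
# Sketch: flag non-words that a tokenizer cannot represent and instead maps to
# its unknown token, which makes the agreement probe meaningless for them.
# The model id is the commonly used Hugging Face id for BERTje; the non-words
# are illustrative placeholders, not the paper's stimuli.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("GroNLP/bert-base-dutch-cased")

for word in ["bik", "biks", "fluin", "fluinen"]:
    ids = tokenizer(word, add_special_tokens=False)["input_ids"]
    pieces = tokenizer.convert_ids_to_tokens(ids)
    if tokenizer.unk_token_id in ids:
        print(f"{word}: maps to {tokenizer.unk_token} -- exclude or re-spell")
    else:
        print(f"{word}: {pieces}")
```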
Some models, such as the English model and FlauBERT, instead seemed to be uncertain about the plurality of all forms rather than failing on particular items, making a moderate number of errors at roughly equal rates across non-words. The English model has a bias towards plurality, with plural accuracy 0.19 greater than singular accuracy; FlauBERT, by contrast, makes agreement errors at roughly equal rates regardless of whether the subject is singular or plural.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "With the German models, accuracy was > 0.88 for singular non-words, many of which are disambiguated by their determiner. Accordingly, most errors are plural word forms which the model identified as singular. Both monolingual German models showed a pattern of having many plural forms that were identified as singular > 70% of the time. Most of the remaining forms in each model were correctly identified as plural > 70% of the time, indicating that these models are relatively certain in their predictions. Unfortunately, no clear relationship was found between how closely the segmentation pattern matches the morphology and whether the correct verb is selected for a given non-word. However, it is possible that the frequency with which the final subword segment occurs as a plural affix in a German corpus would be more predictive of which segments are likely to result in errors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "Although these results are largely consistent with the linguistic hypotheses discussed in Section 3.1, there is an uneven amount of training data across the models and languages. Notably, the French monolingual models used the most data, with German models using the least. However, this relationship is different within the mBERT model itself. This model, being trained on the Wikipedia dumps of each language, has the most data for English, followed by German, then French, then Spanish, then Dutch. While this does not completely disentangle the effects of training size (e.g., for the low Dutch performance), it does indicate that the disparity between model performances in, e.g., French and German cannot be explained solely by this factor. Further, almost all models use the same vocabulary size and number of parameters, with only FlauBERT being substantially larger, so this is also likely not a major factor. Therefore, it seems plausible that a primary driver of the differences in model performance between languages reported here is the language's plural construction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Non-linguistic factors", "sec_num": "5.1" }, { "text": "While this study is primarily focused on the morphological and novel-word generalization capacities of BERT, it also investigates more models and languages than prior work on BERT's linguistic capacities; no previous work has looked at more than one monolingual model for a single non-English language. The results here strongly suggest that the field of \"BERTology\" needs to consider the generality of its claims across not just different languages, but even across different BERT models developed for the same language. Even models based on the same architecture and trained on the same language, such as the German deepset and dbmdz models, show different results on this simple task. Although French subject-verb agreement can be determined solely from the subject determiner, FlauBERT achieved only 0.92 accuracy even on simple French sentences, while both mBERT and CamemBERT achieved accuracies higher than 0.95. This suggests that even in languages where subject-verb agreement is relatively simple, the BERT training objective alone cannot guarantee total generalization, even when the model performs well on downstream tasks. It also suggests a need for greater scrutiny of claims about BERT's linguistic capacity that are based on only one or two models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Implications for BERTology", "sec_num": "5.2" }, { "text": "Finally, to consider the relation of these theoretical findings to real-world performance, we ran the German experiments on real nouns again without capitalizing the nouns. While all nouns are capitalized in formal German, this case serves as an example of a simple typo that might occur in real data. Agreement accuracy dropped from 1.00 in all three German models to 0.90, 0.79, and 0.89 in the mBERT, deepset, and dbmdz models, respectively. This indicates that BERT's agreement faculty is highly sensitive to noise, failing to generalize even to highly plausible \"non-words\" (in this case, uncased nouns). This casts substantial doubt on the generality of BERT's extensively studied agreement competencies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Potential practical implications", "sec_num": "5.3" }, { "text": "These results suggest that BERT models have some understanding of morphology when applied to novel words (or at least of the plurals in a few Germanic and Romance languages). Performance is significantly better than chance in simple agreement cases, even when no prime is given. This shows that the BERT models have learned something about what plural and singular forms \"look like.\" However, non-word performance is not especially helped by the inclusion of a priming sentence, indicating that the BERT models in question may not have learned to recognize new words and apply rules to them, as humans might. Further work should investigate what types of primes affect the performance and how.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "The model performance here represents a best case for BERT's morphological capacity on novel words. The plural construction is extremely common in text and is connected to phenomena like agreement, which additionally pressures it to be learned on a non-semantic level. Further, the simple sentences studied here allow for the potential of n-gram-level agreement heuristics which are not possible in the general case.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "That BERT struggles to capture morphology in this way is likely not due to a lack of training data. There are two potential culprits: the tokenization method and the training objective. The FlauBERT results especially indicate that the masked language modeling objective may not sufficiently encourage agreement. Cross-linguistically, the models seem not to have picked up on how to use the information in the prime. In addition, the subword tokenization methods used by BERT and BERT-like models make the morphology learning problem significantly more complicated.
This is because the plural morpheme is connected to some number of final characters of a word as a single token, meaning even plurals formed in the same way may be represented differently. This work points to a need for subword segmentation strategies that more closely mirror a language's morphology than current approaches like Unigram segmentation or byte-pair encoding. In this aspect of language, there remains a large gap between BERT's behavior and human performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "This appendix summarizes some additional differences between the models. It is not clear to the author that these would be related to the pattern of results presented here, but they are included so that interested readers need not hunt them down.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Additional model details", "sec_num": null }, { "text": "The models vary in whether they use an auxiliary task in addition to the masked language modelling (MLM) task described in the background. Some models use next-sentence prediction (NSP), in which the BERT model sees two sentences and must determine whether the second one follows the first. The initial BERT study indicated this improved performance, but subsequent work (Liu et al., 2019) found the opposite to be true, and many subsequent BERT models omit this objective.", "cite_spans": [ { "start": 371, "end": 389, "text": "(Liu et al., 2019)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "A Additional model details", "sec_num": null }, { "text": "BERTje instead includes a sentence order prediction (SOP) task, in which the model is presented with two consecutive sentences which may be in their original order or may be swapped, and must predict if they are in the correct order. (Vries et al., 2019) claim the addition of this objective improves their performance on downstream tasks.", "cite_spans": [ { "start": 234, "end": 254, "text": "(Vries et al., 2019)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "A Additional model details", "sec_num": null }, { "text": "Another attribute of the models that vary is how they handle the masking in MLM. The original BERT model masked out a portion of its training data before training, so every time a sentence is encountered the masked segments are the same. Subsequent works such as Liu et al. (2019) utilize dynamic masking, where different segments are masked in different training epochs. This is often achieved by masking the training data a fixed number of times and cycling through them during training. Finally, some models utilize sub-word masking (SWM), in which individual subwords are masked independently, while other models use whole-word masking (WWM), where all subwords of a single word are always masked together.", "cite_spans": [ { "start": 263, "end": 280, "text": "Liu et al. (2019)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "A Additional model details", "sec_num": null }, { "text": "Objective ( Table 5 : Additional details of models. 
MLM = masked language modeling, NSP = next sentence prediction, SOP = sentence order prediction, SWM = sub-word masking, WWM = whole-word masking .", "cite_spans": [], "ref_spans": [ { "start": 12, "end": 19, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": "https://deepset.ai/german-bert 2 https://github.com/dbmdz/berts 3 https://github.com/google-research/ bert/blob/master/multilingual.md", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Code for generating the dataset and replicating the experiments is available at https://github.com/ColemanHaley/BERTnovel-morphology.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/google/ sentencepiece", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Additional architectural differences between the models are described in Appendix A.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The author would like to thank Paul Schauenburg for his grammaticality judgements and extensive help with preparing the plural non-word data. I would also like to thank Tom McCoy, Tal Linzen, and Colin Wilson for their helpful and clarifying comments. This work was supported in part by a Provost's Undergraduate Research Award from Johns Hopkins University.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "The child's learning of english morphology. Word", "authors": [ { "first": "Jean", "middle": [], "last": "Berko", "suffix": "" } ], "year": 1958, "venue": "", "volume": "14", "issue": "", "pages": "150--177", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jean Berko. 1958. The child's learning of english mor- phology. Word, 14:150-177.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Byte pair encoding is suboptimal for language model pretraining", "authors": [ { "first": "Kaj", "middle": [], "last": "Bostrom", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Durrett", "suffix": "" } ], "year": 2020, "venue": "Computing Research Repository", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2004.03720" ] }, "num": null, "urls": [], "raw_text": "Kaj Bostrom and Greg Durrett. 2020. Byte pair en- coding is suboptimal for language model pretraining. Computing Research Repository, arXiv:2004.03720. To appear in Findings of ACL: EMNLP 2020.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Spanish pre-trained BERT model and evaluation data", "authors": [ { "first": "Jos\u00e9", "middle": [], "last": "Ca\u00f1ete", "suffix": "" }, { "first": "Gabriel", "middle": [], "last": "Chaperon", "suffix": "" }, { "first": "Rodrigo", "middle": [], "last": "Fuentes", "suffix": "" }, { "first": "Jorge", "middle": [], "last": "P\u00e9rez", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jos\u00e9 Ca\u00f1ete, Gabriel Chaperon, Rodrigo Fuentes, and Jorge P\u00e9rez. 2020. Spanish pre-trained BERT model and evaluation data. 
To appear in PML4DC at ICLR 2020.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A systematic analysis of morphological content in bert models for multiple languages", "authors": [ { "first": "Daniel", "middle": [], "last": "Edmiston", "suffix": "" } ], "year": 2020, "venue": "Computing Research Repository", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2004.03032" ] }, "num": null, "urls": [], "raw_text": "Daniel Edmiston. 2020. A systematic analysis of morphological content in bert models for multi- ple languages. Computing Research Repository, arXiv:2004.03032.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "What BERT is not: Lessons from a new suite of psycholinguistic diagnostics for language models", "authors": [ { "first": "Allyson", "middle": [], "last": "Ettinger", "suffix": "" } ], "year": 2020, "venue": "Transactions of the Association for Computational Linguistics", "volume": "8", "issue": "", "pages": "34--48", "other_ids": { "DOI": [ "10.1162/tacl_a_00298" ] }, "num": null, "urls": [], "raw_text": "Allyson Ettinger. 2020. What BERT is not: Lessons from a new suite of psycholinguistic diagnostics for language models. Transactions of the Association for Computational Linguistics, 8:34-48.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Learning and overgeneralization patterns in a connectionist model of the German plural", "authors": [ { "first": "Naomi", "middle": [], "last": "Feldman", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Naomi Feldman. 2005. Learning and overgeneraliza- tion patterns in a connectionist model of the German plural. Master's thesis, University of Vienna.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Assessing BERT's syntactic abilities", "authors": [ { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2019, "venue": "Computing Research Repository", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1901.05287" ] }, "num": null, "urls": [], "raw_text": "Yoav Goldberg. 2019. Assessing BERT's syntac- tic abilities. 
Computing Research Repository, arXiv:1901.05287.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "DagoBERT: Generating derivational morphology with a pretrained language model", "authors": [ { "first": "Valentin", "middle": [], "last": "Hofmann", "suffix": "" }, { "first": "Janet", "middle": [ "B" ], "last": "Pierrehumbert", "suffix": "" }, { "first": "Hinrich", "middle": [], "last": "Sch\u00fctze", "suffix": "" } ], "year": 2020, "venue": "Computing Research Repository", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2005.00672" ] }, "num": null, "urls": [], "raw_text": "Valentin Hofmann, Janet B. Pierrehumbert, and Hin- rich Sch\u00fctze. 2020. DagoBERT: Generating deriva- tional morphology with a pretrained language model. Computing Research Repository, arXiv:2005.00672.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Prosodic choices in plural formation in Dutch", "authors": [ { "first": "H", "middle": [ "G" ], "last": "Van Der Hulst", "suffix": "" }, { "first": "J", "middle": [], "last": "Kooij", "suffix": "" } ], "year": 1998, "venue": "Phonology and morphology of the Germanic languages", "volume": "", "issue": "", "pages": "187--198", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. G. van der Hulst and J. Kooij. 1998. Prosodic choices in plural formation in Dutch. In W. Kehrein and R. Wiese, editors, Phonology and morphol- ogy of the Germanic languages, pages 187-198. Niemeyer, T\u00fcbingen.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Subword regularization: Improving neural network translation models with multiple subword candidates", "authors": [ { "first": "Taku", "middle": [], "last": "Kudo", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "66--75", "other_ids": { "DOI": [ "10.18653/v1/P18-1007" ] }, "num": null, "urls": [], "raw_text": "Taku Kudo. 2018. Subword regularization: Improving neural network translation models with multiple sub- word candidates. In Proceedings of the 56th Annual Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers), pages 66-75, Mel- bourne, Australia. Association for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Schemas in German plural formation", "authors": [ { "first": "Klaus-Michael", "middle": [], "last": "K\u00f6pcke", "suffix": "" } ], "year": 1988, "venue": "Lingua", "volume": "74", "issue": "4", "pages": "303--335", "other_ids": { "DOI": [ "10.1016/0024-3841(88)90064-2" ] }, "num": null, "urls": [], "raw_text": "Klaus-Michael K\u00f6pcke. 1988. Schemas in German plu- ral formation. 
Lingua, 74(4):303 -335.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "FlauBERT: Unsupervised language model pre-training for french", "authors": [ { "first": "Hang", "middle": [], "last": "Le", "suffix": "" }, { "first": "Lo\u00efc", "middle": [], "last": "Vial", "suffix": "" }, { "first": "Jibril", "middle": [], "last": "Frej", "suffix": "" }, { "first": "Vincent", "middle": [], "last": "Segonne", "suffix": "" }, { "first": "Maximin", "middle": [], "last": "Coavoux", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Lecouteux", "suffix": "" }, { "first": "Alexandre", "middle": [], "last": "Allauzen", "suffix": "" }, { "first": "Beno\u00eet", "middle": [], "last": "Crabb\u00e9", "suffix": "" }, { "first": "Laurent", "middle": [], "last": "Besacier", "suffix": "" }, { "first": "Didier", "middle": [], "last": "Schwab", "suffix": "" } ], "year": 2020, "venue": "Proceedings of The 12th Language Resources and Evaluation Conference", "volume": "", "issue": "", "pages": "2479--2490", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hang Le, Lo\u00efc Vial, Jibril Frej, Vincent Segonne, Max- imin Coavoux, Benjamin Lecouteux, Alexandre Al- lauzen, Beno\u00eet Crabb\u00e9, Laurent Besacier, and Didier Schwab. 2020. FlauBERT: Unsupervised language model pre-training for french. In Proceedings of The 12th Language Resources and Evaluation Con- ference, pages 2479-2490, Marseille, France. Euro- pean Language Resources Association.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "RoBERTa: A robustly optimized BERT pretraining approach", "authors": [ { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Jingfei", "middle": [], "last": "Du", "suffix": "" }, { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2019, "venue": "Computing Research Repository", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1907.11692" ] }, "num": null, "urls": [], "raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pre- training approach. Computing Research Repository, arXiv:1907.11692.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "German inflection: the exception that proves the rule", "authors": [ { "first": "G", "middle": [ "F" ], "last": "Marcus", "suffix": "" }, { "first": "U", "middle": [], "last": "Brinkmann", "suffix": "" }, { "first": "H", "middle": [], "last": "Clahsen", "suffix": "" }, { "first": "R", "middle": [], "last": "Wiese", "suffix": "" }, { "first": "S", "middle": [], "last": "Pinker", "suffix": "" } ], "year": 1995, "venue": "Cogn Psychol", "volume": "29", "issue": "3", "pages": "189--256", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. F. Marcus, U. Brinkmann, H. Clahsen, R. Wiese, and S. Pinker. 1995. German inflection: the excep- tion that proves the rule. 
Cogn Psychol, 29(3):189- 256.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "\u00c9ric de la Clergerie, Djam\u00e9 Seddah, and Beno\u00eet Sagot", "authors": [ { "first": "Louis", "middle": [], "last": "Martin", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Muller", "suffix": "" }, { "first": "Pedro Javier Ortiz", "middle": [], "last": "Su\u00e1rez", "suffix": "" }, { "first": "Yoann", "middle": [], "last": "Dupont", "suffix": "" }, { "first": "Laurent", "middle": [], "last": "Romary", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "7203--7219", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.645" ] }, "num": null, "urls": [], "raw_text": "Louis Martin, Benjamin Muller, Pedro Javier Or- tiz Su\u00e1rez, Yoann Dupont, Laurent Romary,\u00c9ric de la Clergerie, Djam\u00e9 Seddah, and Beno\u00eet Sagot. 2020. CamemBERT: a tasty French language model. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7203-7219, Online. Association for Computational Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Targeted syntactic evaluation of language models", "authors": [ { "first": "Rebecca", "middle": [], "last": "Marvin", "suffix": "" }, { "first": "Tal", "middle": [], "last": "Linzen", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1192--1202", "other_ids": { "DOI": [ "10.18653/v1/D18-1151" ] }, "num": null, "urls": [], "raw_text": "Rebecca Marvin and Tal Linzen. 2018. Targeted syn- tactic evaluation of language models. In Proceed- ings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1192-1202, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Inflecting when there's no majority: Limitations of encoder-decoder neural networks as cognitive models for German plurals", "authors": [ { "first": "Kate", "middle": [], "last": "Mccurdy", "suffix": "" }, { "first": "Sharon", "middle": [], "last": "Goldwater", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Lopez", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1745--1756", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.159" ] }, "num": null, "urls": [], "raw_text": "Kate McCurdy, Sharon Goldwater, and Adam Lopez. 2020. Inflecting when there's no majority: Limi- tations of encoder-decoder neural networks as cog- nitive models for German plurals. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1745-1756, On- line. 
Association for Computational Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Cross-linguistic syntactic evaluation of word prediction models", "authors": [ { "first": "Aaron", "middle": [], "last": "Mueller", "suffix": "" }, { "first": "Garrett", "middle": [], "last": "Nicolai", "suffix": "" }, { "first": "Panayiota", "middle": [], "last": "Petrou-Zeniou", "suffix": "" }, { "first": "Natalia", "middle": [], "last": "Talmina", "suffix": "" }, { "first": "Tal", "middle": [], "last": "Linzen", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "5523--5539", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.490" ] }, "num": null, "urls": [], "raw_text": "Aaron Mueller, Garrett Nicolai, Panayiota Petrou- Zeniou, Natalia Talmina, and Tal Linzen. 2020. Cross-linguistic syntactic evaluation of word predic- tion models. In Proceedings of the 58th Annual Meeting of the Association for Computational Lin- guistics, pages 5523-5539, Online. Association for Computational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Deep contextualized word representations", "authors": [ { "first": "Matthew", "middle": [], "last": "Peters", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "2227--2237", "other_ids": { "DOI": [ "10.18653/v1/N18-1202" ] }, "num": null, "urls": [], "raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. In Proceedings of the 2018 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Quantity doesn't buy quality syntax with neural language models", "authors": [ { "first": "Aaron", "middle": [], "last": "Marten Van Schijndel", "suffix": "" }, { "first": "Tal", "middle": [], "last": "Mueller", "suffix": "" }, { "first": "", "middle": [], "last": "Linzen", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "5831--5837", "other_ids": { "DOI": [ "10.18653/v1/D19-1592" ] }, "num": null, "urls": [], "raw_text": "Marten van Schijndel, Aaron Mueller, and Tal Linzen. 2019. Quantity doesn't buy quality syntax with neural language models. In Proceedings of the 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5831-5837, Hong Kong, China. 
Association for Computational Linguistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "BERTurk -BERT models for Turkish", "authors": [ { "first": "Stefan", "middle": [], "last": "Schweter", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.5281/zenodo.3770924" ] }, "num": null, "urls": [], "raw_text": "Stefan Schweter. 2020. BERTurk -BERT models for Turkish. Zenodo.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Neural machine translation of rare words with subword units", "authors": [ { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1715--1725", "other_ids": { "DOI": [ "10.18653/v1/P16-1162" ] }, "num": null, "urls": [], "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715- 1725, Berlin, Germany. Association for Computa- tional Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "The subsegmental structure of German plural allomorphy", "authors": [ { "first": "Jochen", "middle": [], "last": "Trommer", "suffix": "" } ], "year": 2020, "venue": "Natural Language & Linguistic Theory", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1007/s11049-020-09479-7" ] }, "num": null, "urls": [], "raw_text": "Jochen Trommer. 2020. The subsegmental structure of German plural allomorphy. Natural Language & Linguistic Theory.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "30", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 30, pages 5998-6008. Curran Asso- ciates, Inc.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "BERTje: A Dutch BERT Model. 
"authors": [ { "first": "Wietse", "middle": [], "last": "De Vries", "suffix": "" }, { "first": "Andreas", "middle": [], "last": "Van Cranenburgh", "suffix": "" }, { "first": "Arianna", "middle": [], "last": "Bisazza", "suffix": "" }, { "first": "Tommaso", "middle": [], "last": "Caselli", "suffix": "" }, { "first": "Gertjan", "middle": [], "last": "Van Noord", "suffix": "" }, { "first": "Malvina", "middle": [], "last": "Nissim", "suffix": "" } ], "year": 2019, "venue": "Computing Research Repository", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1912.09582" ] }, "num": null, "urls": [], "raw_text": "Wietse de Vries, Andreas van Cranenburgh, Arianna Bisazza, Tommaso Caselli, Gertjan van Noord, and Malvina Nissim. 2019. BERTje: A Dutch BERT Model. Computing Research Repository, arXiv:1912.09582.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "The Phonology of German. Oxford Linguistics", "authors": [ { "first": "R", "middle": [], "last": "Wiese", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Wiese. 2000. The Phonology of German. Oxford Linguistics. Oxford University Press.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "XLNet: Generalized autoregressive pretraining for language understanding", "authors": [ { "first": "Zhilin", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Zihang", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Yiming", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Jaime", "middle": [], "last": "Carbonell", "suffix": "" }, { "first": "Russ", "middle": [ "R" ], "last": "Salakhutdinov", "suffix": "" }, { "first": "Quoc", "middle": [ "V" ], "last": "Le", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems", "volume": "32", "issue": "", "pages": "5753--5763", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alch\u00e9 Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 5753-5763. Curran Associates, Inc.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Differences in plural forms of monolingual German preschoolers and adults", "authors": [ { "first": "Eugen", "middle": [], "last": "Zaretsky", "suffix": "" }, { "first": "Benjamin", "middle": [ "P" ], "last": "Lange", "suffix": "" }, { "first": "Harald", "middle": [ "A" ], "last": "Euler", "suffix": "" }, { "first": "Katrin", "middle": [], "last": "Neumann", "suffix": "" } ], "year": 2013, "venue": "Lingue e Linguaggi", "volume": "10", "issue": "", "pages": "169--180", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eugen Zaretsky, Benjamin P. Lange, Harald A. Euler, and Katrin Neumann. 2013. Differences in plural forms of monolingual German preschoolers and adults. Lingue e Linguaggi, 10:169-180.", "links": null } }, "ref_entries": { "TABREF1": { "num": null, "content": "", "type_str": "table", "html": null, "text": "Condition | Stimulus | Candidates
No prime, real words | The author knows many different foreign languages and [MASK] playing tennis with colleagues. | enjoy/enjoys
Prime, real words | This is a pilot. the pilots [MASK]. | laugh/laughs
Prime, non-words | This is a bik. the biks [MASK]. | laugh/laughs" }, "TABREF2": { "num": null, "content": "
Model | Language | Parameters | Training tokens | Tokenization
BERT BASE (Devlin et al., 2019) | English | 110M | 3.3B | WordPiece 30k
CamemBERT (Martin et al., 2020) | French | 110M | 32.7B | SentencePiece 32k
FlauBERT (Le et al., 2020) | French | 138M | 12.8B | BPE 50k
BETO (Ca\u00f1ete et al., 2020) | Spanish | 110M | 3B | BPE 32k
BERTje (Vries et al., 2019) | Dutch | 110M | 2.4B | SentencePiece 30k
Deepset 1 | German | 110M | 1.8B | SentencePiece 30k
dbmdz 2 | German | 110M | 2.4B | SentencePiece 30k
mBERT 3 | All | 110M | - | WordPiece 110k
", "type_str": "table", "html": null, "text": "Sample agreement stimuli in representative conditions in English. Correct completion is in bold." }, "TABREF3": { "num": null, "content": "", "type_str": "table", "html": null, "text": "" }, "TABREF5": { "num": null, "content": "
", "type_str": "table", "html": null, "text": "Agreement accuracy on simple sentences (e.g. \"The author laughs.\")." } } } }