{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:10:56.921431Z" }, "title": "Analyzing the Domain Robustness of Pretrained Language Models, Layer by Layer", "authors": [ { "first": "Ramesh", "middle": [], "last": "Abhinav", "suffix": "", "affiliation": {}, "email": "abhinav@comp.nus.edu.sg" }, { "first": "", "middle": [], "last": "Kashyap", "suffix": "", "affiliation": { "laboratory": "", "institution": "National University of Singapore", "location": { "country": "Singapore" } }, "email": "" }, { "first": "Laiba", "middle": [], "last": "Mehnaz", "suffix": "", "affiliation": { "laboratory": "MIDAS Lab", "institution": "IIIT-Delhi \u03b3 Independent Researcher \u03b4 Maharaja Agrasen Institute of Technology", "location": { "settlement": "New Delhi", "country": "India" } }, "email": "laibamehnaz@gmail.com" }, { "first": "Bhavitvya", "middle": [], "last": "Malik", "suffix": "", "affiliation": {}, "email": "bhavitvya.malik@gmail.com" }, { "first": "Abdul", "middle": [], "last": "Waheed", "suffix": "", "affiliation": {}, "email": "abdulwaheed1513@gmail.com" }, { "first": "Devamanyu", "middle": [], "last": "Hazarika", "suffix": "", "affiliation": { "laboratory": "", "institution": "National University of Singapore", "location": { "country": "Singapore" } }, "email": "hazarika@comp.nus.edu.sg" }, { "first": "Min-Yen", "middle": [], "last": "Kan", "suffix": "", "affiliation": { "laboratory": "", "institution": "National University of Singapore", "location": { "country": "Singapore" } }, "email": "" }, { "first": "Rajiv", "middle": [], "last": "Ratn", "suffix": "", "affiliation": { "laboratory": "MIDAS Lab", "institution": "IIIT-Delhi \u03b3 Independent Researcher \u03b4 Maharaja Agrasen Institute of Technology", "location": { "settlement": "New Delhi", "country": "India" } }, "email": "rajivratn@iiitd.ac.in" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The robustness of pretrained language models (PLMs) is generally measured using performance drops on two or more domains. However, we do not yet understand the inherent robustness achieved by contributions from different layers of a PLM. We systematically analyze the robustness of these representations layer by layer from two perspectives. First, we measure the robustness of representations by using domain divergence between two domains. We find that i) Domain variance increases from the lower to the upper layers for vanilla PLMs; ii) Models continuously pretrained on domain-specific data (DAPT) (Gururangan et al., 2020) exhibit more variance than their pretrained PLM counterparts; and that iii) Distilled models (e.g.,DistilBERT) also show greater domain variance. Second, we investigate the robustness of representations by analyzing the encoded syntactic and semantic information using diagnostic probes. We find that similar layers have similar amounts of linguistic information for data from an unseen domain.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "The robustness of pretrained language models (PLMs) is generally measured using performance drops on two or more domains. However, we do not yet understand the inherent robustness achieved by contributions from different layers of a PLM. We systematically analyze the robustness of these representations layer by layer from two perspectives. First, we measure the robustness of representations by using domain divergence between two domains. 
We find that i) Domain variance increases from the lower to the upper layers for vanilla PLMs; ii) Models continuously pretrained on domain-specific data (DAPT) (Gururangan et al., 2020) exhibit more variance than their pretrained PLM counterparts; and that iii) Distilled models (e.g.,DistilBERT) also show greater domain variance. Second, we investigate the robustness of representations by analyzing the encoded syntactic and semantic information using diagnostic probes. We find that similar layers have similar amounts of linguistic information for data from an unseen domain.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Pretrained Language Models (PLMs) have improved the downstream performance of many natural language understanding tasks on standard data (Devlin et al., 2019) . 1 Recent works attest to the surprising out-of-the-box robustness of PLMs on out-of-distribution tasks (Hendrycks et al., 2020; Brown et al., 2020; Miller et al., 2020) . These works measure robustness in terms of the performance invariance of PLMs on end tasks like Natural Language Inference (Bowman et al., 2015; Williams et al., 2018) , Sentiment Analysis (Maas et al., 2011) , Question Answering , among others. However, they do not investigate the domain invariance of PLM representations from different layers when presented with data from distinct domains. Studying the invariance of PLM representations has been useful in advancing methods for unsupervised domain adaptation. For example, in building domain adaptation models that explicitly reduce the divergence between layers of a neural network (Long et al., 2015; Shen et al., 2018a) , for data selection (Aharoni and Goldberg, 2020; Ma et al., 2019) et cetera.", "cite_spans": [ { "start": 137, "end": 158, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF14" }, { "start": 161, "end": 162, "text": "1", "ref_id": null }, { "start": 264, "end": 288, "text": "(Hendrycks et al., 2020;", "ref_id": "BIBREF23" }, { "start": 289, "end": 308, "text": "Brown et al., 2020;", "ref_id": "BIBREF11" }, { "start": 309, "end": 329, "text": "Miller et al., 2020)", "ref_id": "BIBREF40" }, { "start": 455, "end": 476, "text": "(Bowman et al., 2015;", "ref_id": "BIBREF10" }, { "start": 477, "end": 499, "text": "Williams et al., 2018)", "ref_id": "BIBREF69" }, { "start": 521, "end": 540, "text": "(Maas et al., 2011)", "ref_id": "BIBREF38" }, { "start": 969, "end": 988, "text": "(Long et al., 2015;", "ref_id": "BIBREF36" }, { "start": 989, "end": 1008, "text": "Shen et al., 2018a)", "ref_id": "BIBREF55" }, { "start": 1030, "end": 1058, "text": "(Aharoni and Goldberg, 2020;", "ref_id": "BIBREF0" }, { "start": 1059, "end": 1075, "text": "Ma et al., 2019)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Given the importance of PLMs, a glass-box study of the internal robustness of PLM representations is overdue. We thus study these representations, dissecting them layer by layer, to uncover their internal contributions in domain adaptation. Firstly, we use the tools of domain divergence and domain invariance, without subscribing to the performance of a model on any end task. The theory of domain adaptation (Ben-David et al., 2010) , shows that reducing H-divergence between two domains results in higher performance in the target domain. 
Many works have since adopted this concept for domain adaptation in NLP (Ganin et al., 2016; Bousmalis et al., 2016) . The aim is to learn representations that are invariant to the domain, while also being discriminative of a particular task. Other divergence measures such as Maximum Mean Discrepancy (MMD) (Gretton et al., 2012a) , Correlational Alignment (CORAL), Central Moment Discrepancy (CMD) (Zellinger et al., 2017) , have been subsequently defined and used (Ramponi and Plank, 2020; Kashyap et al., 2020) . However, our community does not yet understand the inherent domain-invariance of PLM representations, particularly across different layers.", "cite_spans": [ { "start": 410, "end": 434, "text": "(Ben-David et al., 2010)", "ref_id": "BIBREF8" }, { "start": 614, "end": 634, "text": "(Ganin et al., 2016;", "ref_id": "BIBREF16" }, { "start": 635, "end": 658, "text": "Bousmalis et al., 2016)", "ref_id": "BIBREF9" }, { "start": 850, "end": 873, "text": "(Gretton et al., 2012a)", "ref_id": "BIBREF18" }, { "start": 942, "end": 966, "text": "(Zellinger et al., 2017)", "ref_id": "BIBREF73" }, { "start": 1009, "end": 1034, "text": "(Ramponi and Plank, 2020;", "ref_id": "BIBREF48" }, { "start": 1035, "end": 1056, "text": "Kashyap et al., 2020)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We ask key questions concerning domain invariance of PLM representations and find surpris-ing results. First, we consider vanilla PLM representations (e.g., BERT) which are trained on standard data like Wikipedia and Books (Plank, 2016) . We ask: do they exhibit domain invariance when presented with non-standard data like Twitter and biomedical text? ( \u00a73), and are lower layers of PLMs general and invariant compared to higher layers? To answer these, we measure the domain divergence of PLM representations considering standard and non-standard data. We find that the lower layers of PLMs are more domain invariant compared to the upper layers ( \u00a73.2). We find that it is similar in spirit to computer vision models where lower layers of the neural network learn Gabor filters and extract edges irrespective of the image and are more transferable across tasks compared to the upper layers (Yosinski et al., 2014) .", "cite_spans": [ { "start": 223, "end": 236, "text": "(Plank, 2016)", "ref_id": "BIBREF46" }, { "start": 893, "end": 916, "text": "(Yosinski et al., 2014)", "ref_id": "BIBREF71" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "While PLMs improve the performance of tasks on standard data, Gururangan et al. (2020) improve on domain-specific tasks by continuing to pretrain RoBERTa on domain-specific data (DAPT). We thus also ask: what happens to the domain invariance of DAPT models? We find that compared to pretrained RoBERTa, the divergence of DAPT at a given layer either remains the same or increases, providing evidence of their specialization to a domain ( \u00a73.3). Lastly, given that standard PLMs have high training cost, we also consider the distilled model DistilBERT (Sanh et al., 2019) . What happens to the domain invariance in distilled model representations? 
We find that such representations produce more domainspecific representations across layers ( \u00a73.4).", "cite_spans": [ { "start": 551, "end": 570, "text": "(Sanh et al., 2019)", "ref_id": "BIBREF54" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We further analyze the robustness of representations from the perspective of the encoded syntactic and semantic information across domains ( \u00a74). Do contextualized word-level representations encode similar syntactic and semantic information even for unseen domains? We experiment with zeroshot probes where the probes are trained on standard data only. We consider syntactic tasks like POS and NER and a semantic task -coreference resolution and find that the probes indicate similar layers encode similar amount of information, even on non-standard data. In summary, our contributions are as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We investigate the domain invariance of PLMs layer by layer and find that lower layers are more domain invariant than upper layers, which is useful for transfer learning and domain adaptation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Further, we analyze the robustness in terms of the syntactic and semantic information encoded in the representations across unseen domains and find that similar layers have similar amounts of linguistic information, which is a preliminary exposition of their overall performance robustness.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The majority of current PLMs like BERT (Devlin et al., 2019) are transformer (Vaswani et al., 2017) based models. As such we focus on the representations from different transformers. They are unsupervisedly pretrained using masked language-modeling and next sentence prediction objectives, over large amounts of English standard data such as the Books corpus (Zhu et al., 2015) and Wikipedia articles. We consider varitaion in size: two differently sized versions of the BERT model, bert base uncased -a 12 layer model and bert large uncased -a 24 layer model, both trained on lower-cased text, for comparing matters of size in representations. Next, to analyze whether training with larger data scale aids in robustness, we consider RoBERTa (Liu et al., 2019b) , which is similar to BERT, but trained on a magnitude larger standard data. Further, we check the effect of distillation on domaininvariance and hence, consider DistilBERT (Sanh et al., 2019) . Finally, training of models on domain-specific data is known to increase their performance on domain-specific tasks. To analyze the effect of continued fine-tuning on invariance, we consider RoBERTa pretrained on non-standard Biomedical (Gururangan et al., 2020) , and Twitter (Barbieri et al., 2020) domain data. We refer to this as DAPT-biomed and DAPT-tweet, respectively. For our experiments, we use the models hosted on the huggingface-transformer library (Wolf et al., 2020) . Divergence Measures. We consider three different divergence measures that are widely used in the unsupervised domain adaptation literature. Correlation Alignment (CORAL) measures the difference between covariance of features -a second-order moment. 
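For concreteness, a minimal sketch of this measure (not code released with the paper; the function and variable names are illustrative) computes CORAL between two sets of layer representations as the squared Frobenius distance between their feature covariance matrices, following the definition in Appendix A:

```python
import numpy as np

def coral_distance(source: np.ndarray, target: np.ndarray) -> float:
    """CORAL: squared Frobenius distance between the feature covariance matrices.

    source, target: arrays of shape (n_samples, d), e.g. per-layer [CLS] vectors
    extracted from the two domains.
    """
    d = source.shape[1]
    cov_s = np.cov(source, rowvar=False)  # (d, d) covariance of source features
    cov_t = np.cov(target, rowvar=False)  # (d, d) covariance of target features
    return float(np.sum((cov_s - cov_t) ** 2) / (4 * d * d))
```

Here the two arguments would hold, for example, the per-layer [CLS] vectors obtained from the two domains as described in Section 3.1.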
Sun and Saenko (2016) reduce the distributional distance between features for unsupervised domain adaptation (UDA) in computer vision models. In contrast to CORAL, Central moment Discrepancy (CMD) considers higher-order moments of random variables to measure the distributional difference between features, and has been used in both NLP (Peng et al., 2018) and multimodal UDA ). Finally we consider another popular measure of measuring divergence -Maximum Mean Discrepancy (Gretton et al., 2012a) . Specifically, we consider the Multi-Kernel Gaussian variate (MK-MMD-Gaussian), which ensures that the statistical two sample test for the difference in distributions have high power and low test error (Gretton et al., 2012b; Long et al., 2015) . We chose these measures because of their popularity, relevance and inexpensive calculations, and provide their technical details in Appendix A.", "cite_spans": [ { "start": 39, "end": 60, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF14" }, { "start": 77, "end": 99, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF65" }, { "start": 359, "end": 377, "text": "(Zhu et al., 2015)", "ref_id": "BIBREF76" }, { "start": 742, "end": 761, "text": "(Liu et al., 2019b)", "ref_id": null }, { "start": 935, "end": 954, "text": "(Sanh et al., 2019)", "ref_id": "BIBREF54" }, { "start": 1194, "end": 1219, "text": "(Gururangan et al., 2020)", "ref_id": "BIBREF20" }, { "start": 1234, "end": 1257, "text": "(Barbieri et al., 2020)", "ref_id": "BIBREF6" }, { "start": 1418, "end": 1437, "text": "(Wolf et al., 2020)", "ref_id": "BIBREF70" }, { "start": 1689, "end": 1710, "text": "Sun and Saenko (2016)", "ref_id": "BIBREF57" }, { "start": 2026, "end": 2045, "text": "(Peng et al., 2018)", "ref_id": "BIBREF43" }, { "start": 2162, "end": 2185, "text": "(Gretton et al., 2012a)", "ref_id": "BIBREF18" }, { "start": 2389, "end": 2412, "text": "(Gretton et al., 2012b;", "ref_id": "BIBREF19" }, { "start": 2413, "end": 2431, "text": "Long et al., 2015)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "2" }, { "text": "3 How Domain-Invariant are PLM Representations?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "2" }, { "text": "Most of the current techniques in unsupervised domain adaptation explicitly reduce the divergence between different layer representations during training . A common posthoc analysis from such works shows the reduction of domain invariance at different layers. However, they do not pay much heed to the domaininvariance of representations that already exist in such models prior to domain-adapted training.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "2" }, { "text": "Thus, we use domain divergence measures to investigate whether domain-invariance is an inherent property of pretrained transformer models, by the virtue of large-scale self-supervised learning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "2" }, { "text": "We randomly sample 5000 standard data sentences from the Toronto Books corpus (Zhu et al., 2015) , which is similar to the data used to train pretrained language models. We further split them into five groups of 1000 sentences for calculating our divergence measures and report the mean and variance of our results. We consider two non-standard domains. 
The biomedical domain similarly consists of 5000 sentences from publicly available PubMed abstracts 2 and for the Twitter domain, we sample 5000 tweets from the year 2011 made available by the archive team. 3 We follow the same procedure as Nguyen et al. (2020) to preprocess tweets: we use fastText (Joulin et al., 2017) to consider only English tweets and use the emoji package 4 to translate emojis into text strings, normalize all the user mentions to @USER and URLs to HTTPURL. We make forward passes of 1000 samples from one pair of domains (standard-biomedical / standard-twitter) separately through the transformers, obtaining two sets of representations. We then use these to calculate divergence measures. We consider the representations of [CLS] token as the representation of a sentence, as done in other works. Note that we do not fine-tune any of our models on the non-standard data.", "cite_spans": [ { "start": 78, "end": 96, "text": "(Zhu et al., 2015)", "ref_id": "BIBREF76" }, { "start": 561, "end": 562, "text": "3", "ref_id": null }, { "start": 654, "end": 675, "text": "(Joulin et al., 2017)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets and Method", "sec_num": "3.1" }, { "text": "Overall, the divergence measures increase from the lower layers to the upper layers ( Figure 1 ). CORAL and CMD for bert-base-uncased and bert-large-uncased indicate that the divergence strictly increases. Surprisingly, the models trained on standard data extract invariant representations at the lower layers, becoming more domain-specific at the upper layers, irrespective of the domain. Both CORAL and CMD indicate a sharp decrease in divergence for the last layer for all the models. Since they are language models trained to predict the next word, they might encode representations related to the pretraining objective itself (Liu et al., 2019a) . Compared to BERT-base, the divergence measures of BERTlarge, at layers where they can be compared, is lower (c.f. Fig. 1 and Fig. 5 in Appendix B). Even though both the models are trained on a similar amount of data and similar training procedures, it is surprising that BERT-base has lower divergence than BERT-large.", "cite_spans": [ { "start": 631, "end": 650, "text": "(Liu et al., 2019a)", "ref_id": "BIBREF34" } ], "ref_spans": [ { "start": 86, "end": 94, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 767, "end": 773, "text": "Fig. 1", "ref_id": "FIGREF0" }, { "start": 778, "end": 784, "text": "Fig. 5", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Across Layers:", "sec_num": null }, { "text": "But, MK-MMD-Gaussian does not indicate a clear increase in divergence. We attribute this to the divergence measure, since MK-MMD-Gaussian is sensitive to the kernel and choosing an optimal value for its parameters is non-trivial (Gretton et al., 2012b) . We confirm this by plotting the PCA representations of these data points (Figs. 8 to 13 in Appendix E.), which show that the representations from the two domains are interspersed in the lower layers and separated in the upper layers, as done in many previous works (Ganin et al., 2016; Long et al., 2015) . We further quantify this by performing k-means clustering where k = 2 (the number of domains). We evaluate the clusters using Normalized Mutual Information (c.f. Table 1 ). 
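A minimal sketch of this pipeline (illustrative only, assuming the huggingface-transformers and scikit-learn APIs; not the code behind the reported numbers) extracts per-layer [CLS] vectors for each domain, clusters them with k-means (k = 2), and scores the clusters with NMI against the true domain labels. The sentences below are placeholders; the experiments above use 1000 samples per domain.

```python
import numpy as np
import torch
from sklearn.cluster import KMeans
from sklearn.metrics import normalized_mutual_info_score
from transformers import AutoModel, AutoTokenizer

def cls_representations(model, tokenizer, sentences):
    """Per-layer [CLS] vectors: dict layer_index -> (n_sentences, hidden_size) array.

    Index 0 is the embedding layer; 1..12 are the transformer layers of BERT-base.
    """
    model.eval()
    per_layer = None
    with torch.no_grad():
        for sent in sentences:
            enc = tokenizer(sent, return_tensors="pt", truncation=True, max_length=128)
            out = model(**enc, output_hidden_states=True)
            cls_vecs = [h[0, 0].numpy() for h in out.hidden_states]  # [CLS] is position 0
            if per_layer is None:
                per_layer = [[] for _ in cls_vecs]
            for layer, vec in enumerate(cls_vecs):
                per_layer[layer].append(vec)
    return {layer: np.stack(vecs) for layer, vecs in enumerate(per_layer)}

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

# Placeholder sentences for the two domains (the experiments sample 1000 per domain).
std_sents = ["the book was published last year .", "she walked home in the rain ."]
bio_sents = ["the protein was expressed in e. coli .", "patients received 5 mg of the drug daily ."]

std_reps = cls_representations(model, tokenizer, std_sents)
bio_reps = cls_representations(model, tokenizer, bio_sents)

for layer in sorted(std_reps):
    X = np.concatenate([std_reps[layer], bio_reps[layer]])
    domains = np.array([0] * len(std_sents) + [1] * len(bio_sents))  # true domain labels
    clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    print(f"layer {layer}: NMI = {normalized_mutual_info_score(domains, clusters):.3f}")
```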
Clustering quality is higher for upper layers compared to lower layers where representations are interspersed.", "cite_spans": [ { "start": 229, "end": 252, "text": "(Gretton et al., 2012b)", "ref_id": "BIBREF19" }, { "start": 520, "end": 540, "text": "(Ganin et al., 2016;", "ref_id": "BIBREF16" }, { "start": 541, "end": 559, "text": "Long et al., 2015)", "ref_id": "BIBREF36" } ], "ref_spans": [ { "start": 724, "end": 731, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Across Layers:", "sec_num": null }, { "text": "The increasing divergence across layers has plausible implications in making decisions in many scenarios. For example, in deciding the number of layers in the gradual unfreezing of layers in transfer learning (Howard and Ruder, 2018) , in unsupervised domain adaptation where divergence between representations from different layers are reduced (Long et al., 2015) . Recently, Aharoni and Goldberg (2020) show the final transformer layer representations cluster while Ma et al. (2019) consider penultimate layer representations. The high domain divergence of the upper layers is a plausible explanation for the clustering (Figs. 8 to 13 in Appendix E.). Clustering of representations plays a key role in downstream applications, such as data selection for machine translation and curriculum learning, data points in the source domain closest to the target domain are chosen (Axelrod et al., 2011; Moore and Lewis, 2010) .", "cite_spans": [ { "start": 209, "end": 233, "text": "(Howard and Ruder, 2018)", "ref_id": "BIBREF27" }, { "start": 345, "end": 364, "text": "(Long et al., 2015)", "ref_id": "BIBREF36" }, { "start": 377, "end": 404, "text": "Aharoni and Goldberg (2020)", "ref_id": "BIBREF0" }, { "start": 468, "end": 484, "text": "Ma et al. (2019)", "ref_id": "BIBREF37" }, { "start": 874, "end": 896, "text": "(Axelrod et al., 2011;", "ref_id": "BIBREF4" }, { "start": 897, "end": 919, "text": "Moore and Lewis, 2010)", "ref_id": "BIBREF41" } ], "ref_spans": [], "eq_spans": [], "section": "Across Layers:", "sec_num": null }, { "text": "BERT vs. RoBERTa: Compared to BERT, RoBERTa has uniform divergence across layers (c.f. Fig. 1 ). RoBERTa is similar to BERT, but a major difference is the amount of pre-training data used (one magnitude; 160GB vs. 16GB). We speculate that the domain-invariance is because the pretraining data is an unintended mixture of different domains. Recent works have shown the impact of training models with large and diverse datasets on the robustness of image classification models (Taori et al., 2020 ) and text classification models (Tu et al., 2020) with similar trends observed where RoBERTa is more robust.", "cite_spans": [ { "start": 475, "end": 494, "text": "(Taori et al., 2020", "ref_id": "BIBREF60" }, { "start": 528, "end": 545, "text": "(Tu et al., 2020)", "ref_id": null } ], "ref_spans": [ { "start": 87, "end": 93, "text": "Fig. 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Across Layers:", "sec_num": null }, { "text": "To create domain-specific PLM, the simplest methods train models from scratch on domainspecific data like scientific publications (Beltagy et al., 2019) , BioBERT , Clini-calBERT (Alsentzer et al., 2019) among others. In contrast, instead of pretraining from scratch, recent work shows impressive benefits of continuing to pretrain on domain-specific data -termed domain adaptive pretraining (DAPT) (Gururangan et al., 2020). 
Although there are improvements on domain-specific end tasks, the domain-invariance of these representations has not been analyzed. From this point on, we consider only the CORAL and CMD divergence measures, based on our observations in the previous section. Figure 2 shows that the divergence across the layers for DAPT-biomed is the same as or higher than that of RoBERTa (c.f. Fig. 6 in Appendix C for DAPT-twitter). The main aim of continued pretraining is to make the models more domain-specific, so we expect the representations to diverge from the standard representations after such training. DAPT representations possibly serve as good initial representations for fine-tuning on domain-specific end tasks like natural language inference and text classification (Hao et al., 2019) . Teasing apart the benefits of domain-specific pretraining from those of task-specific fine-tuning remains an open question and warrants careful attention.", "cite_spans": [ { "start": 130, "end": 152, "text": "(Beltagy et al., 2019)", "ref_id": "BIBREF7" }, { "start": 179, "end": 203, "text": "(Alsentzer et al., 2019)", "ref_id": "BIBREF3" }, { "start": 1173, "end": 1191, "text": "(Hao et al., 2019)", "ref_id": "BIBREF21" } ], "ref_spans": [ { "start": 668, "end": 676, "text": "Figure 2", "ref_id": "FIGREF1" }, { "start": 779, "end": 785, "text": "Fig. 6", "ref_id": null } ], "eq_spans": [], "section": "What happens to the domain-invariance of DAPT models?", "sec_num": "3.3" }, { "text": "Knowledge Distillation (Hinton et al., 2015) has been successfully used to reduce the size and inference time of PLMs. Here, a smaller student network mimics the output of a larger teacher network. We consider the DistilBERT model. Fig. 3 compares the divergence measures of DistilBERT and BERT for the standard and biomedical domain pair (c.f. Fig. 7 in Appendix D for the comparison with the Twitter domain). DistilBERT has half as many layers as BERT. At comparable layers, DistilBERT always has higher divergence values for both CMD and CORAL. Sanh et al. (2019) show that the distillation loss, which mimics the teacher's output, and the cosine embedding loss, which aligns the student and teacher hidden-state vectors, are the major contributors to the student's performance. Yet, we find that DistilBERT still shows greater domain variance, which may affect downstream tasks like text classification. Although a few models (Jiao et al., 2020; Sanh et al., 2019) reduce some notion of geometric distance between the intermediate representations of the student and the teacher, this does not guarantee that the full linguistic knowledge and domain-invariance of the teacher are transferred to the student model. Recent work in NLP has tried to incorporate richer information from teacher networks using contrastive learning (Tian et al., 2020; Sun et al., 2020) and by reducing the Earth Mover's distance between the hidden representations in the transformer architecture. Related computer vision work also attempts to impart adversarial robustness to the student network (Goldblum et al., 2020) . The benefits of such enhanced distillation techniques for model robustness remain an under-explored area.", "cite_spans": [ { "start": 23, "end": 44, "text": "(Hinton et al., 2015)", "ref_id": "BIBREF26" }, { "start": 583, "end": 601, "text": "Sanh et al. 
(2019)", "ref_id": "BIBREF54" }, { "start": 944, "end": 963, "text": "(Jiao et al., 2020;", "ref_id": "BIBREF29" }, { "start": 964, "end": 982, "text": "Sanh et al., 2019)", "ref_id": "BIBREF54" }, { "start": 1346, "end": 1365, "text": "(Tian et al., 2020;", "ref_id": "BIBREF63" }, { "start": 1366, "end": 1383, "text": "Sun et al., 2020)", "ref_id": "BIBREF58" }, { "start": 1592, "end": 1615, "text": "(Goldblum et al., 2020)", "ref_id": "BIBREF17" } ], "ref_spans": [ { "start": 231, "end": 237, "text": "Fig. 3", "ref_id": "FIGREF2" }, { "start": 365, "end": 371, "text": "Fig. 7", "ref_id": null } ], "eq_spans": [], "section": "What happens to the domain-invariance after distillation?", "sec_num": "3.4" }, { "text": "How much linguistic information do representations from pretrained language models still encode for data from a different domain? Here, we evaluate the robustness of representations in bert-base-uncased. Do word-level repre- sentations from PLMs encode similar levels of linguistic knowledge irrespective of the domain?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Robustness of Linguistic Information", "sec_num": "4" }, { "text": "Edge Probes: Edge probes (Tenney et al., 2019b) measure the magnitude of linguistic information present in contextualized word representations.", "cite_spans": [ { "start": 25, "end": 47, "text": "(Tenney et al., 2019b)", "ref_id": "BIBREF62" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets and Method", "sec_num": "4.1" }, { "text": "Representations of spans from a specific layer are passed through a shallow, multi-layer perceptron which predict their linguistic label. The performance of the probes indicates the magnitude of linguistic information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Datasets and Method", "sec_num": "4.1" }, { "text": "To evaluate the linguistic information in BERT representations regardless of the domain, we train probes on source domain data and test on a heldout test dataset from the target domain. Since the non-standard data is unseen during training, this is a form of zero-shot probing, as also experimented in (Ravichander et al., 2020b) . Training separate probes on every domain would yield inaccurate information about the linguistic information in the representations. The probes themselves may learn the linguistic task and overfit on the target domain data which can serve as a confounding factor.", "cite_spans": [ { "start": 302, "end": 329, "text": "(Ravichander et al., 2020b)", "ref_id": "BIBREF50" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets and Method", "sec_num": "4.1" }, { "text": "A performance drop in probing performance between domains should not be interpreted as an absence of linguistic information. Other confounding factors like distribution difference (Recht et al., 2019; Miller et al., 2020 ) may be responsible. 
We are interested in the underlying pattern, and one has to exercise caution in interpreting the absolute performance numbers.", "cite_spans": [ { "start": 180, "end": 200, "text": "(Recht et al., 2019;", "ref_id": "BIBREF51" }, { "start": 201, "end": 220, "text": "Miller et al., 2020", "ref_id": "BIBREF40" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets and Method", "sec_num": "4.1" }, { "text": "We chose three tasks from the suite of tasks defined by (Tenney et al., 2019b) , where POS tagging (part-of-speech tagging), and NER (Named entity recognition) are considered syntactic, and where Coreference resolution (Coref) is considered a semantic task. We chose these tasks guided by the availability of similar datasets in both domains. For all our experiments involving probing, we use the jiant framework . Data: Following (Tenney et al., 2019b) we use the OntoNotes 5.0 corpus (Weischedel, Ralph et al., 2013) for probing. Since they are from newswire and web text, which is similar to the pretraining corpus of BERT (Devlin et al., 2019) , we consider this dataset as standard data (source domain). We choose Twitter to represent non-standard data (target domain) for the probing task since our previous experiments showed a greater divergence, and thus are significantly different from the pretraining corpus used in BERT.", "cite_spans": [ { "start": 56, "end": 78, "text": "(Tenney et al., 2019b)", "ref_id": "BIBREF62" }, { "start": 486, "end": 518, "text": "(Weischedel, Ralph et al., 2013)", "ref_id": null }, { "start": 626, "end": 647, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets and Method", "sec_num": "4.1" }, { "text": "For POS tagging, we use the dataset described by (Derczynski et al., 2013) . We remove the following POS tags from the dataset: USR, URL, HT, RT, \"(\", and \")\", to normalize the labels across the domains. For NER, we use the dataset released for the shared task of the Workshop on Noisy Usergenerated Text (W-NUT) (Baldwin et al., 2015) . For coreference resolution, we use the dataset presented in (Akta\u015f et al., 2018) , whose annotations were later modified by (Akta\u015f et al., 2020) so that they were conceptually parallel to OntoNotes 5.0 corpus (Weischedel, Ralph et al., 2013) . The size of the datasets across train, development and test splits were kept similar for both the domains (c.f. Appendix F).", "cite_spans": [ { "start": 49, "end": 74, "text": "(Derczynski et al., 2013)", "ref_id": "BIBREF13" }, { "start": 313, "end": 335, "text": "(Baldwin et al., 2015)", "ref_id": "BIBREF5" }, { "start": 398, "end": 418, "text": "(Akta\u015f et al., 2018)", "ref_id": "BIBREF1" }, { "start": 462, "end": 482, "text": "(Akta\u015f et al., 2020)", "ref_id": "BIBREF2" }, { "start": 547, "end": 579, "text": "(Weischedel, Ralph et al., 2013)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Datasets and Method", "sec_num": "4.1" }, { "text": "Even though the probes had not seen examples from the target domain, we observe from Fig. 4 that the best performing layer is the same across domains for each task. The F1 scores peak at the same layers for both the domains, across all tasks. In both domains, the F1 for the task of POS tagging peaks at layer 5; for NER and Coref, at layer 10 and 8, respectively. 
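To make the zero-shot probing setup concrete, the sketch below trains a shallow probe on frozen bert-base-uncased representations from the source domain only and evaluates it on the target domain. It is a simplified token-level approximation of the span-based edge probes used here (which rely on the jiant framework); the toy data and helper names are illustrative, while the optimizer settings mirror the probe-training details reported in the appendix (Adam, learning rate 1e-4, 3 epochs).

```python
import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer

LAYER = 5  # probe a single, fixed layer of bert-base-uncased

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # fast tokenizer (for word_ids)
encoder = AutoModel.from_pretrained("bert-base-uncased")
encoder.eval()  # the encoder stays frozen; only the probe is trained

def word_features(tokens):
    """Embed a pre-tokenized sentence; return one vector per word (first sub-word)."""
    enc = tokenizer(tokens, is_split_into_words=True, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = encoder(**enc, output_hidden_states=True).hidden_states[LAYER][0]
    keep, seen = [], set()
    for pos, wid in enumerate(enc.word_ids()):
        if wid is not None and wid not in seen:  # skip special tokens and later sub-words
            keep.append(pos)
            seen.add(wid)
    return hidden[keep]  # (n_words, hidden_size)

# Toy examples standing in for OntoNotes (source) and Twitter (target) POS data.
source = [(["the", "cat", "sat"], ["DET", "NOUN", "VERB"])]
target = [(["lol", "cats", "sleep"], ["INTJ", "NOUN", "VERB"])]
tagset = sorted({t for _, tags in source + target for t in tags})  # shared label set

probe = nn.Sequential(nn.Linear(encoder.config.hidden_size, 256), nn.ReLU(),
                      nn.Linear(256, len(tagset)))
optim = torch.optim.Adam(probe.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for _ in range(3):  # train on the source domain only
    for tokens, tags in source:
        loss = loss_fn(probe(word_features(tokens)),
                       torch.tensor([tagset.index(t) for t in tags]))
        optim.zero_grad()
        loss.backward()
        optim.step()

correct, total = 0, 0
with torch.no_grad():  # zero-shot evaluation on the unseen target domain
    for tokens, tags in target:
        pred = probe(word_features(tokens)).argmax(-1)
        correct += (pred == torch.tensor([tagset.index(t) for t in tags])).sum().item()
        total += len(tags)
print("target-domain accuracy:", correct / total)
```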
Considering the knowledge discovered by the probes, it can be seen that similar layers are the most important for syntactic and semantic tasks across domains.", "cite_spans": [], "ref_spans": [ { "start": 85, "end": 91, "text": "Fig. 4", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Results", "sec_num": "4.2" }, { "text": "Concerning which part of the model encodes syntactic information required for POS and NER, we observe that the middle layers perform the best for both tasks, invariant of domain. This result is consistent with the results reported for nondomain adaptation work (Liu et al., 2019a; Jawahar et al., 2019) . For Coref, the upper layers perform better on the task for the source domain. This indicates that the models store the information required for Coref (Liu et al., 2019a) , but that the lower layers perform better when it comes to the target domain. We speculate that this is due to the nature of Twitter-coref dataset (target domain). For tasks like coreference resolution, there is a need for the presence of semantic information to identify the co-referring entities. But as tweets are naturally shorter, they contain co-referring entities that are close to each other, and do not require long-range information. This might make it easier for BERT models to use syntactic information from the lower layers to perform well on the target domain dataset.", "cite_spans": [ { "start": 261, "end": 280, "text": "(Liu et al., 2019a;", "ref_id": "BIBREF34" }, { "start": 281, "end": 302, "text": "Jawahar et al., 2019)", "ref_id": "BIBREF28" }, { "start": 455, "end": 474, "text": "(Liu et al., 2019a)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4.2" }, { "text": "We note that Merchant et al. (2020) show PLM representations do not experience catastrophic forgetting when fine-tuned on different end tasks such as MNLI, SQuAD and dependency parsing. With the limited capabilities that probes have, the results of this work show that similar information is being encoded for a task in similar layers without fine-tuning on any domain-specific data. This indicates that PLM representations might encode similar linguistic information across domains to begin with, potentially aiding performance on domain-specific end tasks.", "cite_spans": [ { "start": 13, "end": 35, "text": "Merchant et al. (2020)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4.2" }, { "text": "Our analysis using probes for different domains is intended to be an initial exploration of this topic. We inherit all the limitations of the probing classifiers highlighted by recent works (Tenney et al., 2019a; Voita and Titov, 2020; Pimentel et al., 2020) . Since probing trains shallow models, there exists a possibility that the performance confounds with the models learning the task rather than being diagnostic about the linguistic power of representations. It also does not indicate that the model uses this information effectively (Hewitt and Manning, 2019) which requires further analysis. We also consider only one target domain -Twitter -and analyze bert-base-uncased for our probes. The availability and varying characteristics of the dataset across domains dictates our choice. 
For example, compared to standard coreference, biomedical text exhibits co-referring terms across sentences in long documents.", "cite_spans": [ { "start": 190, "end": 212, "text": "(Tenney et al., 2019a;", "ref_id": "BIBREF61" }, { "start": 213, "end": 235, "text": "Voita and Titov, 2020;", "ref_id": "BIBREF66" }, { "start": 236, "end": 258, "text": "Pimentel et al., 2020)", "ref_id": null }, { "start": 541, "end": 567, "text": "(Hewitt and Manning, 2019)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Limitations", "sec_num": "5" }, { "text": "6 Related Work", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Limitations", "sec_num": "5" }, { "text": "Pretrained language models (PLMs) perform well on a wide range of NLP tasks, but they do not generalize well when the test distribution differs from the training distribution. A robust model must adapt to the shift in distributions (Quionero-Candela et al., 2009) and generalize to out-of-distribution (OOD) examples. Hendrycks et al. (2020) study the OOD robustness of PLMs, finding that their performance drop is substantially smaller than that of shallow LSTM and CNN counterparts. Much of the literature on PLM robustness uses the notion of performance drop in a new target domain (Hendrycks et al., 2020; Tu et al., 2020; Miller et al., 2020) . However, analyzing the robustness and invariance of the representations under data from different domains or adversarial examples (Zhu et al., 2020) has not received much attention thus far in domain adaptation.", "cite_spans": [ { "start": 206, "end": 236, "text": "(Quionero-Candela et al., 2009", "ref_id": "BIBREF47" }, { "start": 293, "end": 316, "text": "(Hendrycks et al., 2020", "ref_id": "BIBREF23" }, { "start": 557, "end": 581, "text": "(Hendrycks et al., 2020;", "ref_id": "BIBREF23" }, { "start": 582, "end": 598, "text": "Tu et al., 2020;", "ref_id": null }, { "start": 599, "end": 619, "text": "Miller et al., 2020)", "ref_id": "BIBREF40" }, { "start": 752, "end": 770, "text": "(Zhu et al., 2020)", "ref_id": "BIBREF75" } ], "ref_spans": [], "eq_spans": [], "section": "NLP Robustness", "sec_num": "6.1" }, { "text": "Concerning the robustness of linguistic information stored in representations, Merchant et al. (2020) analyze the syntactic and semantic information preserved by PLMs, both before and after fine-tuning the models on task-specific data. Similarly, Tamkin et al. (2020) analyze the role of different layers in transfer learning on end tasks. Different from their study, we are interested in the intrinsic invariance of the PLM representations under data from different domains.", "cite_spans": [ { "start": 79, "end": 101, "text": "Merchant et al. (2020)", "ref_id": "BIBREF39" }, { "start": 247, "end": 267, "text": "Tamkin et al. (2020)", "ref_id": "BIBREF59" } ], "ref_spans": [], "eq_spans": [], "section": "NLP Robustness", "sec_num": "6.1" }, { "text": "For unsupervised domain adaptation, a popular method is to use adversarial training between a domain classifier and a task classifier (DANN) (Ganin et al., 2016) . Compared to DANN, where domain-specific peculiarities are lost, Bousmalis et al. (2016) introduce a shared-private network in which both domain-specific and domain-invariant representations are learned. Another method to obtain invariant representations is to explicitly reduce the domain divergence between different layers of a neural network (Miller et al., 2020; Sun and Saenko, 2016; Shen et al., 2018b,a) . 
For a complete treatment on UDA refer to (Ramponi and Plank, 2020) and for a review on divergence measure refer to (Kashyap et al., 2020) . Considering the inherent domaininvariance of representations is thus important for UDA models.", "cite_spans": [ { "start": 131, "end": 151, "text": "(Ganin et al., 2016)", "ref_id": "BIBREF16" }, { "start": 217, "end": 241, "text": "(Bousmalis et al., 2016)", "ref_id": "BIBREF9" }, { "start": 525, "end": 546, "text": "(Miller et al., 2020;", "ref_id": "BIBREF40" }, { "start": 547, "end": 568, "text": "Sun and Saenko, 2016;", "ref_id": "BIBREF57" }, { "start": 569, "end": 590, "text": "Shen et al., 2018b,a)", "ref_id": null }, { "start": 634, "end": 659, "text": "(Ramponi and Plank, 2020)", "ref_id": "BIBREF48" }, { "start": 708, "end": 730, "text": "(Kashyap et al., 2020)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Unsupervised Domain Adaptation", "sec_num": "6.2" }, { "text": "As pretrained transformer models provide improvements on end tasks, understanding their internals and knowledge they encode has become increasingly important. For a review on efforts to understand pretrained transformers, see Rogers et al. (2021) . Probing is a popular method to understand the linguistic information stored in continuous representations (Conneau et al., 2018) . Tenney et al. (2019a,b) use probes to understand the linguistic information that the representations capture.", "cite_spans": [ { "start": 226, "end": 246, "text": "Rogers et al. (2021)", "ref_id": "BIBREF52" }, { "start": 355, "end": 377, "text": "(Conneau et al., 2018)", "ref_id": "BIBREF12" }, { "start": 380, "end": 403, "text": "Tenney et al. (2019a,b)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Probing", "sec_num": "6.3" }, { "text": "Recent work has questioned the premise of using a probe. Hewitt and Liang (2019) propose control tasks to ensure that the performance of probe is diagnostic about the linguistic information and is not because of learning the task. (Pimentel et al., 2020) utilize information theory to show that contextual representations contain similar amounts of information as lexical tokens. They suggest that better performing probes are increasingly accurate of detecting linguistic information regardless of their complexity and propose ease of probing as an alternative solution. This is similar to minimum description length suggested by Voita and Titov (2020) . Contrary to previous works Ravichander et al. (2020a) ; Elazar et al. (2020) argue that presence of linguistic information does not guarantee its utility for end tasks. In contrast to these works that consider only a single domain, we provide experiments to diagnose cross domain linguistic information using probes.", "cite_spans": [ { "start": 57, "end": 80, "text": "Hewitt and Liang (2019)", "ref_id": "BIBREF24" }, { "start": 231, "end": 254, "text": "(Pimentel et al., 2020)", "ref_id": null }, { "start": 631, "end": 653, "text": "Voita and Titov (2020)", "ref_id": "BIBREF66" }, { "start": 683, "end": 709, "text": "Ravichander et al. (2020a)", "ref_id": "BIBREF49" }, { "start": 712, "end": 732, "text": "Elazar et al. (2020)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Probing", "sec_num": "6.3" }, { "text": "We consider domain robustness from the perspective of domain-invariance of pretrained language model (PLM) representations. We observe that the lower layers of PLMs are generally domaininvariant. 
We also find that domain variance increases on continuously pretrained (DAPT) models and distilled models (DistilBERT). We have seen that RoBERTa is robust, possibly by virtue of training with more data. Domain adaptation methods using it should be careful in assessing the empirical benefits of their methods. As distillation becomes a mainstay method in NLP for retaining accuracy and saving training and inference costs on large models, considering distillation techniques to retain domain invariance and broadly applicable linguistic properties is of interest to the community.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "Considering the inherent domain-invariance of PLM representations at various layers is possibly useful in understanding their performance on out of domain distribution data (Hendrycks et al., 2020) and for domain adaptation in general. For example, since we understand that the lower layers of BERT are domain-invariant compared to higher layers, we can freeze them during domain adaptation (Peters et al., 2019; Shen et al., 2018a) or drop them to make the models smaller and more efficient (Sajjad et al., 2020) . In the future, we will incorporate this information for domain adaptation of models.", "cite_spans": [ { "start": 173, "end": 197, "text": "(Hendrycks et al., 2020)", "ref_id": "BIBREF23" }, { "start": 391, "end": 412, "text": "(Peters et al., 2019;", "ref_id": "BIBREF44" }, { "start": 413, "end": 432, "text": "Shen et al., 2018a)", "ref_id": "BIBREF55" }, { "start": 492, "end": 513, "text": "(Sajjad et al., 2020)", "ref_id": "BIBREF53" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "We used edge probes (Tenney et al., 2019b) to identify the linguistic information in representations of data from different domains. One has to note that edge probes consider span representations for probing and not the representations of the entire sentence. To answer Is there any correlation between domain-invariance of a sentence and the amount of linguistic information contained in them?, we should consider sentence-level probes, similar to (Conneau et al., 2018; Jawahar et al., 2019 ) But, we are restricted by the lack of sentence-level probing data for different domains. We believe that this is a ripe area for future work.", "cite_spans": [ { "start": 20, "end": 42, "text": "(Tenney et al., 2019b)", "ref_id": "BIBREF62" }, { "start": 449, "end": 471, "text": "(Conneau et al., 2018;", "ref_id": "BIBREF12" }, { "start": 472, "end": 492, "text": "Jawahar et al., 2019", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "Maximum Mean Discrepancy (MMD): MMD is a non-parametric method to estimate the distance between distributions based on Reproducing Kernel Hilbert Spaces (RKHS). Given two random variables X = {x 1 , x 2 , ..., x m } and Y = {y 1 , y 2 , ...., y n } that are drawn from distributions P and Q, the empirical estimate of the distance between distribution P and Q is given by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Divergence Measures", "sec_num": null }, { "text": "M M D(X, Y ) = 1 m m i=1 \u03c6(x i ) \u2212 1 n n i=1 \u03c6(y i ) H", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Divergence Measures", "sec_num": null }, { "text": "(1) Here \u03c6 : X \u2192 H are nonlinear mappings or of the samples to a feature representation in a RKHS, called kernels. 
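As a concrete illustration (not code released with the paper), expanding the squared RKHS norm gives MMD^2(X, Y) = E[k(x, x')] + E[k(y, y')] - 2 E[k(x, y)], which can be estimated directly from a kernel matrix over the pooled samples. The sketch below uses the uniform Gaussian-kernel mixture and the median-heuristic bandwidth grid described later in this appendix:

```python
import numpy as np

def gaussian_mk_mmd(X: np.ndarray, Y: np.ndarray) -> float:
    """Biased empirical estimate of squared MMD with a uniform mixture of Gaussian kernels.

    X: (m, d) samples from P, Y: (n, d) samples from Q, e.g. [CLS] vectors of two domains.
    Bandwidths span 2**-8 * gamma_s ... 2**8 * gamma_s around the median heuristic gamma_s.
    """
    Z = np.concatenate([X, Y])
    sq_dists = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    gamma_s = np.median(sq_dists[sq_dists > 0])                # median heuristic
    kernels = [np.exp(-sq_dists / (gamma_s * 2.0 ** p)) for p in range(-8, 9)]
    K = np.mean(kernels, axis=0)                               # uniform mixture, lambda_i = 1/m
    m, n = len(X), len(Y)
    k_xx, k_yy, k_xy = K[:m, :m], K[m:, m:], K[:m, m:]
    return float(k_xx.mean() + k_yy.mean() - 2.0 * k_xy.mean())

# Toy usage: two Gaussian clouds with shifted means give a clearly positive estimate.
rng = np.random.default_rng(0)
print(gaussian_mk_mmd(rng.normal(size=(100, 8)), rng.normal(loc=1.0, size=(100, 8))))
```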
In this work, we map the contextual word representations of the text to RKHS. Various kernels can be used for this purpose. Some of the kernels are given below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Divergence Measures", "sec_num": null }, { "text": "Rational Quadratic Kernel", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Divergence Measures", "sec_num": null }, { "text": "\u03c6(x, y) = 1 + 1 2\u03b1 (x \u2212 y) T \u0398 \u22122 (x \u2212 y) \u2212\u03b1 Energy \u03c6(x, y) = \u2212 x \u2212 y 2 Gaussian \u03c6(x, y) = exp(\u2212 x \u2212 y 2 2 \u03b3 ) Laplacian \u03c6(x, y) = exp(\u2212 x \u2212 y 2 \u03c3 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Divergence Measures", "sec_num": null }, { "text": "In this work we use a mixture of Gaussian Kernels rather than a single kernel which is known to be more stable than just using a single kernel. The mixture of kernels are given by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Divergence Measures", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "K = m i=1 \u03bb i k i | m i=1 \u03bb i = 1", "eq_num": "(2)" } ], "section": "A Divergence Measures", "sec_num": null }, { "text": "We set \u03bb i to be 1 m . We follow (Long et al., 2015) and use the Gaussian kernel. We calculate a initial value \u03b3 s and set it to the median pairwise distances between two samples, also known as median heuristic. For every kernel k i , the value of \u03b3 is set from 2 \u22128 \u03b3 s and 2 8 \u03b3 s , increasing it by a multiple of 2.", "cite_spans": [ { "start": 33, "end": 52, "text": "(Long et al., 2015)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "A Divergence Measures", "sec_num": null }, { "text": "Correlation Alignment (CORAL): Correlation alignment is the distance between the secondorder moment of the source and target samples. If d is the representation dimension, F represents Frobenius norm and Cov S , Cov T is the covariance matrix of the source and target samples, then CORAL is defined as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Divergence Measures", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "D CORAL = 1 4d 2 Cov S \u2212 Cov T 2 F", "eq_num": "(3)" } ], "section": "A Divergence Measures", "sec_num": null }, { "text": "Central Moment Discrepancy (CMD): Central Moment Discrepancy is another metric that measures the distance between source and target distributions. It not only considers the first moment and second moment, but also other higher-order moments. While MMD operates in a projected space, CMD operates in the representation space. 
If P and Q are two probability distributions and X = {X 1 , X 2 , ...., X N } and Y = {Y 1 , Y 2 , ...., Y N } are random vectors that are independent and identically distributed from P and Q and every component of the vector is bounded by [a, b] , CMD is then defined by", "cite_spans": [ { "start": 565, "end": 571, "text": "[a, b]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "A Divergence Measures", "sec_num": null }, { "text": "CM D(P, Q) = 1 |b \u2212 a| E(X) \u2212 E(Y ) 2 + \u221e k=2 1 |b \u2212 a| k c k (X) \u2212 c k (Y ) 2 (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Divergence Measures", "sec_num": null }, { "text": "where E(X) is the expectation of X and c k is the k \u2212 th order central moment which is defined as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Divergence Measures", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "c k (X) = E N i=1 (X i \u2212 E(X i )) r i", "eq_num": "(5)" } ], "section": "A Divergence Measures", "sec_num": null }, { "text": "and r 1 + r 2 + r N = k and r 1 ....r N \u2265 0 B Domain Divergence Plots Fig. 5 shows the domain divergence measures for bert-base-uncased and bert-large-uncased models only for greater quality. We consider only the CORAL and CMD divergence measures. Even though both the models are trained on similar amounts of data, with similar training procedures, at comparable layers bert-large-uncasedmodels are more domain-invariant compared to bert-base-uncased. ", "cite_spans": [], "ref_spans": [ { "start": 70, "end": 76, "text": "Fig. 5", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "A Divergence Measures", "sec_num": null }, { "text": "Divergence plots for DAPT-twitter compared with its roberta-base counterpart are shown in Fig. 6 . Here we consider the CORAL and CMD divergence measures. The plots show that for DAPT-twitter the divergence measures are either the same or more than their roberta-base counterpart which indicates their specialization for a domain.", "cite_spans": [], "ref_spans": [ { "start": 90, "end": 96, "text": "Fig. 6", "ref_id": null } ], "eq_spans": [], "section": "C DAPT Twitter Divergence Plots", "sec_num": null }, { "text": "D DistilBERTvs BERT for twitter Fig. 7 shows the comparison between Dis-tilBERT -the student network and the bert-base-uncased which is the teacher network in knowledge distillation (Hinton et al., 2015) . We consider the CORAL and CMD divergence measures. . The statistics about the datasets used for training the probes are consolidated in Table 2 .", "cite_spans": [ { "start": 182, "end": 203, "text": "(Hinton et al., 2015)", "ref_id": "BIBREF26" } ], "ref_spans": [ { "start": 32, "end": 38, "text": "Fig. 7", "ref_id": null }, { "start": 342, "end": 349, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "C DAPT Twitter Divergence Plots", "sec_num": null }, { "text": "For all the three tasks (POS, NER and coreference resolution) we train the probing classifiers on the source domain for 3 epochs. We use Adam as the optimizer with a learning rate of 1e-4, and a batch size of 32. We also evaluate on the validation dataset every 1000 steps, and halve the learning rate if no improvement is seen in 5 validations. The rest of the hyperparameters are the Figure 6 : Comparing divergences for RoBERTa vs DAPT-twitter (Barbieri et al., 2020) . 
We consider the CORAL and CMD divergence measures for standard vs twitter samples. The plots are shown for CORAL and CMD divergence measures.", "cite_spans": [ { "start": 447, "end": 470, "text": "(Barbieri et al., 2020)", "ref_id": "BIBREF6" } ], "ref_spans": [ { "start": 386, "end": 394, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "C DAPT Twitter Divergence Plots", "sec_num": null }, { "text": "(a) CORAL (b) CMD Figure 7 : BERT vs DistilBERT models for standard vs twitter domains. CORAL and CMD divergence measures for standard vs twitter samples. Two encoders are considered here. bert-base-uncased which is the teacher and distilbert-base which is the student of knowledge distillation. The domain-invariance of distilbert-base is always larger than it's teacher. same as defined by (Tenney et al., 2019a) in their edge probing experiments. Figure 13 : PCA plots for representation of roberta-base for the pair of standard and the twitter domain for different layers. The PCA representations compared to the bert-base-uncased and bert-large-uncased models, the representations are interspersed across all the layers Figure 14 : PCA plots for representation of distilbert-base for the pair of standard and the biomedical domain for different layers. The lower layers are still interspersed, with a clearer separation in the higher layers.", "cite_spans": [ { "start": 392, "end": 414, "text": "(Tenney et al., 2019a)", "ref_id": "BIBREF61" } ], "ref_spans": [ { "start": 18, "end": 26, "text": "Figure 7", "ref_id": null }, { "start": 450, "end": 459, "text": "Figure 13", "ref_id": "FIGREF0" }, { "start": 725, "end": 734, "text": "Figure 14", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "C DAPT Twitter Divergence Plots", "sec_num": null }, { "text": "We can see a corresponding increase in the divergence measures for these layers. Figure 15 : PCA plots for representation of distilbert-base for the pair of standard and the biomedical domain for different layers. The lower layers are still interspersed, with a clearer separation in the higher layers.", "cite_spans": [], "ref_spans": [ { "start": 81, "end": 90, "text": "Figure 15", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "C DAPT Twitter Divergence Plots", "sec_num": null }, { "text": "We can see a corresponding increase in the divergence measures for these layers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C DAPT Twitter Divergence Plots", "sec_num": null }, { "text": "We borrow the term standard data from(Plank, 2016) to refer to news and web-like text and non-standard data to refer to other text like biomedical and Twitter.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://www.nlm.nih.gov/databases/download/ pubmed medline.html 3 https://archive.org/details/twitterstream 4 https://pypi.org/project/emoji", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The authors would like to thank Yisong Miao for reading a draft of this paper and providing insightful comments. 
We would also like to acknowledge the support of the NExT research grant funds, supported by the National Research Foundation, Prime Ministers Office, Singapore under its IRC @ SG Funding Initiative, and to gratefully acknowledge the support of NVIDIA Corporation with the donation of the GeForce GTX Titan X GPU used in this research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Unsupervised domain clusters in pretrained language models", "authors": [ { "first": "Roee", "middle": [], "last": "Aharoni", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "7747--7763", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.692" ] }, "num": null, "urls": [], "raw_text": "Roee Aharoni and Yoav Goldberg. 2020. Unsuper- vised domain clusters in pretrained language mod- els. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7747-7763, Online. Association for Compu- tational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Anaphora resolution for Twitter conversations: An exploratory study", "authors": [ { "first": "Berfin", "middle": [], "last": "Akta\u015f", "suffix": "" }, { "first": "Tatjana", "middle": [], "last": "Scheffler", "suffix": "" }, { "first": "Manfred", "middle": [], "last": "Stede", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the First Workshop on Computational Models of Reference, Anaphora and Coreference", "volume": "", "issue": "", "pages": "1--10", "other_ids": { "DOI": [ "10.18653/v1/W18-0701" ] }, "num": null, "urls": [], "raw_text": "Berfin Akta\u015f, Tatjana Scheffler, and Manfred Stede. 2018. Anaphora resolution for Twitter conversa- tions: An exploratory study. In Proceedings of the First Workshop on Computational Models of Reference, Anaphora and Coreference, pages 1-10, New Orleans, Louisiana. Association for Computa- tional Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Adapting coreference resolution to Twitter conversations", "authors": [ { "first": "Berfin", "middle": [], "last": "Akta\u015f", "suffix": "" }, { "first": "Veronika", "middle": [], "last": "Solopova", "suffix": "" }, { "first": "Annalena", "middle": [], "last": "Kohnert", "suffix": "" }, { "first": "Manfred", "middle": [], "last": "Stede", "suffix": "" } ], "year": 2020, "venue": "Findings of the Association for Computational Linguistics: EMNLP 2020", "volume": "", "issue": "", "pages": "2454--2460", "other_ids": { "DOI": [ "10.18653/v1/2020.findings-emnlp.222" ] }, "num": null, "urls": [], "raw_text": "Berfin Akta\u015f, Veronika Solopova, Annalena Kohnert, and Manfred Stede. 2020. Adapting coreference res- olution to Twitter conversations. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 2454-2460, Online. 
Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Publicly available clinical BERT embeddings", "authors": [ { "first": "Emily", "middle": [], "last": "Alsentzer", "suffix": "" }, { "first": "John", "middle": [], "last": "Murphy", "suffix": "" }, { "first": "William", "middle": [], "last": "Boag", "suffix": "" }, { "first": "Wei-Hung", "middle": [], "last": "Weng", "suffix": "" }, { "first": "Di", "middle": [], "last": "Jindi", "suffix": "" }, { "first": "Tristan", "middle": [], "last": "Naumann", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Mcdermott", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2nd Clinical Natural Language Processing Workshop", "volume": "", "issue": "", "pages": "72--78", "other_ids": { "DOI": [ "10.18653/v1/W19-1909" ] }, "num": null, "urls": [], "raw_text": "Emily Alsentzer, John Murphy, William Boag, Wei- Hung Weng, Di Jindi, Tristan Naumann, and Matthew McDermott. 2019. Publicly available clin- ical BERT embeddings. In Proceedings of the 2nd Clinical Natural Language Processing Workshop, pages 72-78, Minneapolis, Minnesota, USA. Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Domain adaptation via pseudo in-domain data selection", "authors": [ { "first": "Amittai", "middle": [], "last": "Axelrod", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "He", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "355--362", "other_ids": {}, "num": null, "urls": [], "raw_text": "Amittai Axelrod, Xiaodong He, and Jianfeng Gao. 2011. Domain adaptation via pseudo in-domain data selection. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 355-362, Edinburgh, Scotland, UK. Associa- tion for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Shared tasks of the 2015 workshop on noisy user-generated text: Twitter lexical normalization and named entity recognition", "authors": [ { "first": "Timothy", "middle": [], "last": "Baldwin", "suffix": "" }, { "first": "Marie", "middle": [], "last": "Catherine De Marneffe", "suffix": "" }, { "first": "Bo", "middle": [], "last": "Han", "suffix": "" }, { "first": "Young-Bum", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Alan", "middle": [], "last": "Ritter", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the Workshop on Noisy User-generated Text", "volume": "", "issue": "", "pages": "126--135", "other_ids": { "DOI": [ "10.18653/v1/W15-4319" ] }, "num": null, "urls": [], "raw_text": "Timothy Baldwin, Marie Catherine de Marneffe, Bo Han, Young-Bum Kim, Alan Ritter, and Wei Xu. 2015. Shared tasks of the 2015 workshop on noisy user-generated text: Twitter lexical normaliza- tion and named entity recognition. In Proceedings of the Workshop on Noisy User-generated Text, pages 126-135, Beijing, China. 
Association for Computa- tional Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Tweet-Eval: Unified benchmark and comparative evaluation for tweet classification", "authors": [ { "first": "Francesco", "middle": [], "last": "Barbieri", "suffix": "" }, { "first": "Jose", "middle": [], "last": "Camacho-Collados", "suffix": "" }, { "first": "Luis", "middle": [], "last": "Espinosa Anke", "suffix": "" }, { "first": "Leonardo", "middle": [], "last": "Neves", "suffix": "" } ], "year": 2020, "venue": "Findings of the Association for Computational Linguistics: EMNLP 2020", "volume": "", "issue": "", "pages": "1644--1650", "other_ids": { "DOI": [ "10.18653/v1/2020.findings-emnlp.148" ] }, "num": null, "urls": [], "raw_text": "Francesco Barbieri, Jose Camacho-Collados, Luis Es- pinosa Anke, and Leonardo Neves. 2020. Tweet- Eval: Unified benchmark and comparative evalu- ation for tweet classification. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1644-1650, Online. Association for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "SciB-ERT: A pretrained language model for scientific text", "authors": [ { "first": "Iz", "middle": [], "last": "Beltagy", "suffix": "" }, { "first": "Kyle", "middle": [], "last": "Lo", "suffix": "" }, { "first": "Arman", "middle": [], "last": "Cohan", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "3615--3620", "other_ids": { "DOI": [ "10.18653/v1/D19-1371" ] }, "num": null, "urls": [], "raw_text": "Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciB- ERT: A pretrained language model for scientific text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3615-3620, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A theory of learning from different domains", "authors": [ { "first": "Shai", "middle": [], "last": "Ben-David", "suffix": "" }, { "first": "John", "middle": [], "last": "Blitzer", "suffix": "" }, { "first": "Koby", "middle": [], "last": "Crammer", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Kulesza", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" }, { "first": "Jennifer", "middle": [ "Wortman" ], "last": "Vaughan", "suffix": "" } ], "year": 2010, "venue": "Mach. Learn", "volume": "79", "issue": "1-2", "pages": "151--175", "other_ids": { "DOI": [ "10.1007/s10994-009-5152-4" ] }, "num": null, "urls": [], "raw_text": "Shai Ben-David, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Wortman Vaughan. 2010. A theory of learning from different domains. Mach. 
Learn., 79(1-2):151-175.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Domain separation networks", "authors": [ { "first": "Konstantinos", "middle": [], "last": "Bousmalis", "suffix": "" }, { "first": "George", "middle": [], "last": "Trigeorgis", "suffix": "" }, { "first": "Nathan", "middle": [], "last": "Silberman", "suffix": "" }, { "first": "Dilip", "middle": [], "last": "Krishnan", "suffix": "" }, { "first": "Dumitru", "middle": [], "last": "Erhan", "suffix": "" } ], "year": 2016, "venue": "Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems", "volume": "", "issue": "", "pages": "343--351", "other_ids": {}, "num": null, "urls": [], "raw_text": "Konstantinos Bousmalis, George Trigeorgis, Nathan Silberman, Dilip Krishnan, and Dumitru Er- han. 2016. Domain separation networks. In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pages 343-351.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "A large annotated corpus for learning natural language inference", "authors": [ { "first": "R", "middle": [], "last": "Samuel", "suffix": "" }, { "first": "Gabor", "middle": [], "last": "Bowman", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Angeli", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Potts", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "632--642", "other_ids": { "DOI": [ "10.18653/v1/D15-1075" ] }, "num": null, "urls": [], "raw_text": "Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large an- notated corpus for learning natural language infer- ence. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632-642, Lisbon, Portugal. Association for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Language models are few-shot learners", "authors": [ { "first": "Benjamin", "middle": [], "last": "Tom B Brown", "suffix": "" }, { "first": "Nick", "middle": [], "last": "Mann", "suffix": "" }, { "first": "Melanie", "middle": [], "last": "Ryder", "suffix": "" }, { "first": "Jared", "middle": [], "last": "Subbiah", "suffix": "" }, { "first": "Prafulla", "middle": [], "last": "Kaplan", "suffix": "" }, { "first": "Arvind", "middle": [], "last": "Dhariwal", "suffix": "" }, { "first": "Pranav", "middle": [], "last": "Neelakantan", "suffix": "" }, { "first": "Girish", "middle": [], "last": "Shyam", "suffix": "" }, { "first": "Amanda", "middle": [], "last": "Sastry", "suffix": "" }, { "first": "", "middle": [], "last": "Askell", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2005.14165" ] }, "num": null, "urls": [], "raw_text": "Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. 
arXiv preprint arXiv:2005.14165.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "German", "middle": [], "last": "Kruszewski", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Lo\u00efc", "middle": [], "last": "Barrault", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "2126--2136", "other_ids": { "DOI": [ "10.18653/v1/P18-1198" ] }, "num": null, "urls": [], "raw_text": "Alexis Conneau, German Kruszewski, Guillaume Lample, Lo\u00efc Barrault, and Marco Baroni. 2018. What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic proper- ties. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2126-2136, Mel- bourne, Australia. Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Twitter part-of-speech tagging for all: Overcoming sparse and noisy data", "authors": [ { "first": "Leon", "middle": [], "last": "Derczynski", "suffix": "" }, { "first": "Alan", "middle": [], "last": "Ritter", "suffix": "" }, { "first": "S", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Kalina", "middle": [], "last": "Bontcheva", "suffix": "" } ], "year": 2013, "venue": "RANLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Leon Derczynski, Alan Ritter, S. Clark, and Kalina Bontcheva. 2013. Twitter part-of-speech tagging for all: Overcoming sparse and noisy data. In RANLP.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. 
Associ- ation for Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Amnesic probing: Behavioral explanation with amnesic counterfactuals", "authors": [ { "first": "Yanai", "middle": [], "last": "Elazar", "suffix": "" }, { "first": "Shauli", "middle": [], "last": "Ravfogel", "suffix": "" }, { "first": "Alon", "middle": [], "last": "Jacovi", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yanai Elazar, Shauli Ravfogel, Alon Jacovi, and Yoav Goldberg. 2020. Amnesic probing: Behavioral ex- planation with amnesic counterfactuals.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Domain-adversarial training of neural networks", "authors": [ { "first": "Yaroslav", "middle": [], "last": "Ganin", "suffix": "" }, { "first": "Evgeniya", "middle": [], "last": "Ustinova", "suffix": "" }, { "first": "Hana", "middle": [], "last": "Ajakan", "suffix": "" }, { "first": "Pascal", "middle": [], "last": "Germain", "suffix": "" }, { "first": "Hugo", "middle": [], "last": "Larochelle", "suffix": "" }, { "first": "Fran\u00e7ois", "middle": [], "last": "Laviolette", "suffix": "" }, { "first": "Mario", "middle": [], "last": "Marchand", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Lempitsky", "suffix": "" } ], "year": 2016, "venue": "J. Mach. Learn. Res", "volume": "17", "issue": "1", "pages": "2096--2030", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, Fran\u00e7ois Lavio- lette, Mario Marchand, and Victor Lempitsky. 2016. Domain-adversarial training of neural networks. J. Mach. Learn. Res., 17(1):2096-2030.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Adversarially robust distillation", "authors": [ { "first": "Micah", "middle": [], "last": "Goldblum", "suffix": "" }, { "first": "Liam", "middle": [], "last": "Fowl", "suffix": "" }, { "first": "Soheil", "middle": [], "last": "Feizi", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Goldstein", "suffix": "" } ], "year": 2020, "venue": "The Thirty-Second Innovative Applications of Artificial Intelligence Conference", "volume": "2020", "issue": "", "pages": "3996--4003", "other_ids": {}, "num": null, "urls": [], "raw_text": "Micah Goldblum, Liam Fowl, Soheil Feizi, and Tom Goldstein. 2020. Adversarially robust distilla- tion. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 3996-4003. AAAI Press.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "A kernel two-sample test", "authors": [ { "first": "Arthur", "middle": [], "last": "Gretton", "suffix": "" }, { "first": "Karsten", "middle": [ "M" ], "last": "Borgwardt", "suffix": "" }, { "first": "Malte", "middle": [ "J" ], "last": "Rasch", "suffix": "" }, { "first": "Bernhard", "middle": [], "last": "Sch\u00f6lkopf", "suffix": "" }, { "first": "Alexander", "middle": [ "J" ], "last": "Smola", "suffix": "" } ], "year": 2012, "venue": "J. Mach. Learn. Res", "volume": "13", "issue": "", "pages": "723--773", "other_ids": {}, "num": null, "urls": [], "raw_text": "Arthur Gretton, Karsten M. Borgwardt, Malte J. 
Rasch, Bernhard Sch\u00f6lkopf, and Alexander J. Smola. 2012a. A kernel two-sample test. J. Mach. Learn. Res., 13:723-773.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Optimal kernel choice for large-scale two-sample tests", "authors": [ { "first": "Arthur", "middle": [], "last": "Gretton", "suffix": "" }, { "first": "K", "middle": [], "last": "Bharath", "suffix": "" }, { "first": "Dino", "middle": [], "last": "Sriperumbudur", "suffix": "" }, { "first": "Heiko", "middle": [], "last": "Sejdinovic", "suffix": "" }, { "first": "Sivaraman", "middle": [], "last": "Strathmann", "suffix": "" }, { "first": "Massimiliano", "middle": [], "last": "Balakrishnan", "suffix": "" }, { "first": "Kenji", "middle": [], "last": "Pontil", "suffix": "" }, { "first": "", "middle": [], "last": "Fukumizu", "suffix": "" } ], "year": 2012, "venue": "Advances in Neural Information Processing Systems 25: 26th Annual Conference on Neural Information Processing Systems 2012. Proceedings of a meeting held", "volume": "", "issue": "", "pages": "1214--1222", "other_ids": {}, "num": null, "urls": [], "raw_text": "Arthur Gretton, Bharath K. Sriperumbudur, Dino Sejdi- novic, Heiko Strathmann, Sivaraman Balakrishnan, Massimiliano Pontil, and Kenji Fukumizu. 2012b. Optimal kernel choice for large-scale two-sample tests. In Advances in Neural Information Processing Systems 25: 26th Annual Conference on Neural Information Processing Systems 2012. Proceedings of a meeting held December 3-6, 2012, Lake Tahoe, Nevada, United States, pages 1214-1222.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Don't stop pretraining: Adapt language models to domains and tasks", "authors": [ { "first": "Ana", "middle": [], "last": "Suchin Gururangan", "suffix": "" }, { "first": "Swabha", "middle": [], "last": "Marasovi\u0107", "suffix": "" }, { "first": "Kyle", "middle": [], "last": "Swayamdipta", "suffix": "" }, { "first": "Iz", "middle": [], "last": "Lo", "suffix": "" }, { "first": "Doug", "middle": [], "last": "Beltagy", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Downey", "suffix": "" }, { "first": "", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "8342--8360", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.740" ] }, "num": null, "urls": [], "raw_text": "Suchin Gururangan, Ana Marasovi\u0107, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. Don't stop pretraining: Adapt language models to domains and tasks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8342-8360, Online. 
Association for Computational Linguistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Visualizing and understanding the effectiveness of BERT", "authors": [ { "first": "Yaru", "middle": [], "last": "Hao", "suffix": "" }, { "first": "Li", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Furu", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Ke", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "4143--4152", "other_ids": { "DOI": [ "10.18653/v1/D19-1424" ] }, "num": null, "urls": [], "raw_text": "Yaru Hao, Li Dong, Furu Wei, and Ke Xu. 2019. Visualizing and understanding the effectiveness of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4143-4152, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Misa: Modality-invariant and -specific representations for multimodal sentiment analysis", "authors": [ { "first": "Devamanyu", "middle": [], "last": "Hazarika", "suffix": "" }, { "first": "Roger", "middle": [], "last": "Zimmermann", "suffix": "" }, { "first": "Soujanya", "middle": [], "last": "Poria", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 28th ACM International Conference on Multimedia, MM '20", "volume": "", "issue": "", "pages": "1122--1131", "other_ids": { "DOI": [ "10.1145/3394171.3413678" ] }, "num": null, "urls": [], "raw_text": "Devamanyu Hazarika, Roger Zimmermann, and Sou- janya Poria. 2020. Misa: Modality-invariant and -specific representations for multimodal senti- ment analysis. In Proceedings of the 28th ACM International Conference on Multimedia, MM '20, page 1122-1131, New York, NY, USA. Association for Computing Machinery.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Pretrained transformers improve out-of-distribution robustness", "authors": [ { "first": "Dan", "middle": [], "last": "Hendrycks", "suffix": "" }, { "first": "Xiaoyuan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Wallace", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Dziedzic", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "2744--2751", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.244" ] }, "num": null, "urls": [], "raw_text": "Dan Hendrycks, Xiaoyuan Liu, Eric Wallace, Adam Dziedzic, Rishabh Krishnan, and Dawn Song. 2020. Pretrained transformers improve out-of-distribution robustness. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2744-2751, Online. 
Association for Computational Linguistics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Designing and interpreting probes with control tasks", "authors": [ { "first": "John", "middle": [], "last": "Hewitt", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "2733--2743", "other_ids": { "DOI": [ "10.18653/v1/D19-1275" ] }, "num": null, "urls": [], "raw_text": "John Hewitt and Percy Liang. 2019. Designing and in- terpreting probes with control tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2733-2743, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "A structural probe for finding syntax in word representations", "authors": [ { "first": "John", "middle": [], "last": "Hewitt", "suffix": "" }, { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4129--4138", "other_ids": { "DOI": [ "10.18653/v1/N19-1419" ] }, "num": null, "urls": [], "raw_text": "John Hewitt and Christopher D. Manning. 2019. A structural probe for finding syntax in word repre- sentations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4129-4138, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Distilling the knowledge in a neural network", "authors": [ { "first": "Geoffrey", "middle": [ "E" ], "last": "Hinton", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Geoffrey E. Hinton, Oriol Vinyals, and Jeffrey Dean. 2015. Distilling the knowledge in a neural network. CoRR, abs/1503.02531.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Universal language model fine-tuning for text classification", "authors": [ { "first": "Jeremy", "middle": [], "last": "Howard", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "328--339", "other_ids": { "DOI": [ "10.18653/v1/P18-1031" ] }, "num": null, "urls": [], "raw_text": "Jeremy Howard and Sebastian Ruder. 2018. Univer- sal language model fine-tuning for text classifica- tion. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 328-339. 
Associa- tion for Computational Linguistics.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "What does BERT learn about the structure of language", "authors": [ { "first": "Ganesh", "middle": [], "last": "Jawahar", "suffix": "" }, { "first": "Beno\u00eet", "middle": [], "last": "Sagot", "suffix": "" }, { "first": "Djam\u00e9", "middle": [], "last": "Seddah", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "3651--3657", "other_ids": { "DOI": [ "10.18653/v1/P19-1356" ] }, "num": null, "urls": [], "raw_text": "Ganesh Jawahar, Beno\u00eet Sagot, and Djam\u00e9 Seddah. 2019. What does BERT learn about the structure of language? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3651-3657, Florence, Italy. As- sociation for Computational Linguistics.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "TinyBERT: Distilling BERT for natural language understanding", "authors": [ { "first": "Xiaoqi", "middle": [], "last": "Jiao", "suffix": "" }, { "first": "Yichun", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Lifeng", "middle": [], "last": "Shang", "suffix": "" }, { "first": "Xin", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Xiao", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Linlin", "middle": [], "last": "Li", "suffix": "" }, { "first": "Fang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Qun", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2020, "venue": "Findings of the Association for Computational Linguistics: EMNLP 2020", "volume": "", "issue": "", "pages": "4163--4174", "other_ids": { "DOI": [ "10.18653/v1/2020.findings-emnlp.372" ] }, "num": null, "urls": [], "raw_text": "Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. 2020. TinyBERT: Distilling BERT for natural lan- guage understanding. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 4163-4174, Online. Association for Computational Linguistics.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Bag of tricks for efficient text classification", "authors": [ { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 15th Conference of the European Chapter", "volume": "2", "issue": "", "pages": "427--431", "other_ids": {}, "num": null, "urls": [], "raw_text": "Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2017. Bag of tricks for ef- ficient text classification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 427-431, Valencia, Spain. 
Association for Computational Linguistics.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Domain divergences: a survey and empirical analysis", "authors": [ { "first": "Devamanyu", "middle": [], "last": "Abhinav Ramesh Kashyap", "suffix": "" }, { "first": "Min-Yen", "middle": [], "last": "Hazarika", "suffix": "" }, { "first": "Roger", "middle": [], "last": "Kan", "suffix": "" }, { "first": "", "middle": [], "last": "Zimmermann", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abhinav Ramesh Kashyap, Devamanyu Hazarika, Min-Yen Kan, and Roger Zimmermann. 2020. Do- main divergences: a survey and empirical analysis.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "BioBERT: a pretrained biomedical language representation model for biomedical text mining", "authors": [ { "first": "Jinhyuk", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Wonjin", "middle": [], "last": "Yoon", "suffix": "" }, { "first": "Sungdong", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Donghyeon", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Sunkyu", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Chan", "middle": [], "last": "Ho So", "suffix": "" }, { "first": "Jaewoo", "middle": [], "last": "Kang", "suffix": "" } ], "year": 2019, "venue": "Bioinformatics", "volume": "36", "issue": "4", "pages": "1234--1240", "other_ids": { "DOI": [ "10.1093/bioinformatics/btz682" ] }, "num": null, "urls": [], "raw_text": "Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2019. BioBERT: a pre- trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4):1234-1240.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "BERT-EMD: Many-to-many layer mapping for BERT compression with earth mover's distance", "authors": [ { "first": "Jianquan", "middle": [], "last": "Li", "suffix": "" }, { "first": "Xiaokang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Honghong", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Ruifeng", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Min", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Yaohong", "middle": [], "last": "Jin", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "3009--3018", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.242" ] }, "num": null, "urls": [], "raw_text": "Jianquan Li, Xiaokang Liu, Honghong Zhao, Ruifeng Xu, Min Yang, and Yaohong Jin. 2020. BERT- EMD: Many-to-many layer mapping for BERT compression with earth mover's distance. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3009-3018, Online. 
Association for Compu- tational Linguistics.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Linguistic knowledge and transferability of contextual representations", "authors": [ { "first": "Nelson", "middle": [ "F" ], "last": "Liu", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Yonatan", "middle": [], "last": "Belinkov", "suffix": "" }, { "first": "Matthew", "middle": [ "E" ], "last": "Peters", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1073--1094", "other_ids": { "DOI": [ "10.18653/v1/N19-1112" ] }, "num": null, "urls": [], "raw_text": "Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew E. Peters, and Noah A. Smith. 2019a. Linguistic knowledge and transferability of contex- tual representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1073-1094, Minneapolis, Minnesota. Association for Computational Linguistics.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Learning transferable features with deep adaptation networks", "authors": [ { "first": "Mingsheng", "middle": [], "last": "Long", "suffix": "" }, { "first": "Yue", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Jianmin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Michael", "middle": [ "I" ], "last": "Jordan", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 32nd International Conference on Machine Learning", "volume": "37", "issue": "", "pages": "97--105", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mingsheng Long, Yue Cao, Jianmin Wang, and Michael I. Jordan. 2015. Learning transfer- able features with deep adaptation networks. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, volume 37 of JMLR Workshop and Conference Proceedings, pages 97-105. JMLR.org.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Domain adaptation with BERT-based domain classification and data selection", "authors": [ { "first": "Xiaofei", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Zhiguo", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP", "volume": "", "issue": "", "pages": "76--83", "other_ids": { "DOI": [ "10.18653/v1/D19-6109" ] }, "num": null, "urls": [], "raw_text": "Xiaofei Ma, Peng Xu, Zhiguo Wang, Ramesh Nalla- pati, and Bing Xiang. 2019. Domain adaptation with BERT-based domain classification and data se- lection. In Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP (DeepLo 2019), pages 76-83, Hong Kong, China. 
Association for Computational Linguistics.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Learning word vectors for sentiment analysis", "authors": [ { "first": "Andrew", "middle": [ "L" ], "last": "Maas", "suffix": "" }, { "first": "Raymond", "middle": [ "E" ], "last": "Daly", "suffix": "" }, { "first": "Peter", "middle": [ "T" ], "last": "Pham", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Andrew", "middle": [ "Y" ], "last": "Ng", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Potts", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "142--150", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142-150, Portland, Oregon, USA. Association for Computational Lin- guistics.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "What happens to BERT embeddings during fine-tuning?", "authors": [ { "first": "Amil", "middle": [], "last": "Merchant", "suffix": "" }, { "first": "Elahe", "middle": [], "last": "Rahimtoroghi", "suffix": "" }, { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" }, { "first": "Ian", "middle": [], "last": "Tenney", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP", "volume": "", "issue": "", "pages": "33--44", "other_ids": { "DOI": [ "10.18653/v1/2020.blackboxnlp-1.4" ] }, "num": null, "urls": [], "raw_text": "Amil Merchant, Elahe Rahimtoroghi, Ellie Pavlick, and Ian Tenney. 2020. What happens to BERT em- beddings during fine-tuning? In Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 33-44, Online. Association for Computational Linguistics.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "The effect of natural distribution shift on question answering models", "authors": [ { "first": "John", "middle": [], "last": "Miller", "suffix": "" }, { "first": "Karl", "middle": [], "last": "Krauth", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Recht", "suffix": "" }, { "first": "Ludwig", "middle": [], "last": "Schmidt", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 37th International Conference on Machine Learning", "volume": "119", "issue": "", "pages": "6905--6916", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Miller, Karl Krauth, Benjamin Recht, and Ludwig Schmidt. 2020. The effect of natural distribution shift on question answering models. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 6905-6916. 
PMLR.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Intelligent selection of language model training data", "authors": [ { "first": "C", "middle": [], "last": "Robert", "suffix": "" }, { "first": "William", "middle": [], "last": "Moore", "suffix": "" }, { "first": "", "middle": [], "last": "Lewis", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the ACL 2010 Conference Short Papers", "volume": "", "issue": "", "pages": "220--224", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert C. Moore and William Lewis. 2010. Intel- ligent selection of language model training data. In Proceedings of the ACL 2010 Conference Short Papers, pages 220-224, Uppsala, Sweden. Associa- tion for Computational Linguistics.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "BERTweet: A pre-trained language model for English Tweets", "authors": [ { "first": "Thanh", "middle": [], "last": "Dat Quoc Nguyen", "suffix": "" }, { "first": "Anh", "middle": [ "Tuan" ], "last": "Vu", "suffix": "" }, { "first": "", "middle": [], "last": "Nguyen", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", "volume": "", "issue": "", "pages": "9--14", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dat Quoc Nguyen, Thanh Vu, and Anh Tuan Nguyen. 2020. BERTweet: A pre-trained language model for English Tweets. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 9-14.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Cross-domain sentiment classification with target domain specific information", "authors": [ { "first": "Minlong", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Qi", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yu-Gang", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Xuanjing", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "2505--2513", "other_ids": { "DOI": [ "10.18653/v1/P18-1233" ] }, "num": null, "urls": [], "raw_text": "Minlong Peng, Qi Zhang, Yu-gang Jiang, and Xuan- jing Huang. 2018. Cross-domain sentiment clas- sification with target domain specific information. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2505-2513, Melbourne, Australia. Association for Computational Linguis- tics.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "To tune or not to tune? adapting pretrained representations to diverse tasks", "authors": [ { "first": "Matthew", "middle": [ "E" ], "last": "Peters", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019)", "volume": "", "issue": "", "pages": "7--14", "other_ids": { "DOI": [ "10.18653/v1/W19-4302" ] }, "num": null, "urls": [], "raw_text": "Matthew E. Peters, Sebastian Ruder, and Noah A. Smith. 2019. To tune or not to tune? adapt- ing pretrained representations to diverse tasks. In Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019), pages 7-14, Florence, Italy. 
Association for Computational Lin- guistics.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Adina Williams, and Ryan Cotterell. 2020. Information-theoretic probing for linguistic structure", "authors": [ { "first": "Tiago", "middle": [], "last": "Pimentel", "suffix": "" }, { "first": "Josef", "middle": [], "last": "Valvoda", "suffix": "" }, { "first": "Rowan", "middle": [], "last": "Hall Maudslay", "suffix": "" }, { "first": "Ran", "middle": [], "last": "Zmigrod", "suffix": "" } ], "year": null, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4609--4622", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.420" ] }, "num": null, "urls": [], "raw_text": "Tiago Pimentel, Josef Valvoda, Rowan Hall Maud- slay, Ran Zmigrod, Adina Williams, and Ryan Cot- terell. 2020. Information-theoretic probing for lin- guistic structure. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4609-4622, Online. Association for Computational Linguistics.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "What to do about nonstandard (or non-canonical) language in NLP", "authors": [ { "first": "Barbara", "middle": [], "last": "Plank", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 13th Conference on Natural Language Processing", "volume": "16", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Barbara Plank. 2016. What to do about non- standard (or non-canonical) language in NLP. In Proceedings of the 13th Conference on Natural Language Processing, KONVENS 2016, Bochum, Germany, September 19-21, 2016, volume 16 of Bochumer Linguistische Arbeitsberichte.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Dataset Shift in Machine Learning", "authors": [ { "first": "Joaquin", "middle": [], "last": "Quionero-Candela", "suffix": "" }, { "first": "Masashi", "middle": [], "last": "Sugiyama", "suffix": "" }, { "first": "Anton", "middle": [], "last": "Schwaighofer", "suffix": "" }, { "first": "Neil", "middle": [ "D" ], "last": "Lawrence", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joaquin Quionero-Candela, Masashi Sugiyama, Anton Schwaighofer, and Neil D. Lawrence. 2009. Dataset Shift in Machine Learning. The MIT Press.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Neural unsupervised domain adaptation in NLP-A survey", "authors": [ { "first": "Alan", "middle": [], "last": "Ramponi", "suffix": "" }, { "first": "Barbara", "middle": [], "last": "Plank", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 28th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "6838--6855", "other_ids": { "DOI": [ "10.18653/v1/2020.coling-main.603" ] }, "num": null, "urls": [], "raw_text": "Alan Ramponi and Barbara Plank. 2020. Neural un- supervised domain adaptation in NLP-A survey. In Proceedings of the 28th International Conference on Computational Linguistics, pages 6838-6855, Barcelona, Spain (Online). 
International Committee on Computational Linguistics.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "Probing the probing paradigm: Does probing accuracy entail task relevance?", "authors": [ { "first": "Abhilasha", "middle": [], "last": "Ravichander", "suffix": "" }, { "first": "Yonatan", "middle": [], "last": "Belinkov", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abhilasha Ravichander, Yonatan Belinkov, and Eduard Hovy. 2020a. Probing the probing paradigm: Does probing accuracy entail task relevance?", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "On the systematicity of probing contextualized word representations: The case of hypernymy in BERT", "authors": [ { "first": "Abhilasha", "middle": [], "last": "Ravichander", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" }, { "first": "Kaheer", "middle": [], "last": "Suleman", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Trischler", "suffix": "" }, { "first": "Jackie Chi Kit", "middle": [], "last": "Cheung", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Ninth Joint Conference on Lexical and Computational Semantics", "volume": "", "issue": "", "pages": "88--102", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abhilasha Ravichander, Eduard Hovy, Kaheer Sule- man, Adam Trischler, and Jackie Chi Kit Che- ung. 2020b. On the systematicity of probing con- textualized word representations: The case of hy- pernymy in BERT. In Proceedings of the Ninth Joint Conference on Lexical and Computational Semantics, pages 88-102, Barcelona, Spain (On- line). Association for Computational Linguistics.", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "Do ImageNet classifiers generalize to ImageNet?", "authors": [ { "first": "Benjamin", "middle": [], "last": "Recht", "suffix": "" }, { "first": "Rebecca", "middle": [], "last": "Roelofs", "suffix": "" }, { "first": "Ludwig", "middle": [], "last": "Schmidt", "suffix": "" }, { "first": "Vaishaal", "middle": [], "last": "Shankar", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 36th International Conference on Machine Learning", "volume": "97", "issue": "", "pages": "5389--5400", "other_ids": {}, "num": null, "urls": [], "raw_text": "Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, and Vaishaal Shankar. 2019. Do ImageNet clas- sifiers generalize to ImageNet? In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 5389-5400. PMLR.", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "A primer in bertology: What we know about how bert works", "authors": [ { "first": "Anna", "middle": [], "last": "Rogers", "suffix": "" }, { "first": "Olga", "middle": [], "last": "Kovaleva", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Rumshisky", "suffix": "" } ], "year": 2021, "venue": "Transactions of the Association for Computational Linguistics", "volume": "8", "issue": "0", "pages": "842--866", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anna Rogers, Olga Kovaleva, and Anna Rumshisky. 2021. A primer in bertology: What we know about how bert works. 
Transactions of the Association for Computational Linguistics, 8(0):842-866.", "links": null }, "BIBREF53": { "ref_id": "b53", "title": "Poor man's bert: Smaller and faster transformer models", "authors": [ { "first": "Hassan", "middle": [], "last": "Sajjad", "suffix": "" }, { "first": "Fahim", "middle": [], "last": "Dalvi", "suffix": "" }, { "first": "Nadir", "middle": [], "last": "Durrani", "suffix": "" }, { "first": "Preslav", "middle": [], "last": "Nakov", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hassan Sajjad, Fahim Dalvi, Nadir Durrani, and Preslav Nakov. 2020. Poor man's bert: Smaller and faster transformer models.", "links": null }, "BIBREF54": { "ref_id": "b54", "title": "Distilbert, a distilled version of BERT: smaller, faster, cheaper and lighter", "authors": [ { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of BERT: smaller, faster, cheaper and lighter. CoRR, abs/1910.01108.", "links": null }, "BIBREF55": { "ref_id": "b55", "title": "Wasserstein distance guided representation learning for domain adaptation", "authors": [ { "first": "Jian", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Yanru", "middle": [], "last": "Qu", "suffix": "" }, { "first": "Weinan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yong", "middle": [], "last": "Yu", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18)", "volume": "", "issue": "", "pages": "4058--4065", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jian Shen, Yanru Qu, Weinan Zhang, and Yong Yu. 2018a. Wasserstein distance guided representation learning for domain adaptation. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 4058-4065. AAAI Press.", "links": null }, "BIBREF56": { "ref_id": "b56", "title": "Wasserstein distance guided representation learning for domain adaptation", "authors": [ { "first": "Jian", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Yanru", "middle": [], "last": "Qu", "suffix": "" }, { "first": "Weinan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yong", "middle": [], "last": "Yu", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18)", "volume": "", "issue": "", "pages": "4058--4065", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jian Shen, Yanru Qu, Weinan Zhang, and Yong Yu. 2018b. 
Wasserstein distance guided representation learning for domain adaptation. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 4058-4065. AAAI Press.", "links": null }, "BIBREF57": { "ref_id": "b57", "title": "Deep CORAL: correlation alignment for deep domain adaptation", "authors": [ { "first": "Baochen", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Kate", "middle": [], "last": "Saenko", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Baochen Sun and Kate Saenko. 2016. Deep CORAL: correlation alignment for deep domain adaptation. CoRR, abs/1607.01719.", "links": null }, "BIBREF58": { "ref_id": "b58", "title": "Contrastive distillation on intermediate representations for language model compression", "authors": [ { "first": "Siqi", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Zhe", "middle": [], "last": "Gan", "suffix": "" }, { "first": "Yuwei", "middle": [], "last": "Fang", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Shuohang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Jingjing", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "498--508", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.36" ] }, "num": null, "urls": [], "raw_text": "Siqi Sun, Zhe Gan, Yuwei Fang, Yu Cheng, Shuo- hang Wang, and Jingjing Liu. 2020. Contrastive distillation on intermediate representations for lan- guage model compression. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 498-508, Online. Association for Computational Linguistics.", "links": null }, "BIBREF59": { "ref_id": "b59", "title": "Investigating transferability in pretrained language models", "authors": [ { "first": "Alex", "middle": [], "last": "Tamkin", "suffix": "" }, { "first": "Trisha", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Davide", "middle": [], "last": "Giovanardi", "suffix": "" }, { "first": "Noah", "middle": [], "last": "Goodman", "suffix": "" } ], "year": 2020, "venue": "Findings of the Association for Computational Linguistics: EMNLP 2020", "volume": "", "issue": "", "pages": "1393--1401", "other_ids": { "DOI": [ "10.18653/v1/2020.findings-emnlp.125" ] }, "num": null, "urls": [], "raw_text": "Alex Tamkin, Trisha Singh, Davide Giovanardi, and Noah Goodman. 2020. Investigating transferability in pretrained language models. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1393-1401, Online. 
Association for Computational Linguistics.", "links": null }, "BIBREF60": { "ref_id": "b60", "title": "Measuring robustness to natural distribution shifts in image classification", "authors": [ { "first": "Rohan", "middle": [], "last": "Taori", "suffix": "" }, { "first": "Achal", "middle": [], "last": "Dave", "suffix": "" }, { "first": "Vaishaal", "middle": [], "last": "Shankar", "suffix": "" }, { "first": "Nicholas", "middle": [], "last": "Carlini", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Recht", "suffix": "" }, { "first": "Ludwig", "middle": [], "last": "Schmidt", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rohan Taori, Achal Dave, Vaishaal Shankar, Nicholas Carlini, Benjamin Recht, and Ludwig Schmidt. 2020. Measuring robustness to natural distribution shifts in image classification.", "links": null }, "BIBREF61": { "ref_id": "b61", "title": "BERT rediscovers the classical NLP pipeline", "authors": [ { "first": "Ian", "middle": [], "last": "Tenney", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" }, { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4593--4601", "other_ids": { "DOI": [ "10.18653/v1/P19-1452" ] }, "num": null, "urls": [], "raw_text": "Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019a. BERT rediscovers the classical NLP pipeline. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4593-4601, Florence, Italy. Association for Compu- tational Linguistics.", "links": null }, "BIBREF62": { "ref_id": "b62", "title": "What do you learn from context? probing for sentence structure in contextualized word representations", "authors": [ { "first": "Ian", "middle": [], "last": "Tenney", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Xia", "suffix": "" }, { "first": "Berlin", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Poliak", "suffix": "" }, { "first": "R", "middle": [ "Thomas" ], "last": "Mccoy", "suffix": "" }, { "first": "Najoung", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" }, { "first": "Samuel", "middle": [ "R" ], "last": "Bowman", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" }, { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" } ], "year": 2019, "venue": "7th International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R. Thomas McCoy, Najoung Kim, Benjamin Van Durme, Samuel R. Bowman, Di- panjan Das, and Ellie Pavlick. 2019b. What do you learn from context? probing for sen- tence structure in contextualized word representa- tions. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. 
OpenReview.net.", "links": null }, "BIBREF63": { "ref_id": "b63", "title": "Contrastive representation distillation", "authors": [ { "first": "Yonglong", "middle": [], "last": "Tian", "suffix": "" }, { "first": "Dilip", "middle": [], "last": "Krishnan", "suffix": "" }, { "first": "Phillip", "middle": [], "last": "Isola", "suffix": "" } ], "year": 2020, "venue": "8th International Conference on Learning Representations", "volume": "2020", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yonglong Tian, Dilip Krishnan, and Phillip Isola. 2020. Contrastive representation distillation. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.", "links": null }, "BIBREF64": { "ref_id": "b64", "title": "Spandana Gella, and He He. 2020. An empirical study on robustness to spurious correlations using pre-trained language models", "authors": [ { "first": "Lifu", "middle": [], "last": "Tu", "suffix": "" }, { "first": "Garima", "middle": [], "last": "Lalwani", "suffix": "" } ], "year": null, "venue": "Transactions of the Association for Computational Linguistics", "volume": "8", "issue": "", "pages": "621--633", "other_ids": { "DOI": [ "10.1162/tacl_a_00335" ] }, "num": null, "urls": [], "raw_text": "Lifu Tu, Garima Lalwani, Spandana Gella, and He He. 2020. An empirical study on robustness to spuri- ous correlations using pre-trained language models. Transactions of the Association for Computational Linguistics, 8:621-633.", "links": null }, "BIBREF65": { "ref_id": "b65", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "Lukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems", "volume": "", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998-6008.", "links": null }, "BIBREF66": { "ref_id": "b66", "title": "Informationtheoretic probing with minimum description length", "authors": [ { "first": "Elena", "middle": [], "last": "Voita", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Titov", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "183--196", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.14" ] }, "num": null, "urls": [], "raw_text": "Elena Voita and Ivan Titov. 2020. Information- theoretic probing with minimum description length. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 183-196, Online. 
Association for Computational Linguistics.", "links": null }, "BIBREF67": { "ref_id": "b67", "title": "jiant 1.3: A software toolkit for research on general-purpose text understanding models", "authors": [ { "first": "Alex", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Ian", "middle": [ "F" ], "last": "Tenney", "suffix": "" }, { "first": "Yada", "middle": [], "last": "Pruksachatkun", "suffix": "" }, { "first": "Phil", "middle": [], "last": "Yeres", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Phang", "suffix": "" }, { "first": "Haokun", "middle": [], "last": "Liu", "suffix": "" }, { "first": ",", "middle": [], "last": "Phu Mon Htut", "suffix": "" }, { "first": "Katherin", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Hula", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Xia", "suffix": "" }, { "first": "Raghu", "middle": [], "last": "Pappagari", "suffix": "" }, { "first": "R", "middle": [ "Thomas" ], "last": "Shuning Jin", "suffix": "" }, { "first": "Roma", "middle": [], "last": "Mccoy", "suffix": "" }, { "first": "Yinghui", "middle": [], "last": "Patel", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Najoung", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Thibault", "middle": [], "last": "Kim", "suffix": "" }, { "first": "", "middle": [], "last": "F\u00e9vry", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alex Wang, Ian F. Tenney, Yada Pruksachatkun, Phil Yeres, Jason Phang, Haokun Liu, Phu Mon Htut, Katherin Yu, Jan Hula, Patrick Xia, Raghu Pappagari, Shuning Jin, R. Thomas McCoy, Roma Patel, Yinghui Huang, Edouard Grave, Najoung Kim, Thibault F\u00e9vry, Berlin Chen, Nikita Nangia, Anhad Mohananey, Katharina Kann, Shikha Bordia, Nicolas Patry, David Benton, Ellie Pavlick, and Samuel R. Bowman. 2019. jiant 1.3: A software toolkit for research on general-purpose text understanding models. http://jiant.info/.", "links": null }, "BIBREF69": { "ref_id": "b69", "title": "A broad-coverage challenge corpus for sentence understanding through inference", "authors": [ { "first": "Adina", "middle": [], "last": "Williams", "suffix": "" }, { "first": "Nikita", "middle": [], "last": "Nangia", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Bowman", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1112--1122", "other_ids": { "DOI": [ "10.18653/v1/N18-1101" ] }, "num": null, "urls": [], "raw_text": "Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122, New Orleans, Louisiana. 
Association for Computational Linguistics.", "links": null }, "BIBREF70": { "ref_id": "b70", "title": "Transformers: State-of-the-art natural language processing", "authors": [ { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Clement", "middle": [], "last": "Delangue", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Moi", "suffix": "" }, { "first": "Pierric", "middle": [], "last": "Cistac", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rault", "suffix": "" }, { "first": "Remi", "middle": [], "last": "Louf", "suffix": "" }, { "first": "Morgan", "middle": [], "last": "Funtowicz", "suffix": "" }, { "first": "Joe", "middle": [], "last": "Davison", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Shleifer", "suffix": "" }, { "first": "Clara", "middle": [], "last": "Patrick Von Platen", "suffix": "" }, { "first": "Yacine", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Jernite", "suffix": "" }, { "first": "Canwen", "middle": [], "last": "Plu", "suffix": "" }, { "first": "Teven", "middle": [ "Le" ], "last": "Xu", "suffix": "" }, { "first": "Sylvain", "middle": [], "last": "Scao", "suffix": "" }, { "first": "Mariama", "middle": [], "last": "Gugger", "suffix": "" }, { "first": "", "middle": [], "last": "Drame", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", "volume": "", "issue": "", "pages": "38--45", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-demos.6" ] }, "num": null, "urls": [], "raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.", "links": null }, "BIBREF71": { "ref_id": "b71", "title": "How transferable are features in deep neural networks?", "authors": [ { "first": "Jason", "middle": [], "last": "Yosinski", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Clune", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Hod", "middle": [], "last": "Lipson", "suffix": "" } ], "year": 2014, "venue": "Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems", "volume": "", "issue": "", "pages": "3320--3328", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. 2014. How transferable are features in deep neural networks? 
In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pages 3320-3328.", "links": null }, "BIBREF72": { "ref_id": "b72", "title": "Wasserstein distance regularized sequence representation for text matching in asymmetrical domains", "authors": [ { "first": "Weijie", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Chen", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Liang", "middle": [], "last": "Pang", "suffix": "" }, { "first": "Xiaopeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Xiaozhao", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Ji-Rong", "middle": [], "last": "Wen", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "2985--2994", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.239" ] }, "num": null, "urls": [], "raw_text": "Weijie Yu, Chen Xu, Jun Xu, Liang Pang, Xiaopeng Gao, Xiaozhao Wang, and Ji-Rong Wen. 2020. Wasserstein distance regularized sequence representation for text matching in asymmetrical domains. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2985-2994, Online. Association for Computational Linguistics.", "links": null }, "BIBREF73": { "ref_id": "b73", "title": "Central moment discrepancy (CMD) for domain-invariant representation learning", "authors": [ { "first": "Werner", "middle": [], "last": "Zellinger", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Grubinger", "suffix": "" }, { "first": "Edwin", "middle": [], "last": "Lughofer", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Natschl\u00e4ger", "suffix": "" }, { "first": "Susanne", "middle": [], "last": "Saminger-Platz", "suffix": "" } ], "year": 2017, "venue": "5th International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Werner Zellinger, Thomas Grubinger, Edwin Lughofer, Thomas Natschl\u00e4ger, and Susanne Saminger-Platz. 2017. Central moment discrepancy (CMD) for domain-invariant representation learning. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net.", "links": null }, "BIBREF74": { "ref_id": "b74", "title": "ReCoRD: Bridging the gap between human and machine commonsense reading comprehension", "authors": [ { "first": "Sheng", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Jingjing", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Duh", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sheng Zhang, Xiaodong Liu, Jingjing Liu, Jianfeng Gao, Kevin Duh, and Benjamin Van Durme. 2018. ReCoRD: Bridging the gap between human and machine commonsense reading comprehension. 
CoRR, abs/1810.12885.", "links": null }, "BIBREF75": { "ref_id": "b75", "title": "Learning adversarially robust representations via worst-case mutual information maximization", "authors": [ { "first": "Sicheng", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Xiao", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "David", "middle": [], "last": "Evans", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 37th International Conference on Machine Learning", "volume": "2020", "issue": "", "pages": "11609--11618", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sicheng Zhu, Xiao Zhang, and David Evans. 2020. Learning adversarially robust representations via worst-case mutual information maximization. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pages 11609-11618. PMLR.", "links": null }, "BIBREF76": { "ref_id": "b76", "title": "Aligning books and movies: Towards story-like visual explanations by watching movies and reading books", "authors": [ { "first": "Yukun", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Kiros", "suffix": "" }, { "first": "Rich", "middle": [], "last": "Zemel", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "Raquel", "middle": [], "last": "Urtasun", "suffix": "" }, { "first": "Antonio", "middle": [], "last": "Torralba", "suffix": "" }, { "first": "Sanja", "middle": [], "last": "Fidler", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), ICCV '15", "volume": "", "issue": "", "pages": "19--27", "other_ids": { "DOI": [ "10.1109/ICCV.2015.11" ] }, "num": null, "urls": [], "raw_text": "Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), ICCV '15, page 19-27, USA. IEEE Computer Society.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Top: Comparing divergence for the standard vs. biomedical samples. Bottom: Comparing divergence for the standard vs. twitter samples. The plots consider three divergence measures: CORAL, CMD, and MK-MMD Gaussian, for three encoders: bert-base-uncased, bert-large-uncased, and roberta-base. The values are the mean and standard deviation of the divergence measures calculated over 5 splits of 1000 samples.", "type_str": "figure", "num": null, "uris": null }, "FIGREF1": { "text": "Comparing CORAL and CMD divergences for roberta-base and DAPT-biomed (Gururangan et al., 2020). Considers standard and biomedical samples.", "type_str": "figure", "num": null, "uris": null }, "FIGREF2": { "text": "CORAL and CMD divergence measures for standard vs. biomedical samples. Two encoders are considered here: bert-base-uncased and distilbert-base.", "type_str": "figure", "num": null, "uris": null }, "FIGREF3": { "text": "Micro F1 scores for probing POS, NER, and Coref information from bert-base-uncased for the standard and the non-standard Twitter domain.", "type_str": "figure", "num": null, "uris": null }, "FIGREF4": { "text": "Comparing divergence of bert-base-uncased and bert-large-uncased models. 
Top: Divergence for the standard and biomedical domains. Bottom: Divergence for the standard and twitter domains.", "type_str": "figure", "num": null, "uris": null }, "FIGREF6": { "text": "PCA plots of bert-base-uncased representations for the standard and biomedical domain pair at different layers. The representations are interspersed in the lower layers, with progressively clearer separation in the later layers.", "type_str": "figure", "num": null, "uris": null }, "FIGREF7": { "text": "PCA plots of bert-base-uncased representations for the standard and twitter domain pair at different layers. The representations are interspersed in the lower layers, with progressively clearer separation in the later layers.", "type_str": "figure", "num": null, "uris": null }, "FIGREF8": { "text": "PCA plots of bert-large-uncased representations for the standard and biomedical domain pair at different layers. The representations are interspersed in the lower layers, with progressively clearer separation in the later layers.", "type_str": "figure", "num": null, "uris": null }, "FIGREF9": { "text": "PCA plots of bert-large-uncased representations for the standard and twitter domain pair at different layers. The representations are interspersed in the lower layers, with progressively clearer separation in the later layers.", "type_str": "figure", "num": null, "uris": null }, "FIGREF10": { "text": "PCA plots of roberta-base representations for the standard and biomedical domain pair at different layers. Compared to bert-base-uncased and bert-large-uncased, the representations are interspersed across all the layers.", "type_str": "figure", "num": null, "uris": null }, "TABREF1": { "type_str": "table", "html": null, "text": "NMI values measuring the clustering performance at different layers of bert-base-uncased.", "content": "