{ "paper_id": "2022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:11:41.791234Z" }, "title": "Lifelong Pretraining: Continually Adapting Language Models to Emerging Corpora", "authors": [ { "first": "Xisen", "middle": [], "last": "Jin", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Southern", "location": { "country": "California" } }, "email": "xisenjin@usc.edu" }, { "first": "Dejiao", "middle": [], "last": "Zhang", "suffix": "", "affiliation": { "laboratory": "AWS AI Labs", "institution": "", "location": {} }, "email": "dejiaoz@amazon.com" }, { "first": "Henghui", "middle": [], "last": "Zhu", "suffix": "", "affiliation": { "laboratory": "AWS AI Labs", "institution": "", "location": {} }, "email": "henghui@amazon.com" }, { "first": "Wei", "middle": [], "last": "Xiao", "suffix": "", "affiliation": { "laboratory": "AWS AI Labs", "institution": "", "location": {} }, "email": "weixiaow@amazon.com" }, { "first": "Shang-Wen", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "AWS AI Labs", "institution": "", "location": {} }, "email": "shangwenl@amazon.com" }, { "first": "Xiaokai", "middle": [], "last": "Wei", "suffix": "", "affiliation": {}, "email": "xiaokaiw@amazon.com" }, { "first": "Andrew", "middle": [], "last": "Arnold", "suffix": "", "affiliation": { "laboratory": "AWS AI Labs", "institution": "", "location": {} }, "email": "anarnld@amazon.com" }, { "first": "Xiang", "middle": [], "last": "Ren", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Southern", "location": { "country": "California" } }, "email": "xiangren@usc.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Pretrained language models (PTLMs) are typically learned over a large, static corpus and further fine-tuned for various downstream tasks. However, when deployed in the real world, a PTLM-based model must deal with data distributions that deviate from what the PTLM was initially trained on. In this paper, we study a lifelong language model pretraining challenge where a PTLM is continually updated so as to adapt to emerging data. Over a domain-incremental research paper stream and a chronologically-ordered tweet stream, we incrementally pretrain a PTLM with different continual learning algorithms, and keep track of the downstream task performance (after fine-tuning). We evaluate PTLM's ability to adapt to new corpora while retaining learned knowledge in earlier corpora. Our experiments show distillation-based approaches to be most effective in retaining downstream performance in earlier domains. The algorithms also improve knowledge transfer, allowing models to achieve better downstream performance over the latest data, and improve temporal generalization when distribution gaps exist between training and evaluation because of time. We believe our problem formulation, methods, and analysis will inspire future studies towards continual pretraining of language models.", "pdf_parse": { "paper_id": "2022", "_pdf_hash": "", "abstract": [ { "text": "Pretrained language models (PTLMs) are typically learned over a large, static corpus and further fine-tuned for various downstream tasks. However, when deployed in the real world, a PTLM-based model must deal with data distributions that deviate from what the PTLM was initially trained on. In this paper, we study a lifelong language model pretraining challenge where a PTLM is continually updated so as to adapt to emerging data. 
Over a domain-incremental research paper stream and a chronologically-ordered tweet stream, we incrementally pretrain a PTLM with different continual learning algorithms, and keep track of the downstream task performance (after fine-tuning). We evaluate PTLM's ability to adapt to new corpora while retaining learned knowledge in earlier corpora. Our experiments show distillation-based approaches to be most effective in retaining downstream performance in earlier domains. The algorithms also improve knowledge transfer, allowing models to achieve better downstream performance over the latest data, and improve temporal generalization when distribution gaps exist between training and evaluation because of time. We believe our problem formulation, methods, and analysis will inspire future studies towards continual pretraining of language models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Pretrained language models (PTLMs) have achieved remarkable performance on benchmark datasets for a range of NLP tasks (Liu et al., 2019b; Brown et al., 2020). However, when deployed in the wild, NLP systems must deal with emerging data that have constantly shifting data distributions, different from the text corpora they were initially pretrained on; for example, when new data domains are introduced (upper part of Fig. 1) (Gururangan et al., 2020), or when language use and vocabulary change over time (lower part of Fig. 1) (Lazaridou et al., 2021). [Figure 1: Two data streams created for studying lifelong language model pre-training. We focus on evaluating knowledge retention on the domain-incremental research paper stream, and on adaptation to the latest data and temporal generalization on the chronologically ordered tweet stream.] Fine-tuning from a static and possibly "outdated" PTLM may limit the model performance on downstream tasks, as the PTLM may no longer provide an effective model initialization (Beltagy et al., 2019; Müller et al., 2020). Here we look to understand whether continuously adapting a PTLM to emerging data can yield gains on various downstream tasks, and how to achieve better downstream performance for such lifelong PTLM adaptation.", "cite_spans": [ { "start": 119, "end": 138, "text": "(Liu et al., 2019b;", "ref_id": "BIBREF32" }, { "start": 139, "end": 158, "text": "Brown et al., 2020)", "ref_id": "BIBREF6" }, { "start": 428, "end": 453, "text": "(Gururangan et al., 2020)", "ref_id": "BIBREF18" }, { "start": 538, "end": 562, "text": "(Lazaridou et al., 2021)", "ref_id": null }, { "start": 1036, "end": 1058, "text": "(Beltagy et al., 2019;", "ref_id": "BIBREF5" }, { "start": 1059, "end": 1078, "text": "Müller et al., 2020", "ref_id": "BIBREF38" } ], "ref_spans": [ { "start": 419, "end": 425, "text": "Fig. 1", "ref_id": null }, { "start": 529, "end": 535, "text": "Fig. 1", "ref_id": null }, { "start": 584, "end": 592, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A number of recent works attempt to adapt PTLMs to a new data domain. Gururangan et al. (2020) adapt language models to corpora of different genres and topics and observe performance improvements on domain-specific downstream tasks. Arumae et al. (2020) further show that by regularizing the parameters of PTLMs, downstream task performance on the general domain can be preserved. 
Another line of work focuses on temporal domain shift (Hombaiah et al., 2021), which analyzes the effect of pretraining on up-to-date data on downstream tasks. Röttger and Pierrehumbert (2021) further study vocabulary composition approaches for improving adaptation to up-to-date corpora. However, these works focus on adapting a PTLM to a single new domain, while in practice, corpora from distinct domains and timestamps may emerge sequentially. Whether one can maintain a single, up-to-date PTLM remains an open problem. Related to this, Lazaridou et al. (2021) study the adaptation of PTLMs over temporal data streams, but solely focus on language modeling instead of fine-tuning performance. It is also important to understand multiple aspects of the utility of lifelong PTLM pretraining, such as knowledge retention over all the data seen so far, and to study which methods can improve the utility of PTLMs in such a continual pretraining process.", "cite_spans": [ { "start": 79, "end": 103, "text": "Gururangan et al. (2020)", "ref_id": "BIBREF18" }, { "start": 242, "end": 262, "text": "Arumae et al. (2020)", "ref_id": "BIBREF0" }, { "start": 450, "end": 473, "text": "(Hombaiah et al., 2021)", "ref_id": "BIBREF20" }, { "start": 563, "end": 595, "text": "Röttger and Pierrehumbert (2021)", "ref_id": "BIBREF46" }, { "start": 954, "end": 977, "text": "Lazaridou et al. (2021)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we formulate a Lifelong Language Model Pretraining task to simulate practical scenarios of maintaining and adapting a PTLM over emerging corpora, create a testbed (along with pretraining data streams and downstream tasks) for studying continual pretraining algorithms, and present a systematic evaluation protocol for measuring the progress made on this challenging problem (see Figure 2 for an illustration). We consider two types of text corpus sequences when constructing pretraining data streams, each of which simulates a representative use case and has a slightly different evaluation focus: continuously learning a single model that is applicable to both old and new domains, and improving the model's ability to handle the latest data. Specifically, we construct 1) a domain-incremental text stream that consists of academic papers published in four research fields, and 2) a temporal tweet stream that consists of tweets collected from four different years. By conducting systematic experiments on these two data streams, we look to answer a series of analysis questions: 1) whether continual pretraining retains fine-tuning performance over earlier corpora compared to traditional offline pretraining, 2) whether pretraining improves downstream performance on the latest data, and 3) whether pretraining improves temporal generalization where training and evaluation have distribution gaps because of time.", "cite_spans": [], "ref_spans": [ { "start": 394, "end": 402, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To address the research questions above, we conduct a systematic evaluation of existing continual learning (CL) algorithms, spanning model-expansion-based, memory-based, and distillation-based approaches. Our results show that distillation-based approaches are most effective at knowledge retention in the research paper stream, while simultaneously improving adaptation to the latest data and temporal generalization in the tweet stream. 
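To make the distillation-based family concrete, the sketch below shows one common way such an objective can be instantiated for continual masked language model (MLM) pretraining: while the model trains on the newly arrived corpus, its masked-token predictions are also pulled toward those of a frozen copy of the checkpoint from the previous step. This is our own schematic illustration (assuming HuggingFace-style model outputs with .loss and .logits, and labels of -100 at unmasked positions); the temperature, the weight alpha, and the helper name are hypothetical and not necessarily the exact variants evaluated in this work.

```python
# Schematic sketch of a distillation-based continual pretraining objective
# (our illustration; hyperparameters and helper names are hypothetical).
import copy
import torch
import torch.nn.functional as F

def distillation_mlm_loss(student, teacher, batch, temperature=2.0, alpha=0.5):
    """MLM loss on the new corpus plus a KL term that keeps the student's
    masked-token predictions close to those of the frozen teacher."""
    out_s = student(**batch)              # batch: masked input_ids, attention_mask, labels
    with torch.no_grad():
        out_t = teacher(**batch)
    masked = batch["labels"] != -100      # only the positions that were masked out
    p_t = F.softmax(out_t.logits[masked] / temperature, dim=-1)
    log_p_s = F.log_softmax(out_s.logits[masked] / temperature, dim=-1)
    kd = F.kl_div(log_p_s, p_t, reduction="batchmean") * temperature ** 2
    return out_s.loss + alpha * kd        # alpha trades off adaptation vs. retention

# Inside the training loop for a newly arrived corpus (the teacher is a
# frozen copy of the model as it was before this corpus):
#   teacher = copy.deepcopy(model).eval()
#   for p in teacher.parameters():
#       p.requires_grad_(False)
#   loss = distillation_mlm_loss(model, teacher, batch)
#   loss.backward(); optimizer.step(); optimizer.zero_grad()
```
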
We believe our problem formulation, evaluation setup, methods, and analysis can inspire future work on continual pretraining of language models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Here we present the problem formulation for lifelong pretraining of PTLMs, provide details about the data stream construction process and downstream tasks, and introduce the evaluation protocol.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Formulation", "sec_num": "2" }, { "text": "We consider the scenario where one needs to deploy and/or maintain NLP models over a sequence of T data domains. At each time step t, the model visits an unlabeled text corpus D_t from a domain with a data distribution P(D_t). The data distribution P(D_t) evolves with the time step t, forming a data stream D_1..T = {D_1, D_2, ..., D_T}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lifelong Pretraining of PTLMs", "sec_num": "2.1" }, { "text": "In practice, the data domain shift can refer to a change in the topic of the text (from computer science research papers to biomedical papers) or to the temporal evolution of the text (from past to recent tweets). The task of lifelong pretraining of PTLMs looks to continuously adapt a language model f as the model visits the (unlabeled) text corpus D_t from the data stream D_1..T, in order to provide a good model initialization for fine-tuning on downstream tasks from the same domain. With a slight abuse of notation, we also use D_t to directly refer to a data domain.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lifelong Pretraining of PTLMs", "sec_num": "2.1" }, { "text": "Here, we assume the language model f is updated sequentially over each pretraining corpus D_t, without accessing the full earlier corpora {D_i}_i
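To make the formulation above concrete, the following is a minimal sketch of the vanilla sequential-pretraining setup, our own illustration assuming HuggingFace Transformers and Datasets; the corpus file names, base model, and hyperparameters are placeholders rather than the paper's actual configuration. The model produced at step t-1 is simply pretrained further on D_t with the MLM objective, and a copy of each checkpoint is later fine-tuned on downstream tasks for evaluation.

```python
# Minimal sketch of sequential (lifelong) pretraining over a corpus stream,
# assuming HuggingFace Transformers/Datasets; file names, base model, and
# hyperparameters are illustrative placeholders, not the paper's setup.
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "roberta-base"                      # start from an already-pretrained LM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)

# D_1..T: one unlabeled text file per emerging domain or time period (hypothetical paths).
stream = ["bio_papers.txt", "cs_papers.txt", "tweets_2015.txt", "tweets_2017.txt"]

for t, corpus_path in enumerate(stream, start=1):
    corpus = load_dataset("text", data_files=corpus_path, split="train")
    corpus = corpus.map(
        lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
        batched=True, remove_columns=["text"])

    args = TrainingArguments(output_dir=f"ckpt_t{t}", num_train_epochs=1,
                             per_device_train_batch_size=16, save_strategy="no")
    # Continue MLM pretraining from the checkpoint of step t-1; this vanilla
    # baseline never revisits earlier corpora D_i (i < t).
    Trainer(model=model, args=args, train_dataset=corpus,
            data_collator=collator).train()
    model.save_pretrained(f"ckpt_t{t}")
    # For evaluation, a separate copy of each saved checkpoint is fine-tuned on
    # the downstream tasks; the continually pretrained model itself never sees
    # labeled data.
```
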