{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:40:50.074640Z"
},
"title": "Recovering Lexically and Semantically Reused Texts",
"authors": [
{
"first": "Ansel",
"middle": [],
"last": "Maclaughlin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Khoury College of Computer Science Northeastern University",
"location": {}
},
"email": ""
},
{
"first": "Shaobin",
"middle": [],
"last": "Xu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Khoury College of Computer Science Northeastern University",
"location": {}
},
"email": "[email protected]"
},
{
"first": "David",
"middle": [
"A"
],
"last": "Smith",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Khoury College of Computer Science Northeastern University",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Writers often repurpose material from existing texts when composing new documents. Because most documents have more than one source, we cannot trace these connections using only models of document-level similarity. Instead, this paper considers methods for local text reuse detection (LTRD), detecting localized regions of lexically or semantically similar text embedded in otherwise unrelated material. In extensive experiments, we study the relative performance of four classes of neural and bag-of-words models on three LTRD tasks-detecting plagiarism, modeling journalists' use of press releases, and identifying scientists' citation of earlier papers. We conduct evaluations on three existing datasets and a new, publicly-available citation localization dataset. Our findings shed light on a number of previously-unexplored questions in the study of LTRD, including the importance of incorporating document-level context for predictions, the applicability of of-the-shelf neural models pretrained on \"general\" semantic textual similarity tasks such as paraphrase detection, and the trade-offs between more efficient bag-of-words and feature-based neural models and slower pairwise neural models.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Writers often repurpose material from existing texts when composing new documents. Because most documents have more than one source, we cannot trace these connections using only models of document-level similarity. Instead, this paper considers methods for local text reuse detection (LTRD), detecting localized regions of lexically or semantically similar text embedded in otherwise unrelated material. In extensive experiments, we study the relative performance of four classes of neural and bag-of-words models on three LTRD tasks-detecting plagiarism, modeling journalists' use of press releases, and identifying scientists' citation of earlier papers. We conduct evaluations on three existing datasets and a new, publicly-available citation localization dataset. Our findings shed light on a number of previously-unexplored questions in the study of LTRD, including the importance of incorporating document-level context for predictions, the applicability of of-the-shelf neural models pretrained on \"general\" semantic textual similarity tasks such as paraphrase detection, and the trade-offs between more efficient bag-of-words and feature-based neural models and slower pairwise neural models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "When composing documents in many genresfrom news reports, to scientific papers, to political speeches-authors obtain ideas and inspiration from source documents and present them in the form of direct copies, quotations, summaries, or paraphrases. In the simplest case, e.g. in congressional bills, writers include text from earlier versions of the same document along with new material (Wilkerson et al., 2015) . In news media, journalists often paraphrase or quote speeches, press releases, and interviews (Niculae et al., 2015 ; * Equal contribution. Tan et al., 2016) . In academia, citations of papers usually appear along with summaries of their contributions (Qazvinian and Radev, 2010) . These are instances of lexical and semantic local text reuse, where both source and target documents contain lexically or semantically similar passages, surrounded by text that is unrelated or dissimilar. Often, reused text is presented without explicit links or citations, making it hard to track information flow.",
"cite_spans": [
{
"start": 386,
"end": 410,
"text": "(Wilkerson et al., 2015)",
"ref_id": "BIBREF45"
},
{
"start": 507,
"end": 528,
"text": "(Niculae et al., 2015",
"ref_id": "BIBREF32"
},
{
"start": 553,
"end": 570,
"text": "Tan et al., 2016)",
"ref_id": "BIBREF43"
},
{
"start": 665,
"end": 692,
"text": "(Qazvinian and Radev, 2010)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "While many state-of-the-art (SoTA) NLP architectures have been trained on the closely-related tasks of document-and sentence-pair similarity detection (Reimers and Gurevych, 2019) and ad-hoc retrieval (Dai and Callan, 2019) , prior methods for local text-reuse detection (LTRD) are mostly limited to lexical matching (Lee, 2007; Clough et al., 2002; Leskovec et al., 2009; Wilkerson et al., 2015; Smith et al., 2014) with some dictionary expansion (Moritz et al., 2016) . To our knowledge, only Zhou et al. (2020) has applied neural models to this problem, proposing hierarchical neural models that use a cross-document attention mechanism to model local similarities between two candidate documents.",
"cite_spans": [
{
"start": 151,
"end": 179,
"text": "(Reimers and Gurevych, 2019)",
"ref_id": "BIBREF39"
},
{
"start": 201,
"end": 223,
"text": "(Dai and Callan, 2019)",
"ref_id": "BIBREF15"
},
{
"start": 317,
"end": 328,
"text": "(Lee, 2007;",
"ref_id": "BIBREF22"
},
{
"start": 329,
"end": 349,
"text": "Clough et al., 2002;",
"ref_id": "BIBREF11"
},
{
"start": 350,
"end": 372,
"text": "Leskovec et al., 2009;",
"ref_id": "BIBREF24"
},
{
"start": 373,
"end": 396,
"text": "Wilkerson et al., 2015;",
"ref_id": "BIBREF45"
},
{
"start": 397,
"end": 416,
"text": "Smith et al., 2014)",
"ref_id": "BIBREF42"
},
{
"start": 448,
"end": 469,
"text": "(Moritz et al., 2016)",
"ref_id": "BIBREF31"
},
{
"start": 495,
"end": 513,
"text": "Zhou et al. (2020)",
"ref_id": "BIBREF48"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we conduct a large-scale evaluation of several lexical overlap and SoTA neural models for LTRD. Among the neural models, we benchmark not only the hierarchical neural models proposed by Zhou et al. (2020) , but also study the effectiveness of three classes of models not yet applied to LTRD: 1) BERT-based (Devlin et al., 2019) passage encoders trained on generic paraphrase, semantic textual similarity, and IR data (Reimers and Gurevych, 2019) ; 2) feature-based BERT models with direct sentence-level supervision; and 3) finetuned BERT-based models for sequence-pair tasks.",
"cite_spans": [
{
"start": 201,
"end": 219,
"text": "Zhou et al. (2020)",
"ref_id": "BIBREF48"
},
{
"start": 321,
"end": 342,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF17"
},
{
"start": 432,
"end": 460,
"text": "(Reimers and Gurevych, 2019)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We conduct evaluations on four datasets, including 1) PAN and S2ORC (Zhou et al., 2020) , benchmark LTRD datasets for plagiarism detection and citation localization; 2) Pr2News (MacLaughlin et al., 2020), a dataset of text reuse in news articles labeled with a mix of expert, non-expert, and heuristic annotation; 3) ARC-Sim, a new, publicly available 1 citation localization dataset created using citation links in the ACL ARC (Bird et al., 2008) .",
"cite_spans": [
{
"start": 68,
"end": 87,
"text": "(Zhou et al., 2020)",
"ref_id": "BIBREF48"
},
{
"start": 428,
"end": 447,
"text": "(Bird et al., 2008)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our experiments address a number of previouslyunexplored questions in the study of LTRD, including 1) the impact of training on weakly-supervised data on model accuracy; 2) the effectiveness of SoTA neural models trained on \"general\" semantic similarity data for LTRD tasks; 3) the importance of incorporating document-level context; 4) the effects of domain-adaptive pretraining (Gururangan et al., 2020) on the accuracy of fine-tuned BERT models; and 5) the trade-offs between more efficient lexical overlap and feature-based neural models and slower pairwise neural models.",
"cite_spans": [
{
"start": 380,
"end": 405,
"text": "(Gururangan et al., 2020)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "LTRD methods have been applied in many domains, including tracking short \"memes\" in news and social media (Leskovec et al., 2009) , tracing specific policy language embedded in proposed legislation (Wilkerson et al., 2015; Funk and Mullen, 2018) , studying reuse of scripture in historical and theological writings (Lee, 2007; Moritz et al., 2016) , tracing information propagation in news and social media (Tan et al., 2016; Clough et al., 2002; MacLaughlin et al., 2020) , and detecting plagiarism on the web (Potthast et al., 2013; S\u00e1nchez-P\u00e9rez et al., 2014; Vani and Gupta, 2017) . Most applications, however, use only lexical overlap and alignment methods to detect reuse, sometimes with lemmatization and dictionary curation.",
"cite_spans": [
{
"start": 106,
"end": 129,
"text": "(Leskovec et al., 2009)",
"ref_id": "BIBREF24"
},
{
"start": 198,
"end": 222,
"text": "(Wilkerson et al., 2015;",
"ref_id": "BIBREF45"
},
{
"start": 223,
"end": 245,
"text": "Funk and Mullen, 2018)",
"ref_id": "BIBREF19"
},
{
"start": 315,
"end": 326,
"text": "(Lee, 2007;",
"ref_id": "BIBREF22"
},
{
"start": 327,
"end": 347,
"text": "Moritz et al., 2016)",
"ref_id": "BIBREF31"
},
{
"start": 407,
"end": 425,
"text": "(Tan et al., 2016;",
"ref_id": "BIBREF43"
},
{
"start": 426,
"end": 446,
"text": "Clough et al., 2002;",
"ref_id": "BIBREF11"
},
{
"start": 447,
"end": 472,
"text": "MacLaughlin et al., 2020)",
"ref_id": "BIBREF29"
},
{
"start": 511,
"end": 534,
"text": "(Potthast et al., 2013;",
"ref_id": "BIBREF36"
},
{
"start": 535,
"end": 562,
"text": "S\u00e1nchez-P\u00e9rez et al., 2014;",
"ref_id": "BIBREF40"
},
{
"start": 563,
"end": 584,
"text": "Vani and Gupta, 2017)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Our work builds on the recent efforts of Zhou et al. (2020) , who demonstrate the efficacy of hierarchical neural models in detecting instances of non-literal reuse where authors paraphrase, summarize, and heavily edit source content. However, as discussed in \u00a71, we conduct a much larger set of experiments beyond those of Zhou et al. (2020) . In addition to the hierarchical neural models with document-level supervision proposed by Zhou et al. (2020) , we evaluate four sets of models: lexical overlap models, SoTA neural models trained for general paraphrase detection, hierarchical neural models with sentence-level supervision, and finetuned sequence-pair BERT models. Further, in 1 https://github.com/maclaughlin/ARC-Sim addition to evaluating models on the benchmark LTRD datasets introduced by Zhou et al. (2020), we conduct experiments on two more challenging datasets: ARC-Sim, a new citation localization dataset with hard negative examples, and Pr2News (MacLaughlin et al., 2020), a dataset of text reuse in science news articles with heuristically-labeled training data.",
"cite_spans": [
{
"start": 41,
"end": 59,
"text": "Zhou et al. (2020)",
"ref_id": "BIBREF48"
},
{
"start": 324,
"end": 342,
"text": "Zhou et al. (2020)",
"ref_id": "BIBREF48"
},
{
"start": 435,
"end": 453,
"text": "Zhou et al. (2020)",
"ref_id": "BIBREF48"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Also related to our work is research studying sentence-pair problems, e.g. paraphrase detection (PD) (Dolan and Brockett, 2005) , semantic textual similarity (STS) (Cer et al., 2017) and textual entailment, (Bowman et al., 2015) , and documentranking problems, e.g. ad-hoc retrieval (Croft et al., 2009) . In fact, it is trivial to adapt existing approaches to sentence-pair and document ranking problems to LTRD. As discussed in \u00a73, we cast LTRD as sentence classification and ranking, identifying which sentences in a target text are lexically or semantically reused from some portion of the source. Thus, in order to adapt sentence-pair models to this task, we simply compute scores for all pairs of (source sentence, target sentence), and use some function (e.g. max) to aggregate the scores for each target sentence. Similarly, one can adapt existing ad-hoc retrieval approaches by treating each target sentence as a query and computing a score with the corresponding source. These approaches, however, may suffer from a lack of contextualization and/or efficiency issues. Sentencepair models that encode each source and target sentence separately, while efficient, might miss important contextualizing information in surrounding sentences. Similarly, neural IR models that process each target sentence as a separate query do not contextualize target sentences and also require a computationally-expensive forward pass for each query. We study the importance and impact of these limitations in our work, testing the effectiveness of multiple SoTA BERT-based architectures for sequence-pair similarity and ranking.",
"cite_spans": [
{
"start": 101,
"end": 127,
"text": "(Dolan and Brockett, 2005)",
"ref_id": "BIBREF18"
},
{
"start": 164,
"end": 182,
"text": "(Cer et al., 2017)",
"ref_id": "BIBREF10"
},
{
"start": 207,
"end": 228,
"text": "(Bowman et al., 2015)",
"ref_id": "BIBREF8"
},
{
"start": 283,
"end": 303,
"text": "(Croft et al., 2009)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
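To make the adaptation described above concrete, here is a minimal sketch (ours, not from the paper) of scoring target sentences with an arbitrary sentence-pair scorer and max-aggregating for S2D and D2D; `score_pair` is a hypothetical placeholder for any pairwise model.

```python
from typing import Callable, List

def s2d_scores(source_sents: List[str], target_sents: List[str],
               score_pair: Callable[[str, str], float]) -> List[float]:
    """Score each target sentence by its best-matching source sentence."""
    return [max(score_pair(s, t) for s in source_sents) for t in target_sents]

def d2d_score(source_sents: List[str], target_sents: List[str],
              score_pair: Callable[[str, str], float]) -> float:
    """A document pair is scored by its highest-scoring target sentence."""
    return max(s2d_scores(source_sents, target_sents, score_pair))

# Example with a trivial word-overlap scorer (placeholder for a real model).
overlap = lambda a, b: len(set(a.lower().split()) & set(b.lower().split()))
print(d2d_score(["We propose a new parser."],
                ["Prior work proposed a parser.", "We thank our funders."],
                overlap))
```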
{
"text": "Following Zhou et al. (2020) , we define LTRD as two tasks: document-to-document (D2D) alignment and sentence-to-document (S2D) alignment. In D2D, for a given pair of documents (source document S, target document T), we aim to predict whether T reuses content from S. Thus, each pair has a corresponding binary label of 1 if T reuses content, else 0. Note, this is different than evaluating the similarity of the two documents as a whole, since, in this setting, only a small portion of T is adapted from S, and most of it is possibly unrelated. In S2D, given an (S, T) pair, we aim to predict which specific sentences t i \u2208 T contain reused S content. Thus, each pair has n corresponding labels, one label for each sentence t i \u2208 T. 2",
"cite_spans": [
{
"start": 10,
"end": 28,
"text": "Zhou et al. (2020)",
"ref_id": "BIBREF48"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Definition",
"sec_num": "3"
},
{
"text": "We benchmark four classes of models on this task:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "4"
},
{
"text": "We evaluate two unsupervised metrics:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Overlap Models",
"sec_num": "4.1"
},
{
"text": "\u2022 TF-IDF Cosine Similarity: Simple word overlap metrics are commonly-used baselines to measure the similarity between two passages for PD (Dolan and Brockett, 2005) , STS (Reimers and Gurevych, 2019) , document retrieval (Croft et al., 2009) , and LTRD (Tan et al., 2016; Lee, 2007; Clough et al., 2002) .",
"cite_spans": [
{
"start": 138,
"end": 164,
"text": "(Dolan and Brockett, 2005)",
"ref_id": "BIBREF18"
},
{
"start": 171,
"end": 199,
"text": "(Reimers and Gurevych, 2019)",
"ref_id": "BIBREF39"
},
{
"start": 221,
"end": 241,
"text": "(Croft et al., 2009)",
"ref_id": "BIBREF14"
},
{
"start": 253,
"end": 271,
"text": "(Tan et al., 2016;",
"ref_id": "BIBREF43"
},
{
"start": 272,
"end": 282,
"text": "Lee, 2007;",
"ref_id": "BIBREF22"
},
{
"start": 283,
"end": 303,
"text": "Clough et al., 2002)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Overlap Models",
"sec_num": "4.1"
},
{
"text": "\u2022 Rouge (Lin, 2004) : Since authors of derived documents often paraphrase and summarize source content, we evaluate Rouge, a popular summarization evaluation metric. We evaluate Rouge-{1, 2, L}, selecting the best configuration for each dataset using validation data.",
"cite_spans": [
{
"start": 8,
"end": 19,
"text": "(Lin, 2004)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Overlap Models",
"sec_num": "4.1"
},
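As a usage sketch only (the paper does not name its Rouge implementation), the `rouge-score` package can compute the Rouge-{1, 2, L} variants between a source passage and a target sentence; the variant would then be chosen on validation data.

```python
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
source_passage = "The company announced a breakthrough battery with double the capacity."
target_sentence = "Researchers announced a battery that doubles capacity."

# Each entry is a Score(precision, recall, fmeasure) named tuple; here the source
# passage plays the role of the reference text.
scores = scorer.score(source_passage, target_sentence)
for name, s in scores.items():
    print(name, round(s.fmeasure, 3))
```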
{
"text": "We compute two versions of each metric: singlepair (sp) and all-pairs (ap). In sp, for a given document pair (S, T), we compute a score for each sentence t i \u2208 T by computing its similarity to the entire S. In ap, we compute a score for each sentence t i \u2208 T by computing its similarity to each sentence s i \u2208 S, then selecting the maximum score over all s i . These scores are then thresholded to make binary predictions. For the D2D task, we predict T as positive if it contains at least one positively predicted sentence. For the S2D task, we evaluate the predicted score for each t i \u2208 T.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Overlap Models",
"sec_num": "4.1"
},
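For illustration (not the authors' code), the following sketch computes TF-IDF cosine scores for one (S, T) pair in both the sp and ap settings and derives S2D/D2D predictions; the vectorizer settings and the 0.3 threshold are assumptions.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

source_sents = ["The drug reduced symptoms in mice.", "Trials begin next year."]
target_sents = ["A new study says the drug reduced symptoms in mice.",
                "Unrelated sports news follows."]

# Fit the vocabulary on all text from the pair (a simplification).
vec = TfidfVectorizer().fit(source_sents + target_sents)
S_doc = vec.transform([" ".join(source_sents)])   # whole source document
S_sent = vec.transform(source_sents)              # individual source sentences
T_sent = vec.transform(target_sents)

# sp: each target sentence vs. the entire source document.
sp_scores = cosine_similarity(T_sent, S_doc).ravel()
# ap: each target sentence vs. each source sentence, keeping the max.
ap_scores = cosine_similarity(T_sent, S_sent).max(axis=1)

threshold = 0.3                                   # tuned on validation data in practice
s2d_pred = (ap_scores >= threshold).astype(int)   # per-sentence (S2D) predictions
d2d_pred = int(s2d_pred.any())                    # positive if any sentence is positive
print(sp_scores, ap_scores, s2d_pred, d2d_pred)
```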
{
"text": "We evaluate Sentence-BERT (SBERT) (Reimers and Gurevych, 2019) , a SoTA pretrained passage encoder for semantic-relatedness tasks. SBERT models are trained by 1) adding pooling (e.g. mean pooling) to the output of BERT; 2) training on pairs or triplets of passages to learn semantically meaningful passage representations; 3) at test time, computing the similarity between two passages as the cosine similarity between their pooled representations. We evaluate three SBERTs trained for different tasks:",
"cite_spans": [
{
"start": 34,
"end": 62,
"text": "(Reimers and Gurevych, 2019)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pretrained Sentence-BERT Encoders",
"sec_num": "4.2"
},
{
"text": "\u2022 Semantic Textual Similarity (STS): (Liu et al., 2019) trained on SNLI (Bowman et al., 2015) and MultiNLI (Williams et al., 2018) then fine-tuned on the STS-B (Cer et al., 2017) train set.",
"cite_spans": [
{
"start": 37,
"end": 55,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF26"
},
{
"start": 107,
"end": 130,
"text": "(Williams et al., 2018)",
"ref_id": "BIBREF46"
},
{
"start": 160,
"end": 178,
"text": "(Cer et al., 2017)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pretrained Sentence-BERT Encoders",
"sec_num": "4.2"
},
{
"text": "\u2022 Paraphrase Detection (PD): distilled Roberta BASE (Sanh et al., 2019) fine-tuned on a large-scale paraphrase detection corpus.",
"cite_spans": [
{
"start": 52,
"end": 71,
"text": "(Sanh et al., 2019)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pretrained Sentence-BERT Encoders",
"sec_num": "4.2"
},
{
"text": "\u2022 Information Retrieval (IR): distilled Roberta BASE (Sanh et al., 2019) fine-tuned on MS MARCO (Campos et al., 2016).",
"cite_spans": [
{
"start": 53,
"end": 72,
"text": "(Sanh et al., 2019)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pretrained Sentence-BERT Encoders",
"sec_num": "4.2"
},
{
"text": "Note, these pretrained SBERT models are not trained for LTRD. Instead, they are trained on largescale datasets for other related tasks (PD, STS, IR). These experiments thus evaluate how well off-theshelf tools generalize to a new task and domain. Just as the lexical models, we evaluate in sp and ap settings. Following Reimers and Gurevych (2019), we embed each source document, source sentence, and target sentence separately, then compute cosine similarity for each pair.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pretrained Sentence-BERT Encoders",
"sec_num": "4.2"
},
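A minimal sketch of this setup with the sentence-transformers library; the checkpoint name is an assumption (any PD-, STS-, or IR-trained SBERT model could be substituted), and cosine similarities are computed for both the sp and ap settings.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Hypothetical checkpoint; substitute the PD-, STS-, or IR-trained SBERT of interest.
model = SentenceTransformer("paraphrase-distilroberta-base-v1")

source_sents = ["We present a new dataset for citation localization.",
                "Results improve over baselines."]
target_sents = ["They release a citation localization dataset.",
                "The weather was pleasant."]

def normalize(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

S_doc = normalize(model.encode([" ".join(source_sents)]))  # one embedding for the whole source
S_sent = normalize(model.encode(source_sents))
T_sent = normalize(model.encode(target_sents))

sp_scores = (T_sent @ S_doc.T).ravel()        # target sentence vs. whole source document
ap_scores = (T_sent @ S_sent.T).max(axis=1)   # max over individual source sentences
print(sp_scores, ap_scores)
```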
{
"text": "We also benchmark three HNM. Similar to SBERT ( \u00a74.2), HNM operate on frozen embeddings (Peters et al., 2019) which are computationally efficient since they only need to be calculated once (i.e. only one BERT forward pass for each source or target sentence). Unlike SBERT, however, HNM also have task-specific model architectures that learn to contextualize and align sentences.",
"cite_spans": [
{
"start": 88,
"end": 109,
"text": "(Peters et al., 2019)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Neural Models (HNM)",
"sec_num": "4.3"
},
{
"text": "BERT-HAN (shallow) (Zhou et al., 2020) : this model mean pools frozen BERT embeddings to generate sentence representations, then uses a hierarchical attention network (HAN) (Yang et al., 2016) to add document-level context and a crossdocument attention (CDA) mechanism to align passages across documents. See Zhou et al. (2020) .",
"cite_spans": [
{
"start": 19,
"end": 38,
"text": "(Zhou et al., 2020)",
"ref_id": "BIBREF48"
},
{
"start": 173,
"end": 192,
"text": "(Yang et al., 2016)",
"ref_id": "BIBREF47"
},
{
"start": 309,
"end": 327,
"text": "Zhou et al. (2020)",
"ref_id": "BIBREF48"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Neural Models (HNM)",
"sec_num": "4.3"
},
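The frozen, mean-pooled sentence embeddings that these feature-based models consume can be precomputed once per sentence; below is a sketch, assuming the Hugging Face transformers API and a BERT BASE checkpoint (the exact variant is an assumption).

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased").eval()  # frozen: no fine-tuning

@torch.no_grad()
def sentence_embeddings(sentences):
    batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    hidden = encoder(**batch).last_hidden_state          # (batch, seq_len, hidden)
    mask = batch["attention_mask"].unsqueeze(-1).float() # ignore padding when pooling
    return (hidden * mask).sum(1) / mask.sum(1)          # mean pooling over tokens

embs = sentence_embeddings(["The model aligns sentences.",
                            "It uses cross-document attention."])
print(embs.shape)  # torch.Size([2, 768])
```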
{
"text": "At training time, BERT-HAN only calculates loss at the document-pair level, i.e. D2D classification. There is no sentence-level supervision (S2D). At inference, two sets of predictions are output: 1) the D2D prediction, as during training; 2) the intermediate hidden representations of the sentences t i \u2208 T are extracted, then ranked by their similarity to the final hidden representation of the entire source document S.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Neural Models (HNM)",
"sec_num": "4.3"
},
{
"text": "GRU-HAN (deep) (Zhou et al., 2020) : this model mirrors BERT-HAN, except with GloVe (Pennington et al., 2014) embeddings and a HAN with CDA at both the word and sentence level. It follows the same training and testing regime.",
"cite_spans": [
{
"start": 15,
"end": 34,
"text": "(Zhou et al., 2020)",
"ref_id": "BIBREF48"
},
{
"start": 84,
"end": 109,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Neural Models (HNM)",
"sec_num": "4.3"
},
{
"text": "BCL-CDA: We adapt the BCL model from MacLaughlin et al. (2020) (originally designed for the task of intrinsic source attribution on Pr2News) for LTRD by adding a final CDA layer (Zhou et al., 2020) . After generating contextualized representations of each source and target sentence with BCL, a CDA layer computes an attention-weighted representation of each target sentence, weighted by its similarity to the source sentences. The CDAweighted and original target sentence representations are then concatenated and fed into a final layer for prediction.",
"cite_spans": [
{
"start": 178,
"end": 197,
"text": "(Zhou et al., 2020)",
"ref_id": "BIBREF48"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Neural Models (HNM)",
"sec_num": "4.3"
},
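As an illustrative sketch (our reading of the description above, not the released implementation), a sentence-level CDA layer with Luong-style "general" attention and concatenation can be written as:

```python
import torch
import torch.nn as nn

class CrossDocAttention(nn.Module):
    """Attend from each target sentence over all source sentences (general attention)."""
    def __init__(self, dim: int):
        super().__init__()
        self.W = nn.Linear(dim, dim, bias=False)   # bilinear ("general") scoring matrix

    def forward(self, src, tgt):
        # src: (n_src, dim) contextualized source sentences; tgt: (n_tgt, dim) target sentences
        scores = tgt @ self.W(src).T                # (n_tgt, n_src) alignment scores
        attn = torch.softmax(scores, dim=-1)        # attention over source sentences
        tgt_tilde = attn @ src                      # attention-weighted source summary per target
        return torch.cat([tgt, tgt_tilde], dim=-1)  # concatenated input to the prediction layer

layer = CrossDocAttention(dim=256)
out = layer(torch.randn(20, 256), torch.randn(29, 256))
print(out.shape)  # torch.Size([29, 512])
```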
{
"text": "At training time, BCL-CDA is supervised with target sentence labels. At testing time, it makes target sentence-level predictions (S2D) just as in training. We make a D2D prediction for each (S, T) pair by taking the max over its sentence-level predictions. See Appendix C for full model details.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Neural Models (HNM)",
"sec_num": "4.3"
},
{
"text": "Finally, we evaluate fine-tuned BERT-based models for sequence pair classification. Unlike the other three classes of models described above, features for these fine-tuned models cannot be precomputed. Instead, at test time, a separate forward pass is required for each (S, T) or (S, t i ) pair. Thus, though these models might achieve better performance than feature-based alternatives (Peters et al., 2019) , it may be unfeasible to test them on large collections where many pairwise computations would be required.",
"cite_spans": [
{
"start": 387,
"end": 408,
"text": "(Peters et al., 2019)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-tuned BERT-based Models",
"sec_num": "4.4"
},
{
"text": "Sequence Pair Models:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-tuned BERT-based Models",
"sec_num": "4.4"
},
{
"text": "We fine-tune Roberta Base (Liu et al., 2019) using the standard setup for sequence-pair tasks such as PD, STS, and IR (Devlin et al., 2019; Akkalyoncu Yilmaz et al., 2019) . We create an input example for each (source document S, target sentence t i ) pair:",
"cite_spans": [
{
"start": 26,
"end": 44,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF26"
},
{
"start": 118,
"end": 139,
"text": "(Devlin et al., 2019;",
"ref_id": "BIBREF17"
},
{
"start": 140,
"end": 171,
"text": "Akkalyoncu Yilmaz et al., 2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-tuned BERT-based Models",
"sec_num": "4.4"
},
{
"text": "[CLS] < s 1 , ..., s n > [SEP] t i [SEP]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-tuned BERT-based Models",
"sec_num": "4.4"
},
{
"text": "where < s 1 , ..., s n > contains the source document, split into sentences, with each sentence separated by a special [SSS] token (\"source sentence start\") and t i is a single target sentence. We feed the [CLS] representation into a final layer 1: Dataset statistics: the total number of (source S, target T) example pairs, the average # of sentences and words in each S and T, and the average # of positively labeled T sentences in each positive (S, T) pair. For Pr2News, we report the average # of T sentences with label > 0 in the human-labeled val and test sets. to make a prediction for t i . Thus, making a prediction for an entire (S, T) document pair requires n forward passes, one for each t i \u2208 T. Domain-adapted Sequence Pair Models: As shown by Gururangan et al. 2020, further pretraining BERT-based models on in-domain text improves performance on a variety of tasks. We explore the effects of DAPT for LTRD, testing Roberta models domain-adapted on either biomedical publications, computer science publications or news data. We fine-tune these models as above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-tuned BERT-based Models",
"sec_num": "4.4"
},
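A sketch of constructing one (S, t_i) input in this format, assuming the Hugging Face transformers tokenizer; the handling of the added [SSS] token (and the required embedding resize) reflects common practice rather than the authors' exact code.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
# [SSS] marks the start of each source sentence; the downstream model's embedding
# matrix must be resized after this (model.resize_token_embeddings(len(tokenizer))).
tokenizer.add_special_tokens({"additional_special_tokens": ["[SSS]"]})

source_sents = ["We introduce a benchmark.", "It covers three domains."]
target_sentence = "Prior work introduced a benchmark covering three domains."

# Source document flattened into "[SSS] s_1 [SSS] s_2 ...", paired with one target sentence.
source_text = " ".join("[SSS] " + s for s in source_sents)
enc = tokenizer(source_text, target_sentence, truncation=True,
                max_length=512, return_tensors="pt")
print(tokenizer.decode(enc["input_ids"][0]))
```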
{
"text": "Sequential Sequence Pair Models: Since the fine-tuned models discussed above operate on a single t i at a time, they cannot leverage information from the surrounding target context. Following the success of BERT-based models for sequential sentence classification (Cohan et al., 2019) , we construct new input examples containing the full source and target documents, split into sentences:",
"cite_spans": [
{
"start": 264,
"end": 284,
"text": "(Cohan et al., 2019)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-tuned BERT-based Models",
"sec_num": "4.4"
},
{
"text": "[CLS]< s 1 , ..., s n >[SEP]< t 1 , ..., t n >[SEP] Again, < s 1 , .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-tuned BERT-based Models",
"sec_num": "4.4"
},
{
"text": ".., s n > contains the source sentences. Similarly, < t 1 , ..., t n > contains the target sentences, with each separated by a special [TSS] token (\"target sentence start\"). We feed the final [TSS] representations into a multi-layer feedforward network to make a prediction for each target sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-tuned BERT-based Models",
"sec_num": "4.4"
},
{
"text": "Each pair is labeled with all corresponding target sentence labels. Since many pairs exceed Roberta's 512 Wordpiece length limit, we use Longformer Base , a Robertabased model with an adapted attention pattern to handle up to 4,096 tokens. We put global attention on the [SSS] and [TSS] tokens to allow the model to capture cross-document sentence similarity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-tuned BERT-based Models",
"sec_num": "4.4"
},
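A sketch of this setup with the Hugging Face Longformer implementation; the checkpoint name and the details of building the global attention mask over [SSS]/[TSS] positions are assumptions consistent with the description above.

```python
import torch
from transformers import LongformerModel, LongformerTokenizerFast

tokenizer = LongformerTokenizerFast.from_pretrained("allenai/longformer-base-4096")
tokenizer.add_special_tokens({"additional_special_tokens": ["[SSS]", "[TSS]"]})
model = LongformerModel.from_pretrained("allenai/longformer-base-4096")
model.resize_token_embeddings(len(tokenizer))

source = "[SSS] We release a corpus. [SSS] It has citation links."
target = "[TSS] They built a corpus with citation links. [TSS] We also run baselines."
enc = tokenizer(source, target, truncation=True, max_length=4096, return_tensors="pt")

# Global attention on the [SSS]/[TSS] markers (plus the leading <s>) so sentence
# markers can attend across the whole pair; other tokens use the sliding window.
sss_id, tss_id = tokenizer.convert_tokens_to_ids(["[SSS]", "[TSS]"])
global_mask = ((enc["input_ids"] == sss_id) | (enc["input_ids"] == tss_id)).long()
global_mask[:, 0] = 1

out = model(input_ids=enc["input_ids"], attention_mask=enc["attention_mask"],
            global_attention_mask=global_mask)
tss_states = out.last_hidden_state[enc["input_ids"] == tss_id]  # one vector per target sentence
print(tss_states.shape)
```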
{
"text": "We benchmark the proposed models on four different datasets (Table 1) . See Appendix A for further dataset stastics and preprocessing details. (Zhou et al., 2020) PAN contains pairs of (S, T) web documents where T has potentially plagiarized S. Positive pairs contain synthetic plagiarism, generated by methods such as back-translation (Potthast et al., 2013) . Negative examples are created by replacing S with another, unplagiarized source text,S, sampled from the corpus. D2D labels are binary: plagiarized or not. The S2D labels for t i \u2208 T are 1 if t i plagiarizes S, else 0 (labels in negative pairs are 0).",
"cite_spans": [
{
"start": 143,
"end": 162,
"text": "(Zhou et al., 2020)",
"ref_id": "BIBREF48"
},
{
"start": 336,
"end": 359,
"text": "(Potthast et al., 2013)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [
{
"start": 60,
"end": 69,
"text": "(Table 1)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Datasets",
"sec_num": "5"
},
{
"text": "Pr2News contains pairs of (press release S, science news article T), where each T has reused content from S. There are three aspects of this dataset which are unlike the others we study: 1) All (S, T) pairs are positive and contain reuse. Thus, we only evaluate the S2D task. 2) While the val and test sets are human-annotated, the (S, T) pairs in the training set are labeled using a heuristic (TF-IDF cosine similarity). Though there has been some success training neural models on scores generated by word-overlap heuristics for the problems of document retrieval (Dehghani et al., 2017) and source attribution (MacLaughlin et al., 2020), applications of weakly-supervised models have not yet been studied on human-labeled LTRD test sets. 3) Target sentences, t i \u2208 T, in the val and test sets are labeled on a 0-3 ordinal scale, ranging from no reuse (0) to near or exact duplication (3). (Zhou et al., 2020) S2ORC is a citation localization dataset, containing (abstract S, paper section T) pairs. Citation localization consists of identifying which t i \u2208 T, if any, cite the source. All citation marks are removed from the texts, so models can only make predictions by comparing the language of S and T, not just simply identify citation marks. Positive examples are created by sampling scientific papers from the broader S2ORC corpus , finding sections in those papers that contain citation(s) to another paper in the corpus, and pairing together the (cited source abstract S, citing section T). Negative pairs are created by pairing T with S, the abstract of a paper it does not cite. The D2D labels are 0 for negative pairs, 1 for positive. The S2D labels for t i \u2208 T are 1 if t i contains a citation of S, else 0. S2D labels for negative pairs are all 0.",
"cite_spans": [
{
"start": 567,
"end": 590,
"text": "(Dehghani et al., 2017)",
"ref_id": "BIBREF16"
},
{
"start": 893,
"end": 912,
"text": "(Zhou et al., 2020)",
"ref_id": "BIBREF48"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pr2News (MacLaughlin et al., 2020)",
"sec_num": "5.2"
},
{
"text": "The design of this dataset follows the assumption that the citing sentence(s) in T often paraphrase or summarize some portion of the cited paper, which is, in turn, summarized by its abstract S. This assumption, however, may be incorrect if the citing sentence is a poor summary of the cited paper (Abu-Jbara and Radev, 2012) or it refers to content in the cited paper which is not included in the abstract. Nevertheless, this assumption allows for easy creation of large-scale, real-world LTRD datasets. This is in contrast to Pr2News, which is substantially smaller due its reliance on humanannotated val and test labels, and PAN, which uses automatic methods to generate synthetic examples. We discuss the trade-offs of using citation marks to generate LTRD datasets in \u00a75.4.",
"cite_spans": [
{
"start": 298,
"end": 325,
"text": "(Abu-Jbara and Radev, 2012)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "S2ORC",
"sec_num": "5.3"
},
{
"text": "Motivated by the design of S2ORC, we propose a new citation localization dataset 3 built on the ACL Anthology Conference Corpus (ARC) (Bird et al., 2008) . Just as S2ORC, we construct our dataset using citations links between papers. Thus, we first break up each ARC paper by section, then use ParsCit (Councill et al., 2008) to find all sections that cite another paper in ARC. Positive examples are pairs (abstract S, paper section T) where S is cited by at least one t i \u2208 T. Using this method we generate 61,131 positive (S, T) pairs. Most (88%) T contain only one positive sentence.",
"cite_spans": [
{
"start": 134,
"end": 153,
"text": "(Bird et al., 2008)",
"ref_id": "BIBREF5"
},
{
"start": 302,
"end": 325,
"text": "(Councill et al., 2008)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "ARC-Sim",
"sec_num": "5.4"
},
{
"text": "To create negative examples, we pair each S from the positive samples with a new section,T, that does not cite it. Importantly,T is sampled from the same target paper as the original T. This generates 44,250 negative pairs. 4 We argue that these negative samples method will be both more difficult and realistic than those in S2ORC. In S2ORC, negatives are generated by sampling a new sourc\u1ebd S to pair with T. However, due to the large scale of the corpus,S and T are often completely unrelated (e.g. Bio vs. CS). These examples, therefore, are trivial and can be easily classified using simple lexical overlap. In ARC-Sim, however, negatives are generated by sampling a new sectionT from the same paper as T. We hypothesize that differentiating between these positive and negative examples will 1) be more difficult sinceT is likely still topically related to S and may contain some spurious lexical or semantic overlap; 2) be more indicative of real-world performance, since real users may need to identify which specific sections in a full target paper reuse content from the source. Further preprocessing and dataset split information is detailed in Appendix A. We use the same labeling scheme as S2ORC.",
"cite_spans": [
{
"start": 224,
"end": 225,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "ARC-Sim",
"sec_num": "5.4"
},
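A toy sketch of the pairing logic described above, using hypothetical per-section citation records; it emits positive (cited abstract, citing section) pairs and same-paper negatives.

```python
import random

# Simplified records: each citing paper is a list of sections, where each section
# carries the set of paper ids it cites (hypothetical toy data).
citing_paper = [
    {"section": "Related Work", "text": "...", "cites": {"P1", "P2"}},
    {"section": "Method",       "text": "...", "cites": set()},
    {"section": "Experiments",  "text": "...", "cites": {"P2"}},
]
abstracts = {"P1": "Abstract of P1 ...", "P2": "Abstract of P2 ..."}

pairs = []
for section in citing_paper:
    for cited_id in section["cites"]:
        # Positive: (cited abstract, citing section).
        pairs.append((abstracts[cited_id], section, 1))
        # Negative: a section from the SAME paper that does not cite this source.
        candidates = [s for s in citing_paper if cited_id not in s["cites"]]
        if candidates:
            pairs.append((abstracts[cited_id], random.choice(candidates), 0))

print(len(pairs))
```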
{
"text": "With dataset creation complete, we sample a set of 50 positive pairs from the val set to analyze in depth. Three expert annotators (authors of this paper) perform the LTRD task, predicting which t i \u2208 T reuse content from S. Five pairs are marked by all annotators (Fleiss' Kappa: 0.83). The remaining 45 are split into 15 per annotator. Overall, we find that annotators mark more sentences as reused (avg. 1.6 sents / target) than the true citation labels (1.3 / target). This is reasonable since T often only cites S once, even if it discusses S in multiple sentences (Qazvinian and Radev, 2010) . These false negatives are one disadvantage of using citation marks as supervision. Further, we find that annotators and ground truth often, but not always, agree -annotators identify at least one true citing sentence in 72% of pairs. This difference is mostly due to 1) citing sentences that discuss source content not described in the source abstract; 2) OCR errors that can make text hard to read. On the whole, we find that ACL-Sim is a useful LTRD dataset, but there are clear avenues for improvement, such as manually annotating reused sentences without citation marks and improving OCR.",
"cite_spans": [
{
"start": 570,
"end": 597,
"text": "(Qazvinian and Radev, 2010)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "ARC-Sim",
"sec_num": "5.4"
},
{
"text": "D2D Metrics: We evaluate the D2D task as (S, T) pair classification using F1 score. A positive label indicates that T reuses content from S. A negative label indicates no text reuse. There is no D2D task for Pr2News since all examples are positive.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Settings & Metrics",
"sec_num": "6"
},
{
"text": "S2D Metrics: We evaluate S2D in two settings: corpus level (i.e. evaluating all target sentences from all pairs at once), and document level (i.e. evaluating the sentences in each target document w.r.t each other, then averaging scores across documents). The metrics for each setting depend on the dataset. At the corpus level, we evaluate binary-label datasets (PAN, S2ORC, ARC-Sim) with sentence-level F1 and ordinal-label datasets (Pr2News: 0-3 scale), with spearman's correlation (\u03c1) and NDCG@N (where N is the number of target sentences in the test set). At the document level, we evaluate binary-label datasets with mean average precision (MAP) and top-k accuracy (Acc@k), defined as the proportion of test examples where a positively-labeled sentence in T is ranked in the top k by the model. We evaluate ordinal-label datasets with NDCG@{1,3,5}. Note, in order for these document-level metrics to be meaningful, T must contain at least one positive sentence. Thus, our document-level evaluations are only calculated on the positive (S, T) pairs in each dataset. 5 Since Pr2News only contains positive examples, we use the full test set for all evaluations.",
"cite_spans": [
{
"start": 1070,
"end": 1071,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Settings & Metrics",
"sec_num": "6"
},
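For concreteness, a small sketch (ours) of the document-level ranking metrics, computing average precision and Acc@k from per-sentence scores for a single positive (S, T) pair:

```python
import numpy as np

def average_precision(scores, labels):
    """AP for one target document: rank sentences by score, average precision at each hit."""
    order = np.argsort(scores)[::-1]
    hits, precisions = 0, []
    for rank, idx in enumerate(order, start=1):
        if labels[idx] == 1:
            hits += 1
            precisions.append(hits / rank)
    return float(np.mean(precisions)) if precisions else 0.0

def acc_at_k(scores, labels, k):
    """1 if any positively-labeled sentence is ranked in the top k, else 0."""
    top_k = np.argsort(scores)[::-1][:k]
    return int(any(labels[i] == 1 for i in top_k))

# One positive (S, T) pair: per-sentence model scores and gold S2D labels.
scores = np.array([0.9, 0.2, 0.7, 0.1])
labels = np.array([0, 0, 1, 0])
print(average_precision(scores, labels), acc_at_k(scores, labels, 1), acc_at_k(scores, labels, 3))
```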
{
"text": "BERT-HAN & GRU-HAN: Since both HAN models are trained on document-level, not sentencelevel, labels, we cannot train them on Pr2News, where all document-level labels are positive. Thus, we skip evaluating the HAN models on this dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Settings & Metrics",
"sec_num": "6"
},
{
"text": "Domain-adapted RoBERTa Models: We evaluate three DAPT models: 1) Biomed-DAPT for S2ORC and Pr2News since they contain biomedical texts, 2) News-DAPT for Pr2News since the target documents are news articles, 3) CS-DAPT model for S2ORC and ARC-Sim since they contain CS papers. 6 We do not apply DAPT to PAN since no models are adapted to a similar domain.",
"cite_spans": [
{
"start": 276,
"end": 277,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Settings & Metrics",
"sec_num": "6"
},
{
"text": "As seen in Tables 2 & 3, BERT-based models finetuned on LTRD data perform the best in general, outperforming lexical overlap, SBERT, and HNM. Overall, models achieve their best performances on PAN. We suspect that this is because many positive (S, T) pairs are easy, containing many plagiarized passages with high lexical overlap, and since many negative (S, T) pairs are topically unrelated and share little lexical or semantic overlap. On the other end of the spectrum is ARC-Sim, where models score relatively poorly. We hypothesize that this is because most T only contain one citing sentence and since, as discussed in \u00a75.4, we focus on selecting hard negative target texts,T, sampled from the same document as the original T. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results & Discussion",
"sec_num": "7"
},
{
"text": "Model D2D-F1 S2D-F1 MAP Acc@1 Acc@3 Acc@5 D2D-F1 S2D-F1 MAP Acc@1 Acc@3 Acc@5 ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PAN S2ORC Setting",
"sec_num": null
},
{
"text": "In general, the supervised BERT-based models outperform the unsupervised lexical overlap baselines. The exception to this finding is Pr2News, where the lexical overlap baselines Rouge ap and Rouge sp have the best corpus-level and document-level S2D scores, respectively. This result is perhaps not unexpected, since, unlike other datasets, the labeling methods of Pr2News differ substantially between training (heuristic generated by TFIDF ap scores), validation (non-expert-labeled) and test (expert-labeled). However, our results still contrast Dehghani et al. (2017) , who, working on a document ranking task, find that weakly-supervised neural models consistently outperform the unsupervised methods used to label their training data. We hypothesize that our negative finding might be due, in part, to the small scale of Pr2News and our reliance on only a single heuristic as the supervision signal source. To address this, future work could explore applications on larger weakly-supervised LTRD datasets, e.g. closer in scale to the 50M document collection of Dehghani et al. (2017) , and improving the weak-supervision signal to better reflect human judgements, e.g. through combination of multiple heuristics (Boecking et al., 2021).",
"cite_spans": [
{
"start": 548,
"end": 570,
"text": "Dehghani et al. (2017)",
"ref_id": "BIBREF16"
},
{
"start": 1066,
"end": 1088,
"text": "Dehghani et al. (2017)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Impact of Weak Supervision",
"sec_num": "7.1"
},
{
"text": "Next, we take a closer look at the performances of SBERT (Reimers and Gurevych, 2019) . Note, these off-the-shelf models are trained on the related tasks of either PD, STS, or IR, not on our LTRD datasets. Though PD, STS, and IR receive substantially more attention in the NLP and IR literature, prior research has not yet explored the generalizability of models trained on these tasks to LTRD. We focus in particular on SBERT-PD, since Reimers and Gurevych (2019) recommend it for various applications and claim that it achieves strong results on various similarity and retrieval tasks. Examining our results, however, we find the opposite -SBERT performs worse in general than the lexical overlap baselines, and SBERT-PD performs no better than SBERT-IR (though both better than SBERT-STS). We suspect that the SBERT models would perform better if they were finetuned on in-domain LTRD data. However, since we aimed to evaluate the effectiveness of an offthe-shelf tool, we did not test this hypothesis.",
"cite_spans": [
{
"start": 57,
"end": 85,
"text": "(Reimers and Gurevych, 2019)",
"ref_id": "BIBREF39"
},
{
"start": 437,
"end": 464,
"text": "Reimers and Gurevych (2019)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Effectiveness of Off-the-shelf Tools",
"sec_num": "7.2"
},
{
"text": "To examine the importance of incorporating document-level context for LTRD, we compare the results of Roberta and Longformer. 7 As noted in \u00a74, input to both models follows the standard BERT sequence-pair setup (Devlin et al., 2019) . However, Roberta operates on pairs of source documents and single target sentences (S, t i ), while Longformer operates on full document pairs (S, T), making predictions for all target sentences simultaneously.",
"cite_spans": [
{
"start": 126,
"end": 127,
"text": "7",
"ref_id": null
},
{
"start": 211,
"end": 232,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Importance of Document-level Context",
"sec_num": "7.3"
},
{
"text": "From Tables 2 & 3 , we see that modeling target document context does not consistently improve performance. While Longformer outperforms Roberta on the D2D and corpus-level S2D tasks on most datasets, Roberta consistently scores higher on document-level S2D. To investigate the discrepancy between Longformer's strong corpuslevel S2D performance and its relatively weaker document-level S2D scores, we examine S2ORC 7 Longformer is initialized from RobertaBase, but has additional parameters and is further pretrained on a long-document corpus. Thus, though we cannot disentangle these effects from the benefits of incorporating document-level context, we believe our experiments provide a relatively fair comparison between two SoTA models for short vs. long input sequences. ",
"cite_spans": [
{
"start": 416,
"end": 417,
"text": "7",
"ref_id": null
}
],
"ref_spans": [
{
"start": 5,
"end": 17,
"text": "Tables 2 & 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Importance of Document-level Context",
"sec_num": "7.3"
},
{
"text": "Pr2News Setting Model D2D-F1 S2D-F1 MAP Acc@1 Acc@3 Acc@5 \u03c1 NDCG@N NDCG@1 NDCG@3 NDCG@5 and ARC-Sim. At the corpus-level, Roberta mostly makes false positives (FP) errors, while Longformer makes roughly equal FP and FN errors (and fewer errors overall). For both models, most of these FPs occur in positive (S, T) pairs, i.e. pairs where at least one t i cites S. As discussed in \u00a75, these errors are reasonable, since T often only cites S once, even if it discusses S in multiple sentences (Qazvinian and Radev, 2010) . Roberta's more-frequent FP errors, however, do not affect its document-level scores as much. Since, at the document-level, we evaluate how well models rank the t i in each T w.r.t each other, models perform well if they score positive sentences higher than negatives (no reuse). Indeed, though Roberta predicts high scores for many negatives, it does better than Longformer at scoring positives higher, leading to better ranking performance. Next, we first perform error analysis on PAN, the only dataset where Roberta outperforms Longformer across all metrics. We find that Roberta makes few D2D errors, of which most (80%) are FPs. Longformer, on the other hand, not only makes substantially more errors overall, but splits them roughly equally between FPs and FNs. These FNs are especially surprising since many positive examples in PAN have high lexical overlap. On the other hand, for the corpus-level S2D task, we find that both models have similar numbers of TPs and FNs, but that Longformer generates an order of magnitude more FPs, i.e. predicting that negative target sentences contain reuse.",
"cite_spans": [
{
"start": 491,
"end": 518,
"text": "(Qazvinian and Radev, 2010)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "ARC-Sim",
"sec_num": null
},
{
"text": "We next examine the benefits of DAPT. Gururangan et al. (2020) find that further pretraining Roberta on text from a new domain improves downstream performance, provided that this new domain is similar to the downstream task. To examine whether this finding holds for LTRD, we conduct DAPT evaluations on three datasets -S2ORC, ARC-Sim and Pr2News. Unlike Gururangan et al. 2020, however, we find mixed results. On ARC-Sim and Pr2News, standard Roberta models outperform the corresponding DAPT models on most metrics. The ARC-Sim findings are especially surprising, since its domain (NLP papers) is substantially different from Roberta's standard pretraining data (books, news, web documents) and since Gururangan et al. (2020) show strong performance gains from DAPT on a classification dataset also based on ACL-ARC. Moving on to S2ORC, our findings are reversed, with both DAPT models outperforming Roberta. However, as noted in \u00a76, since the extra pretraining data for these DAPT models is sampled from the same corpus as S2ORC, we cannot be sure how much of this boost is due to DAPT models pretraining on S2ORC's test data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effects of Domain-adaptive Pretraining",
"sec_num": "7.4"
},
{
"text": "Finally, we discuss the trade-offs between models, focusing on differences in performance and relative computational efficiency. On one end of the efficiency spectrum are the lexical overlap metrics (TFIDF, Rouge-{1,2}) which are easily scaled to large document collections by simply keeping track of the ngrams in each source or target passage, then computing word-overlap scores for each (S, T) pair. 8 As discussed in \u00a74, we evaluate these metrics in two settings, sp and ap, depending on whether we compute similarity scores between target sen-tences and entire source documents or with each source sentence separately (then compute an aggregate score). Though no single metric or evaluation setting consistently achieves the best performance, these models provides a very strong baseline, especially on the D2D task.",
"cite_spans": [
{
"start": 403,
"end": 404,
"text": "8",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Trade-offs between Models",
"sec_num": "7.5"
},
{
"text": "In the middle of the efficiency spectrum are SBERT and HNM. Though these models require an expensive forward pass to generate an embedding for each source or target passage, these embeddings can then be saved and reused. Scores for each (S, T) pair can be computed relatively quickly by either computing cosine similarity scores (SBERT) or running the pair through a lighter-weight taskspecific architecture (HNM). However, we find mixed and negative results regarding their effectiveness. Specifically, as discussed in \u00a77.2, offthe-shelf SBERT models generally lag behind the computationally-cheaper lexical overlap baselines. Results are slightly more positive, though, for the HNMs. BCL-CDA, the best HNM, achieves the second best performance on two datasets (S2ORC, ARC-Sim). However, it still lags behind the best model, fine-tuned BERT, by a significant margin. Further, it performs worse than lexical overlap baselines on the other datasets, PAN and Pr2News. Turning next to the HAN models, we find that though they achieve competitive D2D performance on two of the three datasets, they have very weak S2D scores. We suspect that this is because they are only trained on the D2D task -at test time, they make sentence-level predictions by computing similarity scores between hidden source and target representations extracted from a pretrained D2D model. Due to this training formulation, the HAN models fail to learn sentence-level representations that are useful for prediction. See Appendix B for a discussion of our efforts to replicate the results from the HAN models on our datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Trade-offs between Models",
"sec_num": "7.5"
},
{
"text": "Lastly, the least efficient models are fine-tuned BERTs, which require a separate forward pass to compute a score for each (S, T) or (S, t i ) pair. As is the trend with other NLP tasks, though, these computationally-intense and parameter-rich models achieve the best average performance. This finding is clearest on S2ORC and ARC-Sim, where few t i contain reuse and that reuse is non-literal (e.g. paraphrase). On these datasets, the best fine-tuned BERT outperforms the next-best model (BCL-CDA) by an average of 6.3% (D2D) and 15.5% (S2D). However, on datasets where target documents directly copy large spans of source content with minimal changes (PAN) or where large-scale supervised training data is unavailable (Pr2News), fine-tuned BERT provides much less or no improvement over the lexical overlap metrics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Trade-offs between Models",
"sec_num": "7.5"
},
{
"text": "We study methods for local text reuse detection, identifying passages in a target document that lexically or semantically reuse content from a source. Through evaluations on four datasets, including a new citation localization dataset, we confirm the strong performance of BERT models fine-tuned on our task. However, we also find that lexical-overlap methods, e.g. TFIDF, provide strong baselines, frequently outperforming off-the-shelf neural passage encoders and hierarchical neural models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "Based on these findings, we suggest practitioners take one of two approaches: 1) in instances with little labeled training data or where most reuse is exact (i.e. copying), use traditional lexical overlap models; 2) in instances with large-scale labeled training data and where much of the reuse is nonliteral (e.g. summarization, paraphrasing), use a lexical overlap method to filter possible (S, T) pairs, then run a more expensive fine-tuned BERT on that subset. We suggest users opt for fine-tuned BERT models over pretrained passage encoders (SBERT) or HNMs for this second step since they achieve substantially higher performance. Suggestion #2 follows current approaches to neural IR, where neural models only rerank smaller lists of documents retrieved by a cheaper lexical overlap method, e.g. TF-IDF. Performance may be further boosted by fine-tuning BERT-based models that incorporate document-level context (i.e. Longformer) or ones that are adapted to the target domain of interest (i.e. DAPT), but often the standard Roberta Base achieves highly competitive results. 197 36, 227 5, 269 3, 852 5, 665 4, 171 ment, next 45 sentences of target document), and so on. Predictions for split examples are merged back together at test time.",
"cite_spans": [
{
"start": 1081,
"end": 1088,
"text": "197 36,",
"ref_id": null
},
{
"start": 1089,
"end": 1095,
"text": "227 5,",
"ref_id": null
},
{
"start": 1096,
"end": 1102,
"text": "269 3,",
"ref_id": null
},
{
"start": 1103,
"end": 1109,
"text": "852 5,",
"ref_id": null
},
{
"start": 1110,
"end": 1116,
"text": "665 4,",
"ref_id": null
},
{
"start": 1117,
"end": 1120,
"text": "171",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
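A sketch of suggestion #2, a two-stage pipeline that filters candidate pairs with TF-IDF before running an expensive reranker; `bert_rerank`, the keep fraction, and the threshold are hypothetical placeholders, not values from the paper.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def two_stage_ltrd(source_doc, candidate_targets, bert_rerank, keep=0.1, threshold=0.05):
    """Stage 1: cheap TF-IDF filter over (S, T) pairs. Stage 2: rerank survivors with a fine-tuned model."""
    vec = TfidfVectorizer().fit([source_doc] + candidate_targets)
    sims = cosine_similarity(vec.transform(candidate_targets), vec.transform([source_doc])).ravel()
    # Keep the top fraction of candidates that also clear a minimum overlap threshold.
    ranked = sorted(range(len(candidate_targets)), key=lambda i: -sims[i])
    survivors = [i for i in ranked[: max(1, int(keep * len(ranked)))] if sims[i] >= threshold]
    # bert_rerank is a placeholder for the expensive fine-tuned scorer (e.g. a model from section 4.4).
    return {i: bert_rerank(source_doc, candidate_targets[i]) for i in survivors}

fake_scorer = lambda s, t: len(set(s.split()) & set(t.split()))  # stand-in for a neural reranker
print(two_stage_ltrd("the model reuses text",
                     ["the model reuses text verbatim", "unrelated weather report"],
                     fake_scorer))
```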
{
"text": "We download the public dataset. As for PAN, we filter out malformed positive pairs that do not contain any positively-labeled sentences or contain positively-labeled sentences with no words. S2ORC examples, are, in general, short and do not require length-based filtering. Following Zhou et al. (2020), we split documents into sentences and tokenize them using NLTK (Bird and Loper, 2004) .",
"cite_spans": [
{
"start": 366,
"end": 388,
"text": "(Bird and Loper, 2004)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "S2ORC:",
"sec_num": null
},
{
"text": "For the hierarchical neural models (BERT-HAN, GRU-HAN, BCL-CDA), we cap source documents at 20 sentences (99th percentile). We split examples with target documents containing more than 29 sentences (99th percentile) into multiple examples and merge back predictions at test time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "S2ORC:",
"sec_num": null
},
{
"text": "Pr2News: We obtain the preprocessed and filtered Pr2News dataset from MacLaughlin et al. (2020, \u00a74-5), who created it with data from Altmetric. We evaluate models on the provided test set of 50 expert-labeled (press release, news article) pairs. We use the set of 45 non-expert-labeled (press release, news article) pairs as our validation set (we filter out the 5 spurious validation set pairs noted by MacLaughlin et al. (2020)). Finally, we use the remaining 64,684 pairs labeled with their TF-IDF cosine similarity heuristic as training data. For pairs with more than one matched press release, we select the press release with the highest TFIDF cosine similarity to the news article.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "S2ORC:",
"sec_num": null
},
{
"text": "For the hierarchical neural models (BERT-HAN, GRU-HAN, BCL-CDA), we cap source documents at 54 sentences (90th percentile). We split examples with target documents containing more than 57 sentences (90th percentile) into multiple examples and merge back predictions at test time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "S2ORC:",
"sec_num": null
},
{
"text": "Although we use the official source code from Zhou et al. (2020) to run the HAN models, our results differ on PAN and S2ORC from their originally reported results (mostly slightly, but, in one instance, substantially). With the exception of using BERT BASE as the passage encoder for BERT-HAN instead of BERT LARGE , we follow their recommended hyperparameters. But, as compared with the results from Zhou et al. (2020), on the D2D task (measured by F1), BERT-HAN's scores are substantially lower on PAN and slightly lower on S2ORC. GRU-HAN's scores, on the other hand, are very slightly higher on both PAN and S2ORC. We hypothesize that the minor differences in performance are due to 1) differences in model random initialization (Reimers and Gurevych, 2017) ; 2) differences in the datasets -as noted in Appendix A, we filtered out some examples from PAN and S2ORC since they contained some malformed positive examples with either no positively-labeled sentences or positively-labeled sentences that were empty strings; 3) for BERT-HAN, we use BERT BASE as the encoder rather than BERT LARGE . Despite these factors, BERT-HAN's large performance drop on PAN is still surprising. However, we emphasize that even when using Zhou et al. (2020) 's original numbers, BERT-HAN still lags behind both our lexical overlap baselines and fine-tuned BERT models, so our overall takeaways from \u00a77 still stand. For the S2D task, our results are not directly comparable to the original numbers of Zhou et al. (2020) for two reasons:",
"cite_spans": [
{
"start": 46,
"end": 64,
"text": "Zhou et al. (2020)",
"ref_id": "BIBREF48"
},
{
"start": 732,
"end": 760,
"text": "(Reimers and Gurevych, 2017)",
"ref_id": "BIBREF38"
},
{
"start": 1225,
"end": 1243,
"text": "Zhou et al. (2020)",
"ref_id": "BIBREF48"
},
{
"start": 1486,
"end": 1504,
"text": "Zhou et al. (2020)",
"ref_id": "BIBREF48"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "B Implementation of BERT-HAN and GRU-HAN",
"sec_num": null
},
{
"text": "1. We use different metrics -we use MAP and Acc@k, while they use MRR and P@k. MAP is more appropriate than MRR since there are often multiple positively-labeled target sentences. Acc@k is more appropriate than P@k when k is greater than the number of positively-labeled target sentences. When there are fewer than k positively-labeled target sentences in an example, a perfect system will still have a P@k < 1. Systems receive a perfect Acc@k score, on the other hand, if at least one positively-labeled target sentence appears anywhere in the top k.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Implementation of BERT-HAN and GRU-HAN",
"sec_num": null
},
{
"text": "2. We evaluate on different sets of the data -as noted in \u00a76 , Zhou et al. (2020) calculate their S2D ranking metrics (MRR, P@k) on all test examples, both positive and negative. However, these metrics cannot be computed on negative examples where no target sentences contain reuse. We confirmed with Zhou et al. (2020) that, in these instances, they give their models full credit if the corresponding D2D prediction is correct, i.e. the model predicts that the target document contains no reuse. Since many negative examples in PAN and S2ORC are easy to classify, this manner of calculation substantially inflates the results.",
"cite_spans": [
{
"start": 63,
"end": 81,
"text": "Zhou et al. (2020)",
"ref_id": "BIBREF48"
},
{
"start": 301,
"end": 319,
"text": "Zhou et al. (2020)",
"ref_id": "BIBREF48"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "B Implementation of BERT-HAN and GRU-HAN",
"sec_num": null
},
{
"text": "To address this, we calculate our S2D ranking metrics (MAP, Acc@k) on only the subset of positive examples. Calculating in this way shows substantially decreased S2D performance for the HAN models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Implementation of BERT-HAN and GRU-HAN",
"sec_num": null
},
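{
"text": "The code sketch referenced above makes the two metrics concrete for a single positive example; scores are the model's per-sentence scores over the target document and labels the corresponding 0/1 reuse labels (variable names are ours), with the per-example values then averaged over positive examples to obtain MAP and mean Acc@k:\n\ndef acc_at_k(scores, labels, k):\n    # 1.0 if at least one positively-labeled sentence is ranked in the top k.\n    ranked = sorted(range(len(scores)), key=lambda i: -scores[i])\n    return float(any(labels[i] == 1 for i in ranked[:k]))\n\ndef average_precision(scores, labels):\n    # Average of precision@rank taken at the rank of each positive sentence.\n    ranked = sorted(range(len(scores)), key=lambda i: -scores[i])\n    hits, precisions = 0, []\n    for rank, i in enumerate(ranked, start=1):\n        if labels[i] == 1:\n            hits += 1\n            precisions.append(hits / rank)\n    return sum(precisions) / max(sum(labels), 1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Implementation of BERT-HAN and GRU-HAN",
"sec_num": null
},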
{
"text": "BCL-CDA: We adapt the BCL model from MacLaughlin et al. (2020) for LTRD as follows (see MacLaughlin et al. (2020) for details of the BCL model): Each source and target sentence is fed into frozen BERT BASE separately. We then use a CNN with 1-max pooling over time to aggregate the token representations from BERT's second to last layer into a single representation for each sentence. We search over CNN filter size \u2208 {3,5,7} and number of filters \u2208 {50, 100, 200}. The sentence representations in each source or target document are then contextualized with document-level BiLSTMs (two separate BiLSTMs for source or target documents). We search over hidden dimension size \u2208 {64, 128} (same dimensionality for both BiLSTMs). After the BiLSTM layer, we are left with s i \u2208 S and t i \u2208 T, contextualized sentence representations for the sentences in the source and target documents. Next, we use a sentence-level CDA layer to computet i , an attention-weighted (Luong et al., 2015, \u00a73 .1: general attention) representation of t i , weighted by its similarity to the sentences s i \u2208 S. Finally, we concatenate [t i ;t i ] and feed this to a final layer to make a prediction for each target sentence.",
"cite_spans": [
{
"start": 88,
"end": 113,
"text": "MacLaughlin et al. (2020)",
"ref_id": "BIBREF29"
},
{
"start": 959,
"end": 982,
"text": "(Luong et al., 2015, \u00a73",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "C Model Hyperparameters & Best Configurations",
"sec_num": null
},
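{
"text": "A compressed PyTorch sketch of this adapted architecture is shown below to clarify the flow from sentence encoding to per-sentence prediction; the layer names, shapes, and the assumption that the frozen-BERT token states for each sentence are precomputed are ours, not the original implementation (which uses TensorFlow):\n\nimport torch\nimport torch.nn as nn\n\nclass BclCdaSketch(nn.Module):\n    def __init__(self, bert_dim=768, n_filters=100, filter_size=5, hidden=128):\n        super().__init__()\n        # CNN with 1-max pooling over BERT token states -> one vector per sentence.\n        self.cnn = nn.Conv1d(bert_dim, n_filters, filter_size, padding=filter_size // 2)\n        # Separate document-level BiLSTMs for source and target sentences.\n        self.src_lstm = nn.LSTM(n_filters, hidden, bidirectional=True, batch_first=True)\n        self.tgt_lstm = nn.LSTM(n_filters, hidden, bidirectional=True, batch_first=True)\n        # General (bilinear) attention from target sentences over source sentences.\n        self.attn = nn.Linear(2 * hidden, 2 * hidden, bias=False)\n        self.out = nn.Linear(4 * hidden, 1)\n\n    def encode_sentences(self, token_states):\n        # token_states: (n_sents, n_tokens, bert_dim), taken from frozen BERT's\n        # second-to-last layer. Returns one n_filters-dim vector per sentence.\n        conv = torch.relu(self.cnn(token_states.transpose(1, 2)))\n        return conv.max(dim=2).values\n\n    def forward(self, src_token_states, tgt_token_states):\n        S, _ = self.src_lstm(self.encode_sentences(src_token_states).unsqueeze(0))\n        T, _ = self.tgt_lstm(self.encode_sentences(tgt_token_states).unsqueeze(0))\n        # General attention: score(t_i, s_j) = t_i W s_j, softmax over source sentences.\n        weights = torch.softmax(torch.matmul(self.attn(T), S.transpose(1, 2)), dim=-1)\n        T_attn = torch.matmul(weights, S)  # attention-weighted source summary per target sentence\n        logits = self.out(torch.cat([T, T_attn], dim=-1))\n        return logits.squeeze(-1)  # one reuse score per target sentence",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C Model Hyperparameters & Best Configurations",
"sec_num": null
},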
{
"text": "We set dropout at 0.2, batch size at 32, and search over the max number of epochs (10, with early stopping). We optimize with Adam with learning rate \u2208 {1e-4, 5e-4}. For the PAN, S2ORC and ARC-Sim datasets, we use weighted cross-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C Model Hyperparameters & Best Configurations",
"sec_num": null
},
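{
"text": "One way to implement this class weighting in PyTorch is via the pos_weight argument of the standard binary cross-entropy loss; this is a minimal sketch under our assumption of per-sentence binary logits, with w drawn from the per-dataset grids described in the appendix (for Pr2News, MAE corresponds to nn.L1Loss):\n\nimport torch\nimport torch.nn as nn\n\ndef weighted_bce_loss(logits, labels, w):\n    # Weight the positive (reused) class by w; the negative class weight stays 1.\n    criterion = nn.BCEWithLogitsLoss(pos_weight=torch.tensor(float(w)))\n    return criterion(logits, labels.float())",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C Model Hyperparameters & Best Configurations",
"sec_num": null
},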
{
"text": "We could also study the sentence-to-sentence problem, learning to identify which source sentence(s) contain the content reused in a given target sentence, if any. However, as noted byZhou et al. (2020), no datasets exist yet which contain such fine-grained annotation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Available for download here.4 We sample 1 negative pair per (source abstract, target paper), so target papers that cite the source in more than 1 section will have more positive examples than negative.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We confirmed thatZhou et al. (2020) calculate their document-level metrics, MRR, P@5 and P@10, across all (S, T) pairs. For the negative pairs, they give models full credit on the S2D task if their corresponding D2D prediction is correct. We argue that this is not indicative of model performance, and thus conduct our document-level evaluations on only positive pairs.6 CS-and Biomed-DAPT models are adapted on an internal version of the S2ORC corpus. Since the S2ORC LTRD dataset is randomly sampled from that same corpus, it is possible that the DAPT models are pretrained on some portion of the S2ORC LTRD test set. We do not believe this overlap exists for any other (DAPT, LTRD dataset) pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Rouge-L cannot be scaled as easily as the other lexical overlap baselines. However, it performs worse than Rouge-1 and -2 on all validation sets and is not applied to any test data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Ansel MacLaughlin was supported by a National Endowment for the Humanities Digital Humanities Advancement Grant (HAA-263837-19) and a Northeastern University Dissertation Completion Fellowship.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
},
{
"text": " Table 4 lists the training, validation and test set sizes for each dataset. Each split is separated into the number of positive examples that contain reuse and the number of negative examples that do not. Below we discuss the data preprocessing steps we follow for each dataset:ARC-Sim We create this dataset using papers from the ACL Anthology Conference Corpus (Bird et al., 2008 ). Since we use citation marks to identify instances of text reuse, we use ParsCit (Councill et al., 2008) to first identify all in-line citation marks. We then create examples by matching together a section in a paper that contains a citation with the abstract of the cited paper (assuming the cited paper is also in the ACL ARC). Since citation marks have a distinctive lexical pattern, we remove them all after matching the pairs. We then split sections and abstracts into sentences using Stanford CoreNLP , keeping track of where the original citation was in order to generate S2D labels. We create negative examples by matching a cited abstract together with another section from the same paper as the original citing section (the new section is selected so that it does not cite the paper). Finally, for computational feasibility, we limit source documents to 20 sentences and target sections to 50, the 90th percentiles in the data. We remove pairs where the citation occurs after the 50th sentence in the target section. We split the dataset into train/val/test by cited abstract S, yielding the splits detailed in Table 4 .PAN: We download the public dataset. We filter out 1) malformed positive pairs that do not contain any positively-labeled sentences or contain positively-labeled sentences with no words; 2) extremely long pairs which cause GPU memory issues for our models, removing (source, target) pairs that contain more than 4,000 tokens total (80th percentile). Following Zhou et al. (2020), we split documents into sentences and tokenize them using NLTK (Bird and Loper, 2004) .For the hierarchical neural models (BERT-HAN, GRU-HAN, BCL-CDA), we follow Zhou et al. (2020) and cap documents at a predefined number of sentences so that the models fit in GPU memory. We cap source documents at 50 sentences (90th percentile). We split examples with target documents containing more than 45 sentences (90th percentile entropy loss since the datasets are unbalanced (many more negative sentences than positive). We search over the weight w to put on examples from the positive class. Weights vary by dataset since datasets are not equally imbalanced: PAN \u2208 {1, 3, 5}, S2ORC \u2208 {3, 5, 10}, ARC-Sim \u2208 {10, 15, 20}. Following MacLaughlin et al. (2020), we use MAE loss for Pr2News. Fine-tuned RoBERTa BASE , DAPT, and Longformer: We search over Adam learning rate \u2208 {2e-5, 3e-5, 5e-5}. We use batch size 32 (with gradient accumulation to ensure that batches fit in GPU memory) and train models for 10 epochs at most (20 for Longformer), with early stopping. For PAN, S2ORC and ARC-Sim, following BCL-CDA, we search over weight w for weighted crossentropy loss. We search over the same w ranges for each dataset as for BCL-CDA, except for ARC-Sim, where we search over w \u2208 {5, 10, 20}. We use MAE loss for Pr2News.",
"cite_spans": [
{
"start": 364,
"end": 382,
"text": "(Bird et al., 2008",
"ref_id": "BIBREF5"
},
{
"start": 466,
"end": 489,
"text": "(Councill et al., 2008)",
"ref_id": "BIBREF13"
},
{
"start": 1958,
"end": 1980,
"text": "(Bird and Loper, 2004)",
"ref_id": "BIBREF6"
},
{
"start": 2057,
"end": 2075,
"text": "Zhou et al. (2020)",
"ref_id": "BIBREF48"
}
],
"ref_spans": [
{
"start": 1,
"end": 8,
"text": "Table 4",
"ref_id": null
},
{
"start": 1506,
"end": 1513,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Data Preprocessing",
"sec_num": null
}
],
"bib_entries": {
"BIBREF1": {
"ref_id": "b1",
"title": "TensorFlow: Large-scale ma",
"authors": [
{
"first": "Andy",
"middle": [],
"last": "Davis",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
},
{
"first": "Matthieu",
"middle": [],
"last": "Devin",
"suffix": ""
},
{
"first": "Sanjay",
"middle": [],
"last": "Ghemawat",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "Goodfellow",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Harp",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Irving",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Isard",
"suffix": ""
},
{
"first": "Yangqing",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Rafal",
"middle": [],
"last": "Jozefowicz",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Manjunath",
"middle": [],
"last": "Kudlur ; Martin Wicke",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Xiaoqiang",
"middle": [],
"last": "Zheng",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefow- icz, Lukasz Kaiser, Manjunath Kudlur, Josh Leven- berg, Dandelion Man\u00e9, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Vi\u00e9gas, Oriol Vinyals, Pete Warden, Mar- tin Wattenberg, Martin Wicke, Yuan Yu, and Xiao- qiang Zheng. 2015. TensorFlow: Large-scale ma-",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Reference scope identification in citing sentences",
"authors": [
{
"first": "Amjad",
"middle": [],
"last": "Abu",
"suffix": ""
},
{
"first": "-Jbara",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Dragomir",
"middle": [],
"last": "Radev",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "80--90",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amjad Abu-Jbara and Dragomir Radev. 2012. Refer- ence scope identification in citing sentences. In Pro- ceedings of the 2012 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 80-90, Montr\u00e9al, Canada. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Applying BERT to document retrieval with birch",
"authors": [
{
"first": "Shengjin",
"middle": [],
"last": "Zeynep Akkalyoncu Yilmaz",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Haotian",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): System Demonstrations",
"volume": "",
"issue": "",
"pages": "19--24",
"other_ids": {
"DOI": [
"10.18653/v1/D19-3004"
]
},
"num": null,
"urls": [],
"raw_text": "Zeynep Akkalyoncu Yilmaz, Shengjin Wang, Wei Yang, Haotian Zhang, and Jimmy Lin. 2019. Apply- ing BERT to document retrieval with birch. In Pro- ceedings of the 2019 Conference on Empirical Meth- ods in Natural Language Processing and the 9th In- ternational Joint Conference on Natural Language Processing (EMNLP-IJCNLP): System Demonstra- tions, pages 19-24, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Longformer: The long-document transformer",
"authors": [
{
"first": "Iz",
"middle": [],
"last": "Beltagy",
"suffix": ""
},
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Arman",
"middle": [],
"last": "Cohan",
"suffix": ""
}
],
"year": 2020,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. ArXiv, abs/2004.05150.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The ACL Anthology reference corpus: A reference dataset for bibliographic research in computational linguistics",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Bird",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Dale",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Dorr",
"suffix": ""
},
{
"first": "Bryan",
"middle": [],
"last": "Gibson",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Joseph",
"suffix": ""
},
{
"first": "Min-Yen",
"middle": [],
"last": "Kan",
"suffix": ""
},
{
"first": "Dongwon",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Brett",
"middle": [],
"last": "Powley",
"suffix": ""
},
{
"first": "Dragomir",
"middle": [],
"last": "Radev",
"suffix": ""
},
{
"first": "Yee",
"middle": [
"Fan"
],
"last": "Tan",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Bird, Robert Dale, Bonnie Dorr, Bryan Gibson, Mark Joseph, Min-Yen Kan, Dongwon Lee, Brett Powley, Dragomir Radev, and Yee Fan Tan. 2008. The ACL Anthology reference corpus: A reference dataset for bibliographic research in computational linguistics. In Proceedings of the Sixth Interna- tional Conference on Language Resources and Eval- uation (LREC'08), Marrakech, Morocco. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "NLTK: The natural language toolkit",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Bird",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Loper",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the ACL Interactive Poster and Demonstration Sessions",
"volume": "",
"issue": "",
"pages": "214--217",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Bird and Edward Loper. 2004. NLTK: The nat- ural language toolkit. In Proceedings of the ACL In- teractive Poster and Demonstration Sessions, pages 214-217, Barcelona, Spain. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Interactive weak supervision: Learning useful heuristics for data labeling",
"authors": [
{
"first": "W",
"middle": [],
"last": "Benedikt Boecking",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Neiswanger",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Xing",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dubrawski",
"suffix": ""
}
],
"year": 2021,
"venue": "ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benedikt Boecking, W. Neiswanger, E. Xing, and A. Dubrawski. 2021. Interactive weak supervision: Learning useful heuristics for data labeling. In ICLR.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A large annotated corpus for learning natural language inference",
"authors": [
{
"first": "R",
"middle": [],
"last": "Samuel",
"suffix": ""
},
{
"first": "Gabor",
"middle": [],
"last": "Bowman",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Angeli",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Potts",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "632--642",
"other_ids": {
"DOI": [
"10.18653/v1/D15-1075"
]
},
"num": null,
"urls": [],
"raw_text": "Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large anno- tated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empiri- cal Methods in Natural Language Processing, pages 632-642, Lisbon, Portugal. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Ms marco: A human generated machine reading comprehension dataset",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Fernando Campos",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Rosenberg",
"suffix": ""
},
{
"first": "Xia",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Saurabh",
"middle": [],
"last": "Tiwary",
"suffix": ""
},
{
"first": "Rangan",
"middle": [],
"last": "Majumder",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Bhaskar",
"middle": [],
"last": "Mitra",
"suffix": ""
}
],
"year": 2016,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Fernando Campos, T. Nguyen, M. Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, L. Deng, and Bhaskar Mitra. 2016. Ms marco: A human generated machine reading com- prehension dataset. ArXiv, abs/1611.09268.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "I\u00f1igo",
"middle": [],
"last": "Lopez-Gazpio",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)",
"volume": "",
"issue": "",
"pages": "1--14",
"other_ids": {
"DOI": [
"10.18653/v1/S17-2001"
]
},
"num": null,
"urls": [],
"raw_text": "Daniel Cer, Mona Diab, Eneko Agirre, I\u00f1igo Lopez- Gazpio, and Lucia Specia. 2017. SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 1-14, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Measuring text reuse",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Clough",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Gaizauskas",
"suffix": ""
},
{
"first": "S",
"middle": [
"L"
],
"last": "Scott",
"suffix": ""
},
{
"first": "Yorick",
"middle": [],
"last": "Piao",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Wilks",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "152--159",
"other_ids": {
"DOI": [
"10.3115/1073083.1073110"
]
},
"num": null,
"urls": [],
"raw_text": "Paul Clough, Robert Gaizauskas, Scott S.L. Piao, and Yorick Wilks. 2002. Measuring text reuse. In Pro- ceedings of the 40th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 152- 159, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Pretrained language models for sequential sentence classification",
"authors": [
{
"first": "Arman",
"middle": [],
"last": "Cohan",
"suffix": ""
},
{
"first": "Iz",
"middle": [],
"last": "Beltagy",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "King",
"suffix": ""
},
{
"first": "Bhavana",
"middle": [],
"last": "Dalvi",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Weld",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "3693--3699",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1383"
]
},
"num": null,
"urls": [],
"raw_text": "Arman Cohan, Iz Beltagy, Daniel King, Bhavana Dalvi, and Dan Weld. 2019. Pretrained language models for sequential sentence classification. In Proceed- ings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Inter- national Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 3693-3699, Hong Kong, China. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "ParsCit: an open-source CRF reference string parsing package",
"authors": [
{
"first": "C",
"middle": [
"Lee"
],
"last": "Isaac Councill",
"suffix": ""
},
{
"first": "Min-Yen",
"middle": [],
"last": "Giles",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kan",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Isaac Councill, C. Lee Giles, and Min-Yen Kan. 2008. ParsCit: an open-source CRF reference string pars- ing package. In Proceedings of the Sixth Interna- tional Conference on Language Resources and Eval- uation (LREC'08), Marrakech, Morocco. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Search Engines -Information Retrieval in Practice",
"authors": [
{
"first": "W",
"middle": [],
"last": "Croft",
"suffix": ""
},
{
"first": "Donald",
"middle": [],
"last": "Metzler",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Strohman",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. Croft, Donald Metzler, and Trevor Strohman. 2009. Search Engines -Information Retrieval in Practice. Pearson.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Deeper text understanding for ir with contextual neural language modeling",
"authors": [
{
"first": "Zhuyun",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Callan",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhuyun Dai and J. Callan. 2019. Deeper text under- standing for ir with contextual neural language mod- eling. Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Neural ranking models with weak supervision",
"authors": [
{
"first": "M",
"middle": [],
"last": "Dehghani",
"suffix": ""
},
{
"first": "Hamed",
"middle": [],
"last": "Zamani",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Severyn",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Kamps",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Croft",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Dehghani, Hamed Zamani, A. Severyn, J. Kamps, and W. Croft. 2017. Neural ranking models with weak supervision. Proceedings of the 40th Interna- tional ACM SIGIR Conference on Research and De- velopment in Information Retrieval.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Automatically constructing a corpus of sentential paraphrases",
"authors": [
{
"first": "B",
"middle": [],
"last": "William",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dolan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Brockett",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Third International Workshop on Paraphrasing (IWP2005)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William B. Dolan and Chris Brockett. 2005. Automati- cally constructing a corpus of sentential paraphrases. In Proceedings of the Third International Workshop on Paraphrasing (IWP2005).",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "The spine of American law: Digital text analysis and U.S. legal practice",
"authors": [
{
"first": "Kellen",
"middle": [],
"last": "Funk",
"suffix": ""
},
{
"first": "Lincoln",
"middle": [],
"last": "Mullen",
"suffix": ""
}
],
"year": 2018,
"venue": "The American Historical Review",
"volume": "123",
"issue": "1",
"pages": "1--39",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kellen Funk and Lincoln Mullen. 2018. The spine of American law: Digital text analysis and U.S. legal practice. The American Historical Review, 123(1):1-39.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Don't stop pretraining: Adapt language models to domains and tasks",
"authors": [
{
"first": "Ana",
"middle": [],
"last": "Suchin Gururangan",
"suffix": ""
},
{
"first": "Swabha",
"middle": [],
"last": "Marasovi\u0107",
"suffix": ""
},
{
"first": "Kyle",
"middle": [],
"last": "Swayamdipta",
"suffix": ""
},
{
"first": "Iz",
"middle": [],
"last": "Lo",
"suffix": ""
},
{
"first": "Doug",
"middle": [],
"last": "Beltagy",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Downey",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "8342--8360",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.740"
]
},
"num": null,
"urls": [],
"raw_text": "Suchin Gururangan, Ana Marasovi\u0107, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. Don't stop pretraining: Adapt language models to domains and tasks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8342-8360, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederik",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "3rd International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd Inter- national Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "A computational model of text reuse in ancient literary texts",
"authors": [
{
"first": "John",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Lee. 2007. A computational model of text reuse in ancient literary texts. In Proceedings of the 45th",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Annual Meeting of the Association of Computational Linguistics",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "472--479",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Meeting of the Association of Computational Linguistics, pages 472-479, Prague, Czech Repub- lic. Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Meme-tracking and the dynamics of the news cycle",
"authors": [
{
"first": "Jure",
"middle": [],
"last": "Leskovec",
"suffix": ""
},
{
"first": "Lars",
"middle": [],
"last": "Backstrom",
"suffix": ""
},
{
"first": "Jon",
"middle": [],
"last": "Kleinberg",
"suffix": ""
}
],
"year": 2009,
"venue": "KDD",
"volume": "",
"issue": "",
"pages": "497--506",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jure Leskovec, Lars Backstrom, and Jon Kleinberg. 2009. Meme-tracking and the dynamics of the news cycle. In KDD, pages 497-506.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "ROUGE: A package for automatic evaluation of summaries",
"authors": [
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2004,
"venue": "Text Summarization Branches Out",
"volume": "",
"issue": "",
"pages": "74--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chin-Yew Lin. 2004. ROUGE: A package for auto- matic evaluation of summaries. In Text Summariza- tion Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Roberta: A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, M. Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. ArXiv, abs/1907.11692.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "S2ORC: The semantic scholar open research corpus",
"authors": [
{
"first": "Kyle",
"middle": [],
"last": "Lo",
"suffix": ""
},
{
"first": "Lucy",
"middle": [
"Lu"
],
"last": "Wang",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Rodney",
"middle": [],
"last": "Kinney",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Weld",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4969--4983",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.447"
]
},
"num": null,
"urls": [],
"raw_text": "Kyle Lo, Lucy Lu Wang, Mark Neumann, Rodney Kin- ney, and Daniel Weld. 2020. S2ORC: The semantic scholar open research corpus. In Proceedings of the 58th Annual Meeting of the Association for Compu- tational Linguistics, pages 4969-4983, Online. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Effective approaches to attention-based neural machine translation",
"authors": [
{
"first": "Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1412--1421",
"other_ids": {
"DOI": [
"10.18653/v1/D15-1166"
]
},
"num": null,
"urls": [],
"raw_text": "Thang Luong, Hieu Pham, and Christopher D. Man- ning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natu- ral Language Processing, pages 1412-1421, Lis- bon, Portugal. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Source attribution: Recovering the press releases behind health science news",
"authors": [
{
"first": "Ansel",
"middle": [],
"last": "Maclaughlin",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Wihbey",
"suffix": ""
},
{
"first": "Aleszu",
"middle": [],
"last": "Bajak",
"suffix": ""
},
{
"first": "D",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2020,
"venue": "ICWSM",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ansel MacLaughlin, J. Wihbey, Aleszu Bajak, and D. A. Smith. 2020. Source attribution: Recovering the press releases behind health science news. In ICWSM.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "The Stanford CoreNLP natural language processing toolkit",
"authors": [
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Bauer",
"suffix": ""
},
{
"first": "Jenny",
"middle": [],
"last": "Finkel",
"suffix": ""
},
{
"first": "Steven",
"middle": [
"J"
],
"last": "Bethard",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mc-Closky",
"suffix": ""
}
],
"year": 2014,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "55--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David Mc- Closky. 2014. The Stanford CoreNLP natural lan- guage processing toolkit. In ACL, pages 55-60.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Non-literal text reuse in historical texts: An approach to identify reuse transformations and its application to Bible reuse",
"authors": [
{
"first": "Maria",
"middle": [],
"last": "Moritz",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Wiederhold",
"suffix": ""
},
{
"first": "Barbara",
"middle": [],
"last": "Pavlek",
"suffix": ""
},
{
"first": "Yuri",
"middle": [],
"last": "Bizzoni",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "B\u00fcchler",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1849--1859",
"other_ids": {
"DOI": [
"10.18653/v1/D16-1190"
]
},
"num": null,
"urls": [],
"raw_text": "Maria Moritz, Andreas Wiederhold, Barbara Pavlek, Yuri Bizzoni, and Marco B\u00fcchler. 2016. Non-literal text reuse in historical texts: An approach to identify reuse transformations and its application to Bible reuse. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Process- ing, pages 1849-1859, Austin, Texas. Association for Computational Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Quotus: The structure of political media coverage as revealed by quoting patterns",
"authors": [
{
"first": "Vlad",
"middle": [],
"last": "Niculae",
"suffix": ""
},
{
"first": "Caroline",
"middle": [],
"last": "Suen",
"suffix": ""
},
{
"first": "Justine",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Cristian",
"middle": [],
"last": "Danescu-Niculescu-Mizil",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Leskovec",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 24th International Conference on World Wide Web",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vlad Niculae, Caroline Suen, Justine Zhang, Cristian Danescu-Niculescu-Mizil, and J. Leskovec. 2015. Quotus: The structure of political media coverage as revealed by quoting patterns. Proceedings of the 24th International Conference on World Wide Web.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Pytorch: An imperative style, high-performance deep learning library",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Paszke",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Massa",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lerer",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Chanan",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Killeen",
"suffix": ""
},
{
"first": "Zeming",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Gimelshein",
"suffix": ""
},
{
"first": "Luca",
"middle": [],
"last": "Antiga",
"suffix": ""
},
{
"first": "Alban",
"middle": [],
"last": "Desmaison",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Kopf",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zachary",
"middle": [],
"last": "Devito",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Raison",
"suffix": ""
},
{
"first": "Alykhan",
"middle": [],
"last": "Tejani",
"suffix": ""
},
{
"first": "Sasank",
"middle": [],
"last": "Chilamkurthy",
"suffix": ""
},
{
"first": "Benoit",
"middle": [],
"last": "Steiner",
"suffix": ""
},
{
"first": "Lu",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Junjie",
"middle": [],
"last": "Bai",
"suffix": ""
},
{
"first": "Soumith",
"middle": [],
"last": "Chintala",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "32",
"issue": "",
"pages": "8024--8035",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Te- jani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Py- torch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alch\u00e9-Buc, E. Fox, and R. Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 32, pages 8024-8035. Curran Asso- ciates, Inc.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "GloVe: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {
"DOI": [
"10.3115/v1/D14-1162"
]
},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Confer- ence on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "To tune or not to tune? adapting pretrained representations to diverse tasks",
"authors": [
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019)",
"volume": "",
"issue": "",
"pages": "7--14",
"other_ids": {
"DOI": [
"10.18653/v1/W19-4302"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew E. Peters, Sebastian Ruder, and Noah A. Smith. 2019. To tune or not to tune? adapting pre- trained representations to diverse tasks. In Proceed- ings of the 4th Workshop on Representation Learn- ing for NLP (RepL4NLP-2019), pages 7-14, Flo- rence, Italy. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Overview of the 5th international competition on plagiarism detection",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Potthast",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Gollub",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Hagen",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Tippmann",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Kiesel",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Rosso",
"suffix": ""
},
{
"first": "Efstathios",
"middle": [],
"last": "Stamatatos",
"suffix": ""
},
{
"first": "Benno",
"middle": [],
"last": "Stein",
"suffix": ""
}
],
"year": 2013,
"venue": "Working Notes Papers of the CLEF 2013 Evaluation Labs",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin Potthast, Tim Gollub, Matthias Hagen, Martin Tippmann, Johannes Kiesel, Paolo Rosso, Efstathios Stamatatos, and Benno Stein. 2013. Overview of the 5th international competition on plagiarism de- tection. In Working Notes Papers of the CLEF 2013 Evaluation Labs.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Identifying non-explicit citing sentences for citation-based summarization",
"authors": [
{
"first": "Vahed",
"middle": [],
"last": "Qazvinian",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Dragomir",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Radev",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "555--564",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vahed Qazvinian and Dragomir R. Radev. 2010. Identi- fying non-explicit citing sentences for citation-based summarization. In Proceedings of the 48th Annual Meeting of the Association for Computational Lin- guistics, pages 555-564, Uppsala, Sweden. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Reporting score distributions makes a difference: Performance study of LSTM-networks for sequence tagging",
"authors": [
{
"first": "Nils",
"middle": [],
"last": "Reimers",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "338--348",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1035"
]
},
"num": null,
"urls": [],
"raw_text": "Nils Reimers and Iryna Gurevych. 2017. Reporting score distributions makes a difference: Performance study of LSTM-networks for sequence tagging. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 338-348, Copenhagen, Denmark. Association for Computational Linguistics.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Sentence-BERT: Sentence embeddings using Siamese BERTnetworks",
"authors": [
{
"first": "Nils",
"middle": [],
"last": "Reimers",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "3982--3992",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1410"
]
},
"num": null,
"urls": [],
"raw_text": "Nils Reimers and Iryna Gurevych. 2019. Sentence- BERT: Sentence embeddings using Siamese BERT- networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 3982-3992, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "A winning approach to text alignment for text reuse detection at pan",
"authors": [
{
"first": "M",
"middle": [],
"last": "S\u00e1nchez-P\u00e9rez",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Sidorov",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Gelbukh",
"suffix": ""
}
],
"year": 2014,
"venue": "CLEF",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. S\u00e1nchez-P\u00e9rez, G. Sidorov, and Alexander Gelbukh. 2014. A winning approach to text alignment for text reuse detection at pan 2014. In CLEF.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter",
"authors": [
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
}
],
"year": 2019,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. ArXiv, abs/1910.01108.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Detecting and modeling local text reuse",
"authors": [
{
"first": "David",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Cordell",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [
"Maddock"
],
"last": "Dillon",
"suffix": ""
},
{
"first": "Nick",
"middle": [],
"last": "Stramp",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Wilkerson",
"suffix": ""
}
],
"year": 2014,
"venue": "JCDL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David A. Smith, Ryan Cordell, Elizabeth Maddock Dil- lon, Nick Stramp, and John Wilkerson. 2014. De- tecting and modeling local text reuse. In JCDL.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Lost in propagation? unfolding news cycles from the source",
"authors": [
{
"first": "Chenhao",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Adrien",
"middle": [],
"last": "Friggeri",
"suffix": ""
},
{
"first": "Lada",
"middle": [
"A"
],
"last": "Adamic",
"suffix": ""
}
],
"year": 2016,
"venue": "ICWSM",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chenhao Tan, Adrien Friggeri, and Lada A. Adamic. 2016. Lost in propagation? unfolding news cycles from the source. In ICWSM.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Detection of idea plagiarism using syntax-semantic concept extractions with genetic algorithm",
"authors": [
{
"first": "K",
"middle": [],
"last": "Vani",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Gupta",
"suffix": ""
}
],
"year": 2017,
"venue": "Expert Syst. Appl",
"volume": "73",
"issue": "",
"pages": "11--26",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Vani and D. Gupta. 2017. Detection of idea plagia- rism using syntax-semantic concept extractions with genetic algorithm. Expert Syst. Appl., 73:11-26.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Tracing the flow of policy ideas on legislatures: A text reuse approach",
"authors": [
{
"first": "John",
"middle": [],
"last": "Wilkerson",
"suffix": ""
},
{
"first": "David",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Nick",
"middle": [],
"last": "Stramp",
"suffix": ""
}
],
"year": 2015,
"venue": "American Journal of Political Science",
"volume": "59",
"issue": "4",
"pages": "943--956",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Wilkerson, David A. Smith, and Nick Stramp. 2015. Tracing the flow of policy ideas on legisla- tures: A text reuse approach. American Journal of Political Science, 59(4):943-956.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "A broad-coverage challenge corpus for sentence understanding through inference",
"authors": [
{
"first": "Adina",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Nikita",
"middle": [],
"last": "Nangia",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1112--1122",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1101"
]
},
"num": null,
"urls": [],
"raw_text": "Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sen- tence understanding through inference. In Proceed- ings of the 2018 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122, New Orleans, Louisiana. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Hierarchical attention networks for document classification",
"authors": [
{
"first": "Zichao",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Diyi",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Smola",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1480--1489",
"other_ids": {
"DOI": [
"10.18653/v1/N16-1174"
]
},
"num": null,
"urls": [],
"raw_text": "Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 1480-1489, San Diego, California. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Multilevel text alignment with crossdocument attention",
"authors": [
{
"first": "Xuhui",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Nikolaos",
"middle": [],
"last": "Pappas",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "5012--5025",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.407"
]
},
"num": null,
"urls": [],
"raw_text": "Xuhui Zhou, Nikolaos Pappas, and Noah A. Smith. 2020. Multilevel text alignment with cross- document attention. In Proceedings of the 2020 Conference on Empirical Methods in Natural Lan- guage Processing (EMNLP), pages 5012-5025, On- line. Association for Computational Linguistics.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Below, we discuss all searched hyperparameters",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Below, we discuss all searched hyperparameters",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "2019) and run on 16GB or 32GB Nvdia P100s or V100s. TF-IDF: We search over n-gram size (unigrams or unigrams & bigrams). Rouge: We search over three different Rouge measures, Rouge-{1, 2, L}. Sentence-BERT: None except threshold. We test the following pretrained Sentence-BERT models: Semantic Textual Similarity: stsbroberta-large, Paraphrase Detection: paraphrasedistilroberta-base-v1",
"authors": [
{
"first": "",
"middle": [],
"last": "Paszke",
"suffix": ""
}
],
"year": 2015,
"venue": "were implemented in Pytorch",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "All neural models, with the exception of BCL- CDA (Tensorflow: Abadi et al. (2015)) were imple- mented in Pytorch (Paszke et al., 2019) and run on 16GB or 32GB Nvdia P100s or V100s. TF-IDF: We search over n-gram size (unigrams or unigrams & bigrams). Rouge: We search over three different Rouge measures, Rouge-{1, 2, L}. Sentence-BERT: None except threshold. We test the following pretrained Sentence-BERT models: Semantic Textual Similarity: stsb- roberta-large, Paraphrase Detection: paraphrase- distilroberta-base-v1, Information Retrieval: msmarco-distilroberta-base-v2.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "We use the suggested batch size (256), HAN hidden dimension size",
"authors": [
{
"first": "",
"middle": [],
"last": "Bert-Han",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "50",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "BERT-HAN (shallow): We use the suggested batch size (256), HAN hidden dimension size (50),",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "learning rates \u2208 {1e-5, 2e-5, 5e-5, 1e-4}. We use BERT BASE as the sentence encoder instead of BERT LARGE for efficiency reasons. For the S2ORC and ARC-Sim datasets",
"authors": [
{
"first": "(",
"middle": [],
"last": "Adam",
"suffix": ""
},
{
"first": "Ba",
"middle": [],
"last": "Kingma",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam (Kingma and Ba, 2015) learning rates \u2208 {1e-5, 2e-5, 5e-5, 1e-4}. We use BERT BASE as the sentence encoder instead of BERT LARGE for efficiency reasons. For the S2ORC and ARC- Sim datasets, we find that BERT-HAN's S2D per-",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "We use batch size 128 and 50 dimensional GloVe embeddings. Otherwise, the HPs are the same as for",
"authors": [
{
"first": "Gru-Han ; Bert-Han",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "GRU-HAN (deep): We use batch size 128 and 50 dimensional GloVe embeddings. Otherwise, the HPs are the same as for BERT-HAN.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "For the neural models: e is epochs, lr is learning rate, and w is the weight placed on positive examples in weighted cross-entropy loss",
"authors": [],
"year": null,
"venue": "Table 5: Best HP configurations for all models across all datasets. t is the classification threshold (only for PAN, S2ORC and ARC-Sim)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Table 5: Best HP configurations for all models across all datasets. t is the classification threshold (only for PAN, S2ORC and ARC-Sim). BERT-HAN and GRU-HAN have two thresholds, one for document classification, the other for sentence classification. All other models have a single, sentence-level threshold. n-gram is the n-gram range for TF-IDF (unigrams or unigrams and bigrams). For the neural models: e is epochs, lr is learning rate, and w is the weight placed on positive examples in weighted cross-entropy loss (weight on negative examples is 1).",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "f s is the CNN filter size, nf is number of CNN filters, and lhd is the BiLSTM hidden dimension",
"authors": [
{
"first": "Bcl-Cda",
"middle": [],
"last": "For",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "For BCL-CDA, f s is the CNN filter size, nf is number of CNN filters, and lhd is the BiLSTM hidden dimension. '-' indicates that there are no HPs to be optimized. '\u00d7' indicates that the model is not trained on that dataset.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"html": null,
"num": null,
"content": "<table/>",
"text": "",
"type_str": "table"
},
"TABREF2": {
"html": null,
"num": null,
"content": "<table/>",
"text": "D2D and S2D results on PAN and S2ORC.",
"type_str": "table"
},
"TABREF4": {
"html": null,
"num": null,
"content": "<table/>",
"text": "D2D and S2D results on ARC-Sim and S2D results on Pr2News.",
"type_str": "table"
},
"TABREF6": {
"html": null,
"num": null,
"content": "<table><tr><td/><td colspan=\"2\">Train</td><td>Val</td><td/><td>Test</td></tr><tr><td>Dataset</td><td># Pos</td><td colspan=\"5\"># Neg # Pos # Neg # Pos # Neg</td></tr><tr><td>PAN</td><td>6,152</td><td colspan=\"5\">7,567 1,243 1,336 1,253 1,352</td></tr><tr><td>S2ORC</td><td colspan=\"6\">74,807 75,861 9,262 9,562 9,258 9,561</td></tr><tr><td colspan=\"2\">Pr2News 64,684</td><td>-</td><td>45</td><td>-</td><td>50</td><td>-</td></tr><tr><td>ARC</td><td/><td/><td/><td/><td/></tr></table>",
"text": "Number of examples in the training, validation and test sets of each dataset, split into numbers of positive and negative examples. Pr2News contains no negative examples.",
"type_str": "table"
}
}
}
}