{ "paper_id": "2022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:11:46.850700Z" }, "title": "Entities, Dates, and Languages: Zero-Shot on Historical Texts with T0", "authors": [ { "first": "Francesco", "middle": [], "last": "De Toni", "suffix": "", "affiliation": { "laboratory": "", "institution": "The University of Western Australia", "location": { "settlement": "Perth", "country": "Australia" } }, "email": "francesco.detoni@uwa.edu.au" }, { "first": "Christopher", "middle": [], "last": "Akiki", "suffix": "", "affiliation": { "laboratory": "Javier de la Rosa National Library of Norway", "institution": "Leipzig University", "location": { "settlement": "Leipzig, Oslo", "country": "Germany, Norway" } }, "email": "" }, { "first": "Cl\u00e9mentine", "middle": [], "last": "Fourrier", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Enrique", "middle": [], "last": "Manjavacas", "suffix": "", "affiliation": { "laboratory": "", "institution": "Leiden University", "location": { "settlement": "Leiden", "country": "The Netherlands" } }, "email": "" }, { "first": "Stefan", "middle": [], "last": "Schweter", "suffix": "", "affiliation": { "laboratory": "", "institution": "Bayerische Staatsbibliothek", "location": { "settlement": "M\u00fcnchen", "country": "Germany" } }, "email": "" }, { "first": "Daniel", "middle": [], "last": "Van Strien", "suffix": "", "affiliation": { "laboratory": "", "institution": "British Library", "location": { "settlement": "London", "country": "United Kingdom" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this work, we explore whether the recently demonstrated zero-shot abilities of the T0 model extend to Named Entity Recognition for out-of-distribution languages and time periods. Using a historical newspaper corpus in 3 languages as test-bed, we use prompts to extract possible named entities. Our results show that a naive approach for prompt-based zero-shot multilingual Named Entity Recognition is errorprone, but highlights the potential of such an approach for historical languages lacking labeled datasets. Moreover, we also find that T0-like models can be probed to predict the publication date and language of a document, which could be very relevant for the study of historical texts * .", "pdf_parse": { "paper_id": "2022", "_pdf_hash": "", "abstract": [ { "text": "In this work, we explore whether the recently demonstrated zero-shot abilities of the T0 model extend to Named Entity Recognition for out-of-distribution languages and time periods. Using a historical newspaper corpus in 3 languages as test-bed, we use prompts to extract possible named entities. Our results show that a naive approach for prompt-based zero-shot multilingual Named Entity Recognition is errorprone, but highlights the potential of such an approach for historical languages lacking labeled datasets. Moreover, we also find that T0-like models can be probed to predict the publication date and language of a document, which could be very relevant for the study of historical texts * .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "This paper lies at the focal point of three orthogonal advances. 
First, the recent surge in GLAM 1 -led digitisation efforts (Terras, 2011), open citizen science (Haklay et al., 2021) and the expansive commodification of data (Hey and Trefethen, 2003) have enabled a new mode of historical inquiry that capitalises on the 'big data of the past' (Kaplan and Di Lenardo, 2017). Second, the 2017 breakthrough that was the transformer architecture (Vaswani et al., 2017) has led to the so-called ImageNet moment of Natural Language Processing (Ruder, 2018) and brought about unprecedented progress in transfer learning (Raffel et al., 2020), few-shot learning (Schick and Sch\u00fctze, 2021), zero-shot learning (Sanh et al., 2021), and prompt-based learning (Le Scao and Rush, 2021) for natural language. Third, the growing popularity of prompt-based methods (Liu et al., 2021) has resulted in a new paradigm for training and fine-tuning Large Language Models (LLMs) as well as novel applications in Named Entity Recognition (NER) (Liu et al., 2022).", "cite_spans": [ { "start": 125, "end": 139, "text": "(Terras, 2011)", "ref_id": "BIBREF28" }, { "start": 163, "end": 184, "text": "(Haklay et al., 2021)", "ref_id": null }, { "start": 227, "end": 252, "text": "(Hey and Trefethen, 2003)", "ref_id": "BIBREF11" }, { "start": 348, "end": 377, "text": "(Kaplan and Di Lenardo, 2017)", "ref_id": "BIBREF13" }, { "start": 448, "end": 470, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF30" }, { "start": 543, "end": 556, "text": "(Ruder, 2018)", "ref_id": "BIBREF24" }, { "start": 619, "end": 640, "text": "(Raffel et al., 2020)", "ref_id": "BIBREF22" }, { "start": 661, "end": 687, "text": "(Schick and Sch\u00fctze, 2021)", "ref_id": "BIBREF26" }, { "start": 709, "end": 728, "text": "(Sanh et al., 2021)", "ref_id": null }, { "start": 857, "end": 875, "text": "(Liu et al., 2021)", "ref_id": "BIBREF19" }, { "start": 1028, "end": 1046, "text": "(Liu et al., 2022)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "NER for historical texts has been the focus of a growing body of research, most recently surveyed by Ehrmann et al. (2021). Both NER and the related task of Entity Linking can enhance our ability to search and navigate digitised historical materials (Neudecker et al., 2014; Kim and Cassidy, 2015). However, applying NER to historical texts poses a number of challenges, including those due to errors in Optical Character Recognition (OCR) (Ehrmann et al., 2021; Hamdi et al., 2019; Boros et al., 2020) and domain transfer (Baptiste et al., 2021). To advance research in this area, an increasing number of datasets have been created to support the development and evaluation of NER approaches on historical texts (Neudecker, 2016; Ehrmann et al., 2020, 2022). In this paper, we examine the zero-shot abilities of T0-a prompt-based LLM developed as part of the BigScience project for open research (Sanh et al., 2021)-on the challenging task of historical NER 2 . This endeavour had two main hurdles:", "cite_spans": [ { "start": 101, "end": 122, "text": "Ehrmann et al.
(2021)", "ref_id": null }, { "start": 251, "end": 275, "text": "(Neudecker et al., 2014;", "ref_id": "BIBREF21" }, { "start": 276, "end": 298, "text": "Kim and Cassidy, 2015)", "ref_id": "BIBREF15" }, { "start": 442, "end": 464, "text": "(Ehrmann et al., 2021;", "ref_id": null }, { "start": 465, "end": 484, "text": "Hamdi et al., 2019;", "ref_id": "BIBREF10" }, { "start": 485, "end": 504, "text": "Boros et al., 2020)", "ref_id": "BIBREF3" }, { "start": 525, "end": 548, "text": "(Baptiste et al., 2021)", "ref_id": "BIBREF0" }, { "start": 715, "end": 732, "text": "(Neudecker, 2016;", "ref_id": "BIBREF20" }, { "start": 733, "end": 753, "text": "Ehrmann et al., 2020", "ref_id": "BIBREF6" }, { "start": 754, "end": 776, "text": "Ehrmann et al., 2022)", "ref_id": "BIBREF5" }, { "start": 914, "end": 932, "text": "(Sanh et al., 2021)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(1) the model was neither trained to recognize entities, nor was it ever tested on that task; (2) our evaluation dataset was out-of-distribution, containing both multilingual and historical data. To better contextualize the results of our experiments, we also run zero-shot prompt-based probing (Zhong et al., 2021) to assess T0's broader ability to extract factual knowledge about two key factors in our experiment: language variation and historical variation in the dataset.", "cite_spans": [ { "start": 295, "end": 315, "text": "(Zhong et al., 2021)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our data comes from version 1.4 of the CLEF-HIPE 3 2020 open-access dataset 4 : an OCR'ed newspaper corpus annotated for NER (Ehrmann et al., 2020). It contains Swiss and Luxembourgish newspapers from 1790 to 2010, in English, German and French. For our experiment, we use only entities of coarse type, according to their literal sense. Coarse entity types in the CLEF-HIPE 2020 dataset are persons, locations, organizations, dates and products (which includes media and doctrines).", "cite_spans": [ { "start": 125, "end": 147, "text": "(Ehrmann et al., 2020)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental setup 2.1 Data description", "sec_num": "2" }, { "text": "We mix the original training and validation sets to constitute our test set 5 , and we split this new set by language and date (using 20-year time intervals, 6 see Table 1 and the sketch below). Each language dataset is relatively balanced between 1810 and 1910: English contains between 2,202 and 4,697 tokens per split, with the exception of the 1850-1870 English split, which contains no tokens; German contains between 6,735 and 12,829 tokens; and French contains between 8,550 and 16,874 tokens. The latest periods contain, on average, more tokens for German and French. Overall, named entities make up 3.8% of the dataset (ranging from 1.9% to 5.6%, depending on time period and language). The most balanced dataset across time periods is the French one (between 3.8% and 4.6% named entities).", "cite_spans": [], "ref_spans": [ { "start": 165, "end": 172, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Experimental setup 2.1 Data description", "sec_num": "2" },
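To make the splitting step concrete, the following minimal sketch buckets documents into 20-year periods per language; the record fields ("language", "year") are illustrative stand-ins, not the actual CLEF-HIPE column names.

```python
from collections import defaultdict

def split_by_language_and_period(docs, start=1790, span=20):
    """Group documents into (language, period_start) buckets of `span` years."""
    buckets = defaultdict(list)
    for doc in docs:
        period = start + ((doc["year"] - start) // span) * span
        buckets[(doc["language"], period)].append(doc)
    return buckets

# Illustrative records; the real dataset also carries tokens and NE annotations.
docs = [
    {"language": "fr", "year": 1893},
    {"language": "de", "year": 1821},
    {"language": "fr", "year": 1898},
]
for (lang, period), group in sorted(split_by_language_and_period(docs).items()):
    print(lang, f"{period}-{period + 20}", len(group), "document(s)")
```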
{ "text": "In our experiments, we use the T0++ variant of the T0 language model (Sanh et al., 2021), which is based on the LM-adapted T5 model (Lester et al., 2021): a variant of T5 (Raffel et al., 2020) that further pretrains the original encoder-decoder architecture with an autoregressive language modeling objective. 7 Crucially, this pretraining is done using a prompt-based training setup, in which training examples are transformed into prompts using a variety of crowd-sourced prompt templates. This setup allows T0 to perform few-shot and zero-shot learning when presented with new prompts for a previously unseen task.", "cite_spans": [ { "start": 69, "end": 88, "text": "(Sanh et al., 2021)", "ref_id": null }, { "start": 124, "end": 145, "text": "(Lester et al., 2021)", "ref_id": "BIBREF17" }, { "start": 181, "end": 202, "text": "(Raffel et al., 2020)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Model description", "sec_num": "2.2" }, { "text": "Our goal in this paper is to see if and how state-of-the-art language models can be used for historical NLP tasks, with minimal modifications and fine-tuning. 8 As such, we choose a 'naive' approach: directly asking the model which named entities a given sentence contains. To do so, we first design prompts for each named entity type (see Table 2 ). For each sentence in the dataset, we then 1) use all the generation prompts to determine whether the sentence contains named entities of each entity type 9 ; 2) filter the model's answer to keep only tokens that are actually in the input sentence, keeping the entity covering the longest span in case of nested entities; and 3) ask a disambiguation question if needed (i.e. if the model assigned a token to multiple entity types). Results are stored at each step; a sketch of steps 1) and 2) follows at the end of this section.", "cite_spans": [ { "start": 157, "end": 158, "text": "8", "ref_id": null } ], "ref_spans": [ { "start": 348, "end": 355, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Experiments", "sec_num": "2.3" }, { "text": "We then evaluate the results and conduct two additional experiments to better understand the impact of the dataset language and time period on the performance of the LM.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "2.3" },
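The following is a minimal sketch of steps 1) and 2), assuming the publicly released bigscience/T0pp checkpoint on the Hugging Face Hub (the paper does not name a specific checkpoint id, so this is an assumption) and a simple whitespace-based token filter in place of our full matching step; the prompt strings mirror the generation templates in Table 2.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

MODEL_NAME = "bigscience/T0pp"  # assumption: the public T0++ checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

GENERATION_PROMPTS = {
    "PERS": "Input: {s}\nIn input, what are the names of person? Separate answers with commas.",
    "LOC": "Input: {s}\nIn input, what are the names of location? Separate answers with commas.",
    "PROD": "Input: {s}\nIn input, what are the names of media or doctrine? Separate answers with commas.",
}

def query_entities(sentence: str) -> dict:
    """Step 1): prompt T0 once per entity type; step 2): keep only answers
    whose tokens actually occur in the input sentence."""
    sentence_tokens = set(sentence.split())
    found = {}
    for entity_type, template in GENERATION_PROMPTS.items():
        inputs = tokenizer(template.format(s=sentence), return_tensors="pt")
        output_ids = model.generate(**inputs, max_new_tokens=32)
        answer = tokenizer.decode(output_ids[0], skip_special_tokens=True)
        candidates = [c.strip() for c in answer.split(",") if c.strip()]
        # Discard candidates that are not present verbatim in the sentence.
        found[entity_type] = [c for c in candidates
                              if set(c.split()) <= sentence_tokens]
    return found

print(query_entities("Jean Dupont est arrivé à Lausanne hier soir."))
```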
{ "text": "Results reveal limitations in our proposed approach. First, T0 exhibits a clear tendency to produce non-empty outputs regardless of the presence or absence of named entities in the input: none of the prompts generates an empty answer. This is especially visible for the entity type PROD, for which T0 answers over 55% of the queries with the name of the entity type itself (e.g. media or doctrine) rather than with any other token from the input sentence. Second, adequately matching T0's output with tokens in the input sentence proved difficult. Even when T0 generates an answer semantically very close to the correct token in the sentence, differences in spelling prevent the algorithm from correctly associating T0's answer with said token in the input sentence. This problem is inherent to the nature of our dataset: frequent OCR errors generate unpredictable variations in 'gold' word spelling (including variation in spacing between words and letters, and in diacritics), which T0 automatically corrects during its predictions, 10 negatively affecting our ability to automatically match its answers with the corresponding tokens in the sentence. In other instances, the model translated words from French and German into English. Further experiments might need to mitigate language variety by adding input text to the prompt, to help the model correctly assess the language in which it must answer. Since no predicted answer ever exactly matches a gold token, all predictions are considered strictly incorrect and the algorithm never enters its disambiguation phase. We therefore analyse non-disambiguated results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Limitations", "sec_num": "3.1" }, { "text": "To evaluate the proximity between predictions and gold annotations, we compare 'gold' tokens with predicted tokens using normalized Levenshtein distance, 11 using this metric as a proxy to identify the best predictions for each entity query in each sentence. For a given example, we define (1) the true positive as the prediction with the shortest Levenshtein distance from the gold token; (2) false positives as predictions of entities that are not actually present in the input sentence; and (3) false negatives as predictions with a longer Levenshtein distance to the gold tokens (i.e. predictions that would have failed to identify entity tokens in the sentence). Precision and F1-score are relatively low, especially for PROD entities, which were the most difficult to define in terms of text prompts. Higher values for recall are due to the fact that increasing the Levenshtein threshold makes it more likely to find an acceptable answer among those generated by T0. Unsurprisingly, the highest increase is found for TIME entities (dates have fixed formats, which makes it more likely to find an acceptable distance between predictions and correct tokens). Precision scores for each entity type are shown in Figure 1 (see Fig. 3 in the Appendix for recall and F1-score). The results of our experiment suggest that, although T0 struggles to return exact matches of the entities in the input sentence, it is still capable of generating answers that are semantically close to the correct tokens.", "cite_spans": [], "ref_spans": [ { "start": 825, "end": 833, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 839, "end": 845, "text": "Fig. 3", "ref_id": null } ], "eq_spans": [], "section": "Evaluation", "sec_num": "3.2" },
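A minimal sketch of this matching criterion follows (footnote 11: edit distance normalised by the length of the longer string, kept below a threshold); the helper names are illustrative.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic two-row dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def normalised_distance(pred: str, gold: str) -> float:
    """Edit distance normalised by the length of the longer string."""
    return levenshtein(pred, gold) / max(len(pred), len(gold), 1)

def best_match(predictions, gold, threshold=0.4):
    """Return the prediction closest to the gold token, or None if no
    prediction falls below the threshold."""
    scored = sorted(predictions, key=lambda p: normalised_distance(p, gold))
    if scored and normalised_distance(scored[0], gold) <= threshold:
        return scored[0]
    return None

# OCR noise: 'Lnusanne' still matches 'Lausanne' (distance 1/8 = 0.125).
print(best_match(["Lnusanne", "Genève"], "Lausanne"))
```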
{ "text": "After manually inspecting the dataset and its numerous OCR artifacts, we choose 0.4 as a reasonable heuristic threshold for close semantic similarity between T0's output and gold tokens. We find that this threshold keeps false positives from appearing, and we therefore use it to analyze differences between languages and between historical periods within the dataset. With respect to variation across languages, we observe that the precision of predictions in English does not have a clear edge over precision in French and German ( Fig. 2 ; see also Fig. 4 in the Appendix). This is unexpected, as T0 should display a considerable bias towards English, which constitutes most of its training data. With respect to variation across periods, we observe an improvement in precision (and F1-score) for PERS and LOC entities in English texts from the 1850s onwards ( Fig. 3 ; for recall and F1-score, see Fig. 5 in the Appendix), whereas for other entities and languages, precision and F1-score are either stable or show a downward trend (e.g. LOC in German) 12 . Variations in recall cannot be reduced to clear trends, but they are particularly erratic in English texts. A possible explanation could be that T0 is more sensitive to English text inputs, and therefore outputs a higher or lower number of irrelevant answers depending on the specific content of each input sentence.", "cite_spans": [], "ref_spans": [ { "start": 540, "end": 546, "text": "Fig. 2", "ref_id": "FIGREF2" }, { "start": 558, "end": 564, "text": "Fig. 4", "ref_id": null }, { "start": 861, "end": 867, "text": "Fig. 3", "ref_id": null }, { "start": 899, "end": 905, "text": "Fig. 5", "ref_id": null } ], "eq_spans": [], "section": "Evaluation", "sec_num": "3.2" }, { "text": "Baseline comparison with the results of the HIPE 2020 evaluation campaign 13 confirms that our implementation of zero-shot NER with T0 is below SOTA performance. As baselines, we considered the micro precision, recall and F1-score of coarse NER (literal sense) with fuzzy boundary matching from HIPE 2020 (see Table 3 ). All the scores from our experiments with T0 are below the best results from HIPE 2020. We note that the results from HIPE 2020 are based on experiments conducted on the HIPE test sets in each language (these are different from the test sets we used in our experiments, for which we combined the original HIPE training and validation sets; see Sec. 2.1). For this reason, we re-ran our experiments on the original HIPE test sets, keeping the threshold for Levenshtein distance at 0.4. We observe no significant improvement in precision and F1-score compared to the results of our experiments on the combined training and validation sets. We observe some improvements in recall, especially for English and for TIME, with recall reaching 1.0 for some combinations of language, entity and time period.
However, we believe that this improvement is not significant and that it is due to our choice of Levenshtein threshold, as explained above.", "cite_spans": [], "ref_spans": [ { "start": 425, "end": 432, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Evaluation", "sec_num": "3.2" }, { "text": "In addition to our main experiment on NER, we run two further experiments to assess T0's ability to perform inference in a multilingual setting and to identify historical variation in textual corpora.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prompt-based factual probing", "sec_num": "4" }, { "text": "Probing for language To gauge T0's ability to reason in a multilingual setting, we test the model's language identification ability. To that end, we use a trilingual 14 subset of the WiLI-2018 (Wikipedia Language Identification) dataset (Thoma, 2018) and prompt the model for the language of each sentence (Table 2) . We find that the model correctly classifies 83% of French sentences and 74.1% of German sentences, but only 35.4% of English sentences. The previously mentioned potential sensitivity of the model to its own mother tongue might explain this result.", "cite_spans": [ { "start": 236, "end": 248, "text": "(Thoma, 2018)", "ref_id": "BIBREF29" } ], "ref_spans": [ { "start": 284, "end": 293, "text": "(Table 2)", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Prompt-based factual probing", "sec_num": "4" }, { "text": "Probing for publication date To assess T0's treatment of historical text, we study how well it predicts the likely date of publication for a piece of text from our test dataset by prompting on publication date (Table 2) . Table 4 shows the prediction errors. Subtle language change can occur in a measurable way in as short a period as a decade (Juola, 2003) , and therefore a median absolute error of 30 suggests that T0 is good at predicting publication dates. We notice some variation in performance between languages, with French performing slightly worse on both metrics (possibly because it belongs to a different language family from English, unlike German).", "cite_spans": [ { "start": 343, "end": 356, "text": "(Juola, 2003)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 210, "end": 219, "text": "(Table 2)", "ref_id": "TABREF2" }, { "start": 220, "end": 227, "text": "Table 4", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Prompt-based factual probing", "sec_num": "4" }, { "text": "We have presented our experiments evaluating T0 on zero-shot historical NER, as well as on the prediction of the language and publication date of historical texts. Our results show that historical texts present additional challenges for zero-shot NER (especially because historical datasets often include noisy OCR), but that T0 can nevertheless be used as-is for language and date prediction. Next steps will be experimenting with different prompts and matching methods, as well as testing few-shot NER.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "The figures below and on the next page provide the full results of the Levenshtein-based evaluation, including precision, recall and F1-score at different thresholds, at threshold 0.4, and across different time periods in the CLEF-HIPE 2020 dataset. Figure 5 : Precision, recall and F1-score (resp. first, second and third rows) at Levenshtein threshold 0.4 across periods for different languages. Languages are distinguished by both the line color and the type of dot.", "cite_spans": [], "ref_spans": [ { "start": 569, "end": 577, "text": "Figure 5", "ref_id": null } ], "eq_spans": [], "section": "Appendix: Full scores of Levenshtein distance", "sec_num": null },
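As a supplement to the probing experiments of Section 4, the following is a minimal sketch of the publication-date probe; it again assumes the bigscience/T0pp checkpoint, reuses the date prompt from Table 2, and computes the median absolute error between predicted and true years (the year-parsing regex is our own simplification).

```python
import re
from statistics import median
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

MODEL_NAME = "bigscience/T0pp"  # assumption: the public T0++ checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def predict_year(text: str):
    """Prompt T0 for a publication year and parse the first 4-digit number."""
    prompt = ("In which year is the following text likely to have been "
              f"published: text: {text}")
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    output_ids = model.generate(**inputs, max_new_tokens=8)
    answer = tokenizer.decode(output_ids[0], skip_special_tokens=True)
    match = re.search(r"\d{4}", answer)
    return int(match.group()) if match else None

def median_absolute_error(samples):
    """samples: iterable of (text, true_year) pairs; skips unparseable answers."""
    errors = []
    for text, true_year in samples:
        predicted = predict_year(text)
        if predicted is not None:
            errors.append(abs(predicted - true_year))
    return median(errors) if errors else None
```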
{ "text": "https://github.com/bigscience-workshop/historical_texts", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Conference and Labs of the Evaluation Forum -Identifying Historical People, Places and other Entities. 4 https://github.com/impresso/CLEF-HIPE-2020 5 For English, we use only the validation set, as the training set is absent. 6 We chose 20-year spans as the smallest time range producing somewhat balanced splits.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The added specific pretraining of T0 uses a set of 11 varied tasks represented by a total of 55 datasets. 8 Ecological concerns and funding inequalities raise considerations on how to best use already existing models for lower-resourced tasks, while spending as little further computing power on fine-tuning as possible (Bender et al., 2021). 9 For PROD entities, the generation prompt explicitly mentioned media and doctrines, as we regarded the word product as too generic to return an accurate answer from T0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "E.g. respelling words that were garbled due to noisy OCR.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Normalization was done with regard to the length of the longest token (predicted or correct), and results were kept below a threshold. We tried 0.0, 0.1, 0.2, 0.3, 0.4 and 0.5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The absence of documents in the 1850-1870 English split explains the missing values for English in that period.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/impresso/CLEF-HIPE-2020/blob/master/evaluation-results/ranking_summary_final.md", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "French, German, and English; 1,000 sentences each.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work took place under the umbrella of the \"Language Models for Historical Texts\" working group of the BigScience \"Summer of Language Models 21\" workshop 15 (https://bigscience.huggingface.co/). We are thankful to the organizers of this workshop for providing a forum conducive to collaborative and open scientific inquiry.
We are especially grateful to Suzana Ili\u0107 for her help setting up and organising the working group.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null }, { "text": "In this paper, we take exploratory first steps toward applying the T0 large language model to the task of historical NER. We deem it appropriate to briefly discuss the ethical considerations that are implied by such a usage. First, if a model can be used in a context for which it was not explicitly intended, it stands to reason that it can be misused in that same context: while recognizing entities in historical texts might at first glance seem innocuous, numerous studies focused on BIPOC representation in history have shown that this is not the case, as some marginalized groups tend to suffer from historical erasure (Kellow, 1999; Ram, 2020; Stanley, 2021). Second, the automation and scaling of historical inquiry could potentially lead to unreflected (mis)interpretations of the past (Gibbs and Owens, 2013; Gibbs, 2016). Third, the experimental nature of prompt-based inference could lead to a considerable carbon footprint, owing to the trial-and-error nature of manual prompt calibration, though this cost would still be lower than training a new model from scratch or fine-tuning an existing LLM (see footnote 8).", "cite_spans": [ { "start": 635, "end": 649, "text": "(Kellow, 1999;", "ref_id": "BIBREF14" }, { "start": 650, "end": 660, "text": "Ram, 2020;", "ref_id": "BIBREF23" }, { "start": 661, "end": 675, "text": "Stanley, 2021)", "ref_id": "BIBREF27" }, { "start": 806, "end": 829, "text": "(Gibbs and Owens, 2013;", "ref_id": "BIBREF7" }, { "start": 830, "end": 842, "text": "Gibbs, 2016)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Broader Impacts Statement", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Transferring Modern Named Entity Recognition to the Historical Domain: How to Take the Step?", "authors": [ { "first": "Baptiste", "middle": [], "last": "Blouin", "suffix": "" }, { "first": "Benoit", "middle": [], "last": "Favre", "suffix": "" }, { "first": "Jeremy", "middle": [], "last": "Auguste", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Henriot", "suffix": "" } ], "year": 2021, "venue": "Workshop on Natural Language Processing for Digital Humanities (NLP4DH)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Baptiste Blouin, Benoit Favre, Jeremy Auguste, and Christian Henriot. 2021. Transferring Modern Named Entity Recognition to the Historical Domain: How to Take the Step?
In Workshop on Natural Language Processing for Digital Humanities (NLP4DH), Silchar (Online), India.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "On the dangers of stochastic parrots: Can language models be too big?", "authors": [ { "first": "Emily", "middle": [ "M" ], "last": "Bender", "suffix": "" }, { "first": "Timnit", "middle": [], "last": "Gebru", "suffix": "" }, { "first": "Angelina", "middle": [], "last": "McMillan-Major", "suffix": "" }, { "first": "Shmargaret", "middle": [], "last": "Shmitchell", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, FAccT '21", "volume": "", "issue": "", "pages": "610--623", "other_ids": { "DOI": [ "10.1145/3442188.3445922" ] }, "num": null, "urls": [], "raw_text": "Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, FAccT '21, pages 610-623, New York, NY, USA. Association for Computing Machinery.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Alleviating digitization errors in named entity recognition for historical documents", "authors": [ { "first": "Emanuela", "middle": [], "last": "Boros", "suffix": "" }, { "first": "Ahmed", "middle": [], "last": "Hamdi", "suffix": "" }, { "first": "Elvys", "middle": [ "Linhares" ], "last": "Pontes", "suffix": "" }, { "first": "Luis", "middle": [ "Adri\u00e1n" ], "last": "Cabrera-Diego", "suffix": "" }, { "first": "Jose", "middle": [ "G" ], "last": "Moreno", "suffix": "" }, { "first": "Nicolas", "middle": [], "last": "Sidere", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Doucet", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 24th Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "431--441", "other_ids": { "DOI": [ "10.18653/v1/2020.conll-1.35" ] }, "num": null, "urls": [], "raw_text": "Emanuela Boros, Ahmed Hamdi, Elvys Linhares Pontes, Luis Adri\u00e1n Cabrera-Diego, Jose G. Moreno, Nicolas Sidere, and Antoine Doucet. 2020. Alleviating digitization errors in named entity recognition for historical documents. In Proceedings of the 24th Conference on Computational Natural Language Learning, pages 431-441, Online. Association for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Named entity recognition and classification on historical documents: A survey", "authors": [ { "first": "Maud", "middle": [], "last": "Ehrmann", "suffix": "" }, { "first": "Ahmed", "middle": [], "last": "Hamdi", "suffix": "" }, { "first": "Elvys", "middle": [ "Linhares" ], "last": "Pontes", "suffix": "" }, { "first": "Matteo", "middle": [], "last": "Romanello", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Doucet", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maud Ehrmann, Ahmed Hamdi, Elvys Linhares Pontes, Matteo Romanello, and Antoine Doucet. 2021. Named entity recognition and classification on historical documents: A survey.
CoRR, abs/2109.11406.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "HIPE-2022 shared task named entity datasets", "authors": [ { "first": "Maud", "middle": [], "last": "Ehrmann", "suffix": "" }, { "first": "Matteo", "middle": [], "last": "Romanello", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Doucet", "suffix": "" }, { "first": "Simon", "middle": [], "last": "Clematide", "suffix": "" } ], "year": 2022, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.5281/zenodo.6089968" ] }, "num": null, "urls": [], "raw_text": "Maud Ehrmann, Matteo Romanello, Antoine Doucet, and Simon Clematide. 2022. HIPE-2022 shared task named entity datasets.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Extended Overview of CLEF HIPE 2020: Named entity processing on historical newspapers", "authors": [ { "first": "Maud", "middle": [], "last": "Ehrmann", "suffix": "" }, { "first": "Matteo", "middle": [], "last": "Romanello", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Fl\u00fcckiger", "suffix": "" }, { "first": "Simon", "middle": [], "last": "Clematide", "suffix": "" } ], "year": 2020, "venue": "CLEF 2020 Working Notes. Working Notes of CLEF 2020 -Conference and Labs of the Evaluation Forum", "volume": "2696", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.5281/zenodo.4117566" ] }, "num": null, "urls": [], "raw_text": "Maud Ehrmann, Matteo Romanello, Alex Fl\u00fcckiger, and Simon Clematide. 2020. Extended Overview of CLEF HIPE 2020: Named entity processing on historical newspapers. In CLEF 2020 Working Notes. Working Notes of CLEF 2020 -Conference and Labs of the Evaluation Forum, volume 2696, page 38, Thessaloniki, Greece. CEUR-WS.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "The hermeneutics of data and historical writing", "authors": [ { "first": "Fred", "middle": [], "last": "Gibbs", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Owens", "suffix": "" } ], "year": 2013, "venue": "Writing History in the Digital Age", "volume": "", "issue": "", "pages": "159--172", "other_ids": { "DOI": [ "10.3998/dh.12230987.0001.001" ] }, "num": null, "urls": [], "raw_text": "Fred Gibbs and Trevor Owens. 2013. The hermeneutics of data and historical writing. In Kristen Nawrotzki and Jack Dougherty, editors, Writing History in the Digital Age, pages 159-172. University of Michigan Press.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "New forms of history: Critiquing data and its representations", "authors": [ { "first": "Frederick", "middle": [ "W" ], "last": "Gibbs", "suffix": "" } ], "year": 2016, "venue": "The American Historian", "volume": "7", "issue": "", "pages": "31--36", "other_ids": {}, "num": null, "urls": [], "raw_text": "Frederick W Gibbs. 2016. New forms of history: Critiquing data and its representations. The American Historian, 7:31-36.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "
Contours of citizen science: a vignette study", "authors": [ { "first": "Muki", "middle": [], "last": "Haklay", "suffix": "" }, { "first": "Dilek", "middle": [], "last": "Fraisl", "suffix": "" }, { "first": "Bastian", "middle": [], "last": "Tzovaras", "suffix": "" }, { "first": "Susanne", "middle": [], "last": "Hecker", "suffix": "" }, { "first": "Margaret", "middle": [], "last": "Gold", "suffix": "" }, { "first": "Gerid", "middle": [], "last": "Hager", "suffix": "" }, { "first": "Luigi", "middle": [], "last": "Ceccaroni", "suffix": "" }, { "first": "Barbara", "middle": [], "last": "Kieslinger", "suffix": "" }, { "first": "Uta", "middle": [], "last": "Wehn", "suffix": "" }, { "first": "Sasha", "middle": [], "last": "Woods", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Nold", "suffix": "" }, { "first": "B\u00e1lint", "middle": [], "last": "Bal\u00e1zs", "suffix": "" }, { "first": "Marzia", "middle": [], "last": "Mazzonetto", "suffix": "" }, { "first": "Simone", "middle": [], "last": "Ruefenacht", "suffix": "" }, { "first": "Lea", "middle": [], "last": "Shanley", "suffix": "" }, { "first": "Katherin", "middle": [], "last": "Wagenknecht", "suffix": "" }, { "first": "Alice", "middle": [], "last": "Motion", "suffix": "" }, { "first": "Andrea", "middle": [], "last": "Sforzi", "suffix": "" }, { "first": "Dorte", "middle": [], "last": "Riemenschneider", "suffix": "" }, { "first": "Katrin", "middle": [], "last": "Vohland", "suffix": "" } ], "year": 2021, "venue": "Royal Society Open Science", "volume": "8", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1098/rsos.202108" ] }, "num": null, "urls": [], "raw_text": "Muki Haklay, Dilek Fraisl, Bastian Tzovaras, Susanne Hecker, Margaret Gold, Gerid Hager, Luigi Ceccaroni, Barbara Kieslinger, Uta Wehn, Sasha Woods, Christian Nold, B\u00e1lint Bal\u00e1zs, Marzia Mazzonetto, Simone Ruefenacht, Lea Shanley, Katherin Wagenknecht, Alice Motion, Andrea Sforzi, Dorte Riemenschneider, and Katrin Vohland. 2021. Contours of citizen science: a vignette study. Royal Society Open Science, 8:202108.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "An analysis of the performance of named entity recognition over OCRed documents", "authors": [ { "first": "Ahmed", "middle": [], "last": "Hamdi", "suffix": "" }, { "first": "Axel", "middle": [], "last": "Jean-Caurant", "suffix": "" }, { "first": "Nicolas", "middle": [], "last": "Sid\u00e8re", "suffix": "" }, { "first": "Micka\u00ebl", "middle": [], "last": "Coustaty", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Doucet", "suffix": "" } ], "year": 2019, "venue": "ACM/IEEE Joint Conference on Digital Libraries (JCDL)", "volume": "", "issue": "", "pages": "333--334", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ahmed Hamdi, Axel Jean-Caurant, Nicolas Sid\u00e8re, Micka\u00ebl Coustaty, and Antoine Doucet. 2019. An analysis of the performance of named entity recognition over OCRed documents. 2019 ACM/IEEE Joint Conference on Digital Libraries (JCDL), pages 333-334.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "The Data Deluge: An e-Science Perspective, chapter 36", "authors": [ { "first": "Tony", "middle": [], "last": "Hey", "suffix": "" }, { "first": "Anne", "middle": [], "last": "Trefethen", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1002/0470867167.ch36" ] }, "num": null, "urls": [], "raw_text": "Tony Hey and Anne Trefethen. 2003. The Data Deluge: An e-Science Perspective, chapter 36.
John Wiley & Sons, Ltd.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "The time course of language change", "authors": [ { "first": "Patrick", "middle": [], "last": "Juola", "suffix": "" } ], "year": 2003, "venue": "Computers and the Humanities", "volume": "37", "issue": "1", "pages": "77--96", "other_ids": { "DOI": [ "10.1023/A:1021839220474" ] }, "num": null, "urls": [], "raw_text": "Patrick Juola. 2003. The time course of lan- guage change. Computers and the Humanities, 37(1):77-96.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Big data of the past", "authors": [ { "first": "Fr\u00e9d\u00e9ric", "middle": [], "last": "Kaplan", "suffix": "" }, { "first": "Isabella", "middle": [ "Di" ], "last": "Lenardo", "suffix": "" } ], "year": 2017, "venue": "Frontiers Digit. Humanit", "volume": "4", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.3389/fdigh.2017.00012" ] }, "num": null, "urls": [], "raw_text": "Fr\u00e9d\u00e9ric Kaplan and Isabella Di Lenardo. 2017. Big data of the past. Frontiers Digit. Humanit., 4:12.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Erasing slavery: Memory, history, and race in new england", "authors": [ { "first": "M", "middle": [ "R" ], "last": "Margaret", "suffix": "" }, { "first": "", "middle": [], "last": "Kellow", "suffix": "" } ], "year": 1999, "venue": "Reviews in American History", "volume": "27", "issue": "4", "pages": "526--533", "other_ids": {}, "num": null, "urls": [], "raw_text": "Margaret MR Kellow. 1999. Erasing slavery: Mem- ory, history, and race in new england. Reviews in American History, 27(4):526-533.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Finding names in Trove: Named entity recognition for Australian historical newspapers", "authors": [ { "first": "Mac", "middle": [], "last": "Sunghwan", "suffix": "" }, { "first": "Steve", "middle": [], "last": "Kim", "suffix": "" }, { "first": "", "middle": [], "last": "Cassidy", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the Australasian Language Technology Association Workshop", "volume": "", "issue": "", "pages": "57--65", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sunghwan Mac Kim and Steve Cassidy. 2015. Find- ing names in Trove: Named entity recognition for Australian historical newspapers. In Proceedings of the Australasian Language Technology Association Workshop 2015, pages 57-65, Parramatta, Australia.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "How many data points is a prompt worth?", "authors": [ { "first": "Le", "middle": [], "last": "Teven", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Scao", "suffix": "" }, { "first": "", "middle": [], "last": "Rush", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "2627--2636", "other_ids": { "DOI": [ "10.18653/v1/2021.naacl-main.208" ] }, "num": null, "urls": [], "raw_text": "Teven Le Scao and Alexander Rush. 2021. How many data points is a prompt worth? In Proceedings of the 2021 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, pages 2627-2636, Online. 
Association for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "The power of scale for parameter-efficient prompt tuning", "authors": [ { "first": "Brian", "middle": [], "last": "Lester", "suffix": "" }, { "first": "Rami", "middle": [], "last": "Al-Rfou", "suffix": "" }, { "first": "Noah", "middle": [], "last": "Constant", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "3045--3059", "other_ids": { "DOI": [ "10.18653/v1/2021.emnlp-main.243" ] }, "num": null, "urls": [], "raw_text": "Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3045-3059, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "QaNER: Prompting question answering models for few-shot named entity recognition", "authors": [ { "first": "Andy", "middle": [ "T" ], "last": "Liu", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Xiao", "suffix": "" }, { "first": "Henghui", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Dejiao", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Shang-Wen", "middle": [], "last": "Li", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Arnold", "suffix": "" } ], "year": 2022, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andy T. Liu, Wei Xiao, Henghui Zhu, Dejiao Zhang, Shang-Wen Li, and Andrew Arnold. 2022. QaNER: Prompting question answering models for few-shot named entity recognition.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Pretrain, prompt, and predict: A systematic survey of prompting methods in natural language processing", "authors": [ { "first": "Pengfei", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Weizhe", "middle": [], "last": "Yuan", "suffix": "" }, { "first": "Jinlan", "middle": [], "last": "Fu", "suffix": "" }, { "first": "Zhengbao", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Hiroaki", "middle": [], "last": "Hayashi", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2021. Pretrain, prompt, and predict: A systematic survey of prompting methods in natural language processing.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "An open corpus for named entity recognition in historic newspapers", "authors": [ { "first": "Clemens", "middle": [], "last": "Neudecker", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)", "volume": "", "issue": "", "pages": "4348--4352", "other_ids": {}, "num": null, "urls": [], "raw_text": "Clemens Neudecker. 2016. An open corpus for named entity recognition in historic newspapers. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 4348-4352, Portoro\u017e, Slovenia.
European Language Resources Association (ELRA).", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Large-scale refinement of digital historic newspapers with named entity recognition", "authors": [ { "first": "Clemens", "middle": [], "last": "Neudecker", "suffix": "" }, { "first": "Lotte", "middle": [], "last": "Wilms", "suffix": "" }, { "first": "Willem", "middle": [ "Jan" ], "last": "Faber", "suffix": "" }, { "first": "Theo", "middle": [], "last": "Van Veen", "suffix": "" } ], "year": 2014, "venue": "IFLA Congress 2014 -Digital Transformation and the Changing Role of News Media in the 21st Century", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Clemens Neudecker, Lotte Wilms, Willem Jan Faber, and Theo van Veen. 2014. Large-scale refinement of digital historic newspapers with named entity recognition. In IFLA Congress 2014 -Digital Transformation and the Changing Role of News Media in the 21st Century.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "authors": [ { "first": "Colin", "middle": [], "last": "Raffel", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Roberts", "suffix": "" }, { "first": "Katherine", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Sharan", "middle": [], "last": "Narang", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Matena", "suffix": "" }, { "first": "Yanqi", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Peter", "middle": [ "J" ], "last": "Liu", "suffix": "" } ], "year": 2020, "venue": "Journal of Machine Learning Research", "volume": "21", "issue": "140", "pages": "1--67", "other_ids": {}, "num": null, "urls": [], "raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Black historical erasure: A critical comparative analysis in Rosewood and Ocoee", "authors": [ { "first": "Christelle", "middle": [], "last": "Ram", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christelle Ram. 2020. Black historical erasure: A critical comparative analysis in Rosewood and Ocoee. Ph.D. thesis, Rollins College.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "NLP's ImageNet moment has arrived", "authors": [ { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sebastian Ruder. 2018. NLP's ImageNet moment has arrived.
https://ruder.io/nlp-imagenet/.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "It's not just size that matters: Small language models are also few-shot learners", "authors": [ { "first": "Timo", "middle": [], "last": "Schick", "suffix": "" }, { "first": "Hinrich", "middle": [], "last": "Sch\u00fctze", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "2339--2352", "other_ids": { "DOI": [ "10.18653/v1/2021.naacl-main.185" ] }, "num": null, "urls": [], "raw_text": "Timo Schick and Hinrich Sch\u00fctze. 2021. It's not just size that matters: Small language models are also few-shot learners. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2339-2352, Online. Association for Computational Linguistics.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Beyond erasure: Indigenous genocide denial and settler colonialism", "authors": [ { "first": "Michelle", "middle": [ "A" ], "last": "Stanley", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michelle A Stanley. 2021. Beyond erasure: Indigenous genocide denial and settler colonialism. Routledge.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "The rise of digitization", "authors": [ { "first": "Melissa", "middle": [ "M" ], "last": "Terras", "suffix": "" } ], "year": 2011, "venue": "Digitisation Perspectives", "volume": "", "issue": "", "pages": "3--20", "other_ids": { "DOI": [ "10.1007/978-94-6091-299-3_1" ] }, "num": null, "urls": [], "raw_text": "Melissa M. Terras. 2011. The rise of digitization. In Ruth Rikowski, editor, Digitisation Perspectives, pages 3-20. Sense Publishers, Rotterdam.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "WiLI-2018 -Wikipedia Language Identification database", "authors": [ { "first": "Martin", "middle": [], "last": "Thoma", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.5281/zenodo.841984" ] }, "num": null, "urls": [], "raw_text": "Martin Thoma. 2018. WiLI-2018 -Wikipedia Language Identification database.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "Lukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems", "volume": "", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need.
In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998-6008.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Factual probing is [MASK]: Learning vs. learning to recall", "authors": [ { "first": "Zexuan", "middle": [], "last": "Zhong", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Friedman", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "5017--5033", "other_ids": { "DOI": [ "10.18653/v1/2021.naacl-main.398" ] }, "num": null, "urls": [], "raw_text": "Zexuan Zhong, Dan Friedman, and Danqi Chen. 2021. Factual probing is [MASK]: Learning vs. learning to recall. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5017-5033, Online. Association for Computational Linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "text": "Precision for the different languages at different Levenshtein distance thresholds. Languages are distinguished by the line color.", "num": null, "uris": null }, "FIGREF1": { "type_str": "figure", "text": "", "num": null, "uris": null }, "FIGREF2": { "type_str": "figure", "text": "Precision for the different languages at Levenshtein threshold 0.4 across periods. Languages are distinguished by both the line color and the type of dot.", "num": null, "uris": null }, "FIGREF3": { "type_str": "figure", "text": "Precision, recall and F1-score (resp. first, second and third rows) at different Levenshtein distance thresholds and for different languages. Languages are distinguished by line color. Precision, recall and F1-score (resp. first, second and third columns) by entity type at Levenshtein distance threshold 0.4.", "num": null, "uris": null }, "TABREF1": { "num": null, "type_str": "table", "content": "
Time period | #Documents | #Tokens | NE% (repeated for each of the three languages) [per-split figures not recovered from the PDF]
", "text": "Data description: splits by date and language of the CLEF-HIPE 2020 dataset. In input, what are the names of person? Separate answers with commas. LOC Input: \\n In input, what are the names of location? Separate answers with commas. PROD Input: \\n In input, what are the names of media or doctrine? Separate answers with commas.", "html": null }, "TABREF2": { "num": null, "type_str": "table", "content": "", "text": "", "html": null }, "TABREF5": { "num": null, "type_str": "table", "content": "
HIPE 2020's best results for coarse NER (literal) with fuzzy boundary.
", "text": "", "html": null }, "TABREF7": { "num": null, "type_str": "table", "content": "", "text": "Date prediction results.", "html": null } } } }