{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:11:33.367591Z"
},
"title": "Dataset Debt in Biomedical Language Modeling",
"authors": [
{
"first": "Jason",
"middle": [
"Alan"
],
"last": "Fries",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Natasha",
"middle": [],
"last": "Seelam",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Gabriel",
"middle": [],
"last": "Altay",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tempus Labs, Inc",
"location": {}
},
"email": ""
},
{
"first": "Leon",
"middle": [],
"last": "Weber",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Humboldt-Universit\u00e4t zu",
"location": {}
},
"email": ""
},
{
"first": "Myungsun",
"middle": [],
"last": "Kang",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Debajyoti",
"middle": [],
"last": "Datta",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Ruisi",
"middle": [],
"last": "Su",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Samuele",
"middle": [],
"last": "Garda",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Humboldt-Universit\u00e4t zu",
"location": {}
},
"email": ""
},
{
"first": "Bo",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Simon",
"middle": [],
"last": "Ott",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Samwald",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Wojciech",
"middle": [],
"last": "Kusa",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Large-scale language modeling and natural language prompting have demonstrated exciting capabilities for few and zero shot learning in NLP. However, translating these successes to specialized domains such as biomedicine remains challenging, due in part to biomedical NLP's significant dataset debt-the technical costs associated with data that are not consistently documented or easily incorporated into popular machine learning frameworks at scale. To assess this debt, we crowdsourced curation of datasheets for 167 biomedical datasets. We find that only 13% of datasets are available via programmatic access and 30% lack any documentation on licensing and permitted reuse. Our dataset catalog is available at: https://tinyurl.com/bigbio22.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "Large-scale language modeling and natural language prompting have demonstrated exciting capabilities for few and zero shot learning in NLP. However, translating these successes to specialized domains such as biomedicine remains challenging, due in part to biomedical NLP's significant dataset debt-the technical costs associated with data that are not consistently documented or easily incorporated into popular machine learning frameworks at scale. To assess this debt, we crowdsourced curation of datasheets for 167 biomedical datasets. We find that only 13% of datasets are available via programmatic access and 30% lack any documentation on licensing and permitted reuse. Our dataset catalog is available at: https://tinyurl.com/bigbio22.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Natural language prompting has recently demonstrated significant benefits for language model pretraining, including unifying task inputs for largescale multi-task supervision (Raffel et al., 2019) and improving zero-shot classification via explicit, multi-task prompted training data (Wei et al., 2022; Sanh et al., 2022) . With performance gains reported when scaling to thousands of prompted training tasks , tools that enable largescale integration of expert-labeled datasets hold great promise for improving zero-shot learning.",
"cite_spans": [
{
"start": 175,
"end": 196,
"text": "(Raffel et al., 2019)",
"ref_id": "BIBREF23"
},
{
"start": 284,
"end": 302,
"text": "(Wei et al., 2022;",
"ref_id": null
},
{
"start": 303,
"end": 321,
"text": "Sanh et al., 2022)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, translating these successes to specialized domains such as biomedicine face strong headwinds due in part to the current state of dataset accessibility in biomedical NLP. Recently data cascades was proposed as a term-of-art for the costs of undervaluing data in machine learning (Sambasivan et al., 2021) . We propose a similar term, dataset debt, to capture the technical costs (Sculley et al., 2015) of using datasets which are largely open and findable, but inconsistently documented, structured, and otherwise inaccessible via a consistent, programmatic interface. This type of debt creates significant practical challenges when integrating complex domain-specific corpora into popular machine learning frameworks.",
"cite_spans": [
{
"start": 287,
"end": 312,
"text": "(Sambasivan et al., 2021)",
"ref_id": "BIBREF24"
},
{
"start": 387,
"end": 409,
"text": "(Sculley et al., 2015)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We claim that biomedical NLP suffers from significant dataset debt. For example, while Hug-gingFace's popular Datasets library (Lhoest et al., 2021) contains over 3,000 datasets, biomedical data are underrepresented and favor tasks with general domain appeal such as question answering or semantic similarity (PubmedQA, SciTail, BIOSSES). To assess the state of biomedical dataset debt, we built, to our knowledge, the largest catalog of metadata for publicly available biomedical datasets. We document provenance, licensing, and other key attributes per (Gebru et al., 2021) to help guide future efforts for improving dataset access and machine learning reproducibility.",
"cite_spans": [
{
"start": 127,
"end": 148,
"text": "(Lhoest et al., 2021)",
"ref_id": "BIBREF18"
},
{
"start": 555,
"end": 575,
"text": "(Gebru et al., 2021)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our effort found low overall support for programmatic access, with only 13% (22/167) of our datasets present in the Datasets hub. Despite a proliferation of schemas designed to standardize dataset loading and harmonize task semantics. there remains no consistent, API interface for easily incorporating biomedical data into language model training at scale.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Deep learning models are increasingly moving to commodified architectures. Data-centric machine learning (vs. model-centric) is inspired by the observation that the performance gains provided by novel architectures are often smaller than gains obtained using better training data. We outline some key challenges and opportunities in data-centric language modeling. These are broadly applicable to NLP, but have strong relevance to biomedicine and the current state of dataset debt.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data-Centric Machine Learning",
"sec_num": "2"
},
{
"text": "Popular language models such as GPT-3 (Brown et al., 2020) do not incorporate scientific or medical corpora in their training mixture, contributing to their lower performance when used in biomedical domains and few-shot tasks (Moradi et al., 2021) . Additionally, simply training the language model on in-domain data might lead to non-trivial risks associated with the recapitulated biases from the training corpora (Zhang et al., 2020; Gururangan et al., 2022) .",
"cite_spans": [
{
"start": 38,
"end": 58,
"text": "(Brown et al., 2020)",
"ref_id": "BIBREF3"
},
{
"start": 226,
"end": 247,
"text": "(Moradi et al., 2021)",
"ref_id": "BIBREF20"
},
{
"start": 416,
"end": 436,
"text": "(Zhang et al., 2020;",
"ref_id": "BIBREF38"
},
{
"start": 437,
"end": 461,
"text": "Gururangan et al., 2022)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Curating and Cleaning Training Data",
"sec_num": "2.1"
},
{
"text": "In scientific literature, discounting source provenance could manifest as language models parroting conflicting or inaccurate scientific findings. Zhao et al. (Zhao et al., 2022) curated scientific corpora to identify patient-specific information (e.g., mining PubMed Central to identify case reports that respect licensing for re-use and re-distribution). With sufficient metadata and dataset provenance, this level of curation could be extended to the entire training corpus for a biomedical language model. Data cleaning has a large impact on language model performance. Deduplicating data leads to more accurate, more generalizable models requiring fewer training steps Lee et al., 2021) . Cleaning up the consistency of answer response strings was reported to improve biomedical question answering (Yoon et al., 2021) . Duplication contamination is a serious risk in biomedical datasets, which often iteratively build or extend prior annotations, introducing risk of test leakage in evaluation (Elangovan et al., 2021) .",
"cite_spans": [
{
"start": 159,
"end": 178,
"text": "(Zhao et al., 2022)",
"ref_id": "BIBREF40"
},
{
"start": 674,
"end": 691,
"text": "Lee et al., 2021)",
"ref_id": "BIBREF28"
},
{
"start": 803,
"end": 822,
"text": "(Yoon et al., 2021)",
"ref_id": "BIBREF37"
},
{
"start": 999,
"end": 1023,
"text": "(Elangovan et al., 2021)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Curating and Cleaning Training Data",
"sec_num": "2.1"
},
{
"text": "Biomedical domains require specialized knowledge, making expert-labeled datasets timeconsuming and expensive to generate. In limiteddata settings, distant and weakly supervised methods (Craven and Kumlien, 1999) are often used to combine curated, structured resources (e.g., knowledge bases, ontologies) with expert rules to programmatically label data. These approaches have demonstrated success across NER, relation extraction, and other biomedical applications (Kuleshov et al., 2019; Fries et al., 2021) . However these approaches typically are applied to real, albeit unlabeled data, creating challenges when modeling rare classes. A recent trend is transforming structured resources directly into realistic-looking, but synthetic training examples. KELM (Agarwal et al., 2021) converts Wiki knowledge graph triplets into synthesized natural language text for language model pretraining.",
"cite_spans": [
{
"start": 185,
"end": 211,
"text": "(Craven and Kumlien, 1999)",
"ref_id": "BIBREF6"
},
{
"start": 464,
"end": 487,
"text": "(Kuleshov et al., 2019;",
"ref_id": "BIBREF15"
},
{
"start": 488,
"end": 507,
"text": "Fries et al., 2021)",
"ref_id": "BIBREF9"
},
{
"start": 760,
"end": 782,
"text": "(Agarwal et al., 2021)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Programmatic Labeling",
"sec_num": "2.2"
},
{
"text": "Natural language prompting has emerged as a powerful technique for zero/few shot learning, where task guidance from prompts reduces sample complexity (Le Scao and Rush, 2021). Crosslingual prompting (English prompts, non-English examples) has demonstrated competitive classification performance (Lin et al., 2021) . Training language models directly on prompts has resulted in large gains in zero-shot performance over GPT-3 as well as producing models with fewer trained parameters (Sanh et al., 2022; Wei et al., 2022) .",
"cite_spans": [
{
"start": 295,
"end": 313,
"text": "(Lin et al., 2021)",
"ref_id": "BIBREF19"
},
{
"start": 483,
"end": 502,
"text": "(Sanh et al., 2022;",
"ref_id": "BIBREF25"
},
{
"start": 503,
"end": 520,
"text": "Wei et al., 2022)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Programmatic Labeling",
"sec_num": "2.2"
},
{
"text": "PromptSource (Bach et al., 2022 ) is a recent software platform for creating prompts and applying them to existing labeled datasets to build training data. These developments highlight a promising trend toward defining programmatic transformations on top of existing datasets, enabling them to be configured into new tasks. However, leveraging large-scale prompting remains challenging in biomedicine due to the lack of programmatic access to a large, diverse collections of biomedical datasets and tasks.",
"cite_spans": [
{
"start": 13,
"end": 31,
"text": "(Bach et al., 2022",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Programmatic Labeling",
"sec_num": "2.2"
},
{
"text": "Inspired by standardized benchmarks in general domain NLP research (Wang et al., 2018 (Wang et al., , 2019 , BioNLP takes similar initiatives by establishing a benchmark of 10 datasets spanning 5 tasks (Peng et al., 2019, BLUE) , an improved benchmark on BLUE with 13 datasets in 6 tasks (Gu et al., 2022, BLURB) , and a benchmark of 9 different tasks for Chinese biomedical NLP (Zhang et al., 2021, CBLUE) . While these benchmarks provide tools for consistent evaluation, only BLURB supports a leaderboard and none directly provide dataset access. Evaluation frameworks that provide programmatic access are often restricted to single and well-established tasks and impose pre-processing choices that can make inconsistent performance comparisons (Crichton et al., 2017; Weber et al., 2021) .",
"cite_spans": [
{
"start": 67,
"end": 85,
"text": "(Wang et al., 2018",
"ref_id": "BIBREF30"
},
{
"start": 86,
"end": 106,
"text": "(Wang et al., , 2019",
"ref_id": "BIBREF29"
},
{
"start": 202,
"end": 227,
"text": "(Peng et al., 2019, BLUE)",
"ref_id": null
},
{
"start": 288,
"end": 312,
"text": "(Gu et al., 2022, BLURB)",
"ref_id": null
},
{
"start": 379,
"end": 406,
"text": "(Zhang et al., 2021, CBLUE)",
"ref_id": null
},
{
"start": 747,
"end": 770,
"text": "(Crichton et al., 2017;",
"ref_id": "BIBREF7"
},
{
"start": 771,
"end": 790,
"text": "Weber et al., 2021)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Diverse Evaluation and Benchmarking",
"sec_num": "2.3"
},
{
"text": "To the best of our knowledge, there are currently no zero-shot evaluation frameworks for biomedical data similar to BIG-Bench 1 , which currently contains little-to-no biomedical tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Diverse Evaluation and Benchmarking",
"sec_num": "2.3"
},
{
"text": "Evaluation frameworks must also allow probing the trained language models' intrinsic properties, rather than only measure downstream classification performance. Following (Petroni et al., 2019) in the general NLP domain, introduce BioLAMA, a benchmark making available 49K biomedical knowledge triplets to probe the relational knowledge present in pre-trained language models.",
"cite_spans": [
{
"start": 171,
"end": 193,
"text": "(Petroni et al., 2019)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Diverse Evaluation and Benchmarking",
"sec_num": "2.3"
},
{
"text": "3 Datasets Summary",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Diverse Evaluation and Benchmarking",
"sec_num": "2.3"
},
{
"text": "Our inclusion criteria targeted expert-annotated datasets designated as public, reusable research benchmarks for one or more NLP tasks. We excluded: (1) multimodal datasets where removing the non-text modality undermines the task, e.g., visual question answering, audio transcription, image-to-text generation; (2) general resource datasets, e.g, the PMC Open Access Subset, MIMIC-III (Johnson et al., 2016); (3) derived resources, e.g., knowledge bases constructed via text mining; and (4) modeling artifacts, e.g., static embeddings or pretrained language models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metadata/Datasheet Curation",
"sec_num": "3.1"
},
{
"text": "We recruited 8 volunteers to identify datasets and crowdsource their metadata curation for an open, community dataset catalog. Participants reviewed dataset publications and websites which described the curation process, and then completed the metadata schema outlined in Table 1 This schema loosely assesses compliance with FAIR data principles (Wilkinson et al., 2016) .",
"cite_spans": [
{
"start": 346,
"end": 370,
"text": "(Wilkinson et al., 2016)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [
{
"start": 272,
"end": 279,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Metadata/Datasheet Curation",
"sec_num": "3.1"
},
{
"text": "Our initial effort identified 101 datasets. We combined this list with a contemporaneously curated catalog of biomedical datasets, identified via systematic literature review (Blagec et al., 2022 ). Since the catalog described in Blagec et al. (2022) was generated using broader inclusion criteria (e.g., non-public data, imaging and video datasets) we identified 104/475 entries that met our criteria. After merging, we conducted a second round of crowdsourcing to annotate metadata, resulting in our current catalog of 167 biomedical datasets. We did not conduct a formal assessment of interannotator agreement.",
"cite_spans": [
{
"start": 175,
"end": 195,
"text": "(Blagec et al., 2022",
"ref_id": "BIBREF2"
},
{
"start": 230,
"end": 250,
"text": "Blagec et al. (2022)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Metadata/Datasheet Curation",
"sec_num": "3.1"
},
{
"text": "Only 22/167 (13%) of biomedical datasets are available via the Datasets API, despite 123/167 (74%) being openly hosted on public websites. The remaining datasets require authentication to access Table 2 outlines the diversity of commonly used biomedical file formats. Most datasets are provided in semi-structured form (51%), followed by structured (22%), and non-standard plain text files (17%). There are several structured formats which propose a data model for parsing and standardizing task semantics (e.g., BRAT (Stenetorp et al., 2012 ), BioC (Comeau et al., 2013 ). However, for information extraction tasks which could use these formats, only 31/86 (36%) actually do. Table 2 outlines dataset licensing, broken down into six categories, largely based on commercial vs. non-commercial restrictions. These cover broad classes of licensing, ranging from permissive Creative Commons Share-Alike licenses to datasetspecific data-use agreements (DUA ",
"cite_spans": [
{
"start": 518,
"end": 541,
"text": "(Stenetorp et al., 2012",
"ref_id": "BIBREF27"
},
{
"start": 542,
"end": 570,
"text": "), BioC (Comeau et al., 2013",
"ref_id": null
}
],
"ref_spans": [
{
"start": 195,
"end": 202,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 677,
"end": 684,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Dataset Access",
"sec_num": "4.1"
},
{
"text": "Biomedical datasets (i.e., tasks built from scientific publications) made up 68% of available datasets while clinical datasets (patient notes, health news, clinical trial reports) made up 32%. Figure 2 : Cumulative count of datasets by task, ordered by year of dataset release. The black dashed line indicates the total number available via the Datasets API. Fig.2 shows the overall homogeneity of public biomedical datasets as of 2022. Information extraction tasks (e.g., NER, NED, releation extraction, coreference resolution) comprise 56%, followed by 20% text classification (e.g, document labeling, sentiment analysis), 13% question answering, and 6% semantic similarity.",
"cite_spans": [],
"ref_spans": [
{
"start": 193,
"end": 201,
"text": "Figure 2",
"ref_id": null
},
{
"start": 359,
"end": 364,
"text": "Fig.2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Dataset and Task Diversity",
"sec_num": "4.2"
},
{
"text": "Eng. Non-Eng. Given all tasks, 14 languages are covered. Five languages make up 95% of all datasets. English is the majority (80%), followed by Spanish (7.5%), German (2.4%), French (2.4%), and Chinese (2.4%). Table 4 contains counts of task categories binned into English and Non-English . Question answering and semantic similarity have zero non-English datasets.",
"cite_spans": [],
"ref_spans": [
{
"start": 210,
"end": 217,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Task Category",
"sec_num": null
},
{
"text": "In this work, we outlined several challenges in training biomedical language models. With increasingly large biomedical language models , limitations in the quality and properties of training data grow more stark. We argue that biomedical NLP suffers from significant dataset debt, with only 13% of datasets accessible via API access and readily usable in state-of-the-art NLP tools. Current biomedical datasets are homogeneous, largely focusing on NER and relation extraction tasks, and predominantly English language. These limitations highlight opportunities presented by recent data-centric machine learning methods such as prompting, which enables experts to inject task guidance into training and more easily reconfigure existing datasets into new training tasks. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "https://github.com/google/BIG-bench",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This section contains detailed descriptions of each metadata field collected for the dataset catalog.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.1 Metadata Overview",
"sec_num": null
},
{
"text": "The dataset name, preferring short forms (BC5CDR) as typically used on homepages or scientific publications over verbose ones (\"BioCreative 5 Chemical Disease Relation Task\").",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.1.1 Name",
"sec_num": null
},
{
"text": "Datasets contain labels for one or more tasks. Tables 5 and 6 outline the tasks we consider in this work. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.1.2 Task Types",
"sec_num": null
},
{
"text": "Source domain of the dataset.\u2022 Biomedical: Tasks are defined for scientific literature (e.g., PubMed abstacts, full-text publications from the PMC Open Access Subset).\u2022 Clinical: Tasks are defined for clinical notes from patient electronic health records, healthrelated questions from social media or news websites, clinical trial reports, etc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.1.3 Domain",
"sec_num": null
},
{
"text": "File formats provided by the original dataset creators. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.1.4 File format",
"sec_num": null
},
{
"text": "Provenance of labels used to create a dataset.\u2022 Manual: Expert annotators directly label data instances. This may include multiple rounds of adjudication.\u2022 Model-assisted Manual: Experts verify, correct, or augment the output of a model (e.g., pre-annotated entities are used by annotators to define relations).\u2022 Crowdsourced: Labels are the result of a voting process over multiple annotator's labels.\u2022 Rules: Heuristics developed by experts and applied to unlabeled text to create annotations. This includes a wide range of weak/distant supervision techniques.\u2022 Found: Generated from \"in-the-wild\" data, such as aligned pairs of translated text mined from web pages.\u2022 Unlabeled: no human-generated labels (e.g., the PMC Open Subset).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.1.5 Annotations",
"sec_num": null
},
{
"text": "URL of HuggingFace's Datasets implementation, otherwise \"no\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.1.6 API Access",
"sec_num": null
},
{
"text": "Are canonical train, validation, and test sets defined by the dataset creators? If so, which sets are provided. value \u2208 { NONE, train, valid, test }.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.1.7 Splits",
"sec_num": null
},
{
"text": "License information accompanying the dataset. Unknown licenses means the annotator could not find any information or formal legal documents on the homepage, software repository (e.g, GitHub, Google Code), or README with the data itself.\u2022 Public: Creative Commons (CC BY 3.0/4.0, CC BY-SA 3.0/4.0), Public Domain, GNU Free Documentation License, GNU Common Public License v3.0, MIT License, Apache License 2.0\u2022 Public Non-commercial: Creative Commons (CC BY NC 2.0/3.0/4.0, CC BY-NC-SA 4.0), CSIRO Data License (Non-commercial), Public for Research\u2022 DUA-NC: DUA for non-commercial use only.\u2022 DUA-C/NC: DUA for commercial and noncommercial uses.\u2022 DUA-UNK: DUA with unknown restrictions.\u2022 Unknown: Public-Unknown, Public w/ Registration",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.1.8 License",
"sec_num": null
},
{
"text": "Languages used in the labeled dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.1.9 Languages",
"sec_num": null
},
{
"text": "Dataset contains aligned pairs for two or more languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.1.10 Multilingual",
"sec_num": null
},
{
"text": "URL to the manuscript, DOI, and year of publication.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.1.11 Publication, Year",
"sec_num": null
},
{
"text": "Current citation count from Google Scholar, as of 02-22-2022. This measure was collected to provide a weak measure of dataset visibility. We note that citation count is a problematic measure of valuation and subject to many criticisms (Gruber, 2014) .A.1.13 Homepage, Public URL URL of website describing and hosting the dataset.If the dataset has a direct download link, denote if it is public or only available after authentication.",
"cite_spans": [
{
"start": 235,
"end": 249,
"text": "(Gruber, 2014)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A.1.12 Citations",
"sec_num": null
},
{
"text": "URL of dataset homepage, as documented in the source publication, is no longer active.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.1.14 Dead Link",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Knowledge graph based synthetic corpus generation for knowledge-enhanced language model pre-training",
"authors": [
{
"first": "Oshin",
"middle": [],
"last": "Agarwal",
"suffix": ""
},
{
"first": "Heming",
"middle": [],
"last": "Ge",
"suffix": ""
},
{
"first": "Siamak",
"middle": [],
"last": "Shakeri",
"suffix": ""
},
{
"first": "Rami",
"middle": [],
"last": "Al-Rfou",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "3554--3565",
"other_ids": {
"DOI": [
"10.18653/v1/2021.naacl-main.278"
]
},
"num": null,
"urls": [],
"raw_text": "Oshin Agarwal, Heming Ge, Siamak Shakeri, and Rami Al-Rfou. 2021. Knowledge graph based synthetic corpus generation for knowledge-enhanced language model pre-training. In Proceedings of the 2021 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, pages 3554-3565, Online. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Dragomir Radev, Mike Tian-Jian Jiang, and Alexander M. Rush. 2022. Promptsource: An integrated development environment and repository for natural language prompts",
"authors": [
{
"first": "H",
"middle": [],
"last": "Stephen",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Bach",
"suffix": ""
},
{
"first": "Zheng-Xin",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Albert",
"middle": [],
"last": "Yong",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Webson",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Raffel",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Nihal",
"suffix": ""
},
{
"first": "Abheesht",
"middle": [],
"last": "Nayak",
"suffix": ""
},
{
"first": "Taewoon",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Thibault",
"middle": [],
"last": "Bari",
"suffix": ""
},
{
"first": "Zaid",
"middle": [],
"last": "Fevry",
"suffix": ""
},
{
"first": "Manan",
"middle": [],
"last": "Alyafeai",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Dey",
"suffix": ""
},
{
"first": "Zhiqing",
"middle": [],
"last": "Santilli",
"suffix": ""
},
{
"first": "Srulik",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Canwen",
"middle": [],
"last": "Ben-David",
"suffix": ""
},
{
"first": "Gunjan",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Han",
"middle": [],
"last": "Chhablani",
"suffix": ""
},
{
"first": "Jason",
"middle": [
"Alan"
],
"last": "Wang",
"suffix": ""
},
{
"first": "Maged",
"middle": [
"S"
],
"last": "Fries",
"suffix": ""
},
{
"first": "Shanya",
"middle": [],
"last": "Al-Shaibani",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sharma",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen H. Bach, Victor Sanh, Zheng-Xin Yong, Al- bert Webson, Colin Raffel, Nihal V. Nayak, Ab- heesht Sharma, Taewoon Kim, M Saiful Bari, Thibault Fevry, Zaid Alyafeai, Manan Dey, An- drea Santilli, Zhiqing Sun, Srulik Ben-David, Can- wen Xu, Gunjan Chhablani, Han Wang, Jason Alan Fries, Maged S. Al-shaibani, Shanya Sharma, Ur- mish Thakker, Khalid Almubarak, Xiangru Tang, Dragomir Radev, Mike Tian-Jian Jiang, and Alexan- der M. Rush. 2022. Promptsource: An integrated development environment and repository for natural language prompts.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Benchmark datasets driving artificial intelligence development fail to capture the needs of medical professionals",
"authors": [
{
"first": "Kathrin",
"middle": [],
"last": "Blagec",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Kraiger",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Fr\u00fchwirt",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Samwald",
"suffix": ""
}
],
"year": 2022,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2201.07040"
]
},
"num": null,
"urls": [],
"raw_text": "Kathrin Blagec, Jakob Kraiger, Wolfgang Fr\u00fchwirt, and Matthias Samwald. 2022. Benchmark datasets driv- ing artificial intelligence development fail to capture the needs of medical professionals. arXiv preprint arXiv:2201.07040.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Language models are few-shot learners",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Brown",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Mann",
"suffix": ""
},
{
"first": "Nick",
"middle": [],
"last": "Ryder",
"suffix": ""
},
{
"first": "Melanie",
"middle": [],
"last": "Subbiah",
"suffix": ""
},
{
"first": "Jared",
"middle": [
"D"
],
"last": "Kaplan",
"suffix": ""
},
{
"first": "Prafulla",
"middle": [],
"last": "Dhariwal",
"suffix": ""
},
{
"first": "Arvind",
"middle": [],
"last": "Neelakantan",
"suffix": ""
},
{
"first": "Pranav",
"middle": [],
"last": "Shyam",
"suffix": ""
},
{
"first": "Girish",
"middle": [],
"last": "Sastry",
"suffix": ""
},
{
"first": "Amanda",
"middle": [],
"last": "Askell",
"suffix": ""
}
],
"year": 2020,
"venue": "Advances in neural information processing systems",
"volume": "33",
"issue": "",
"pages": "1877--1901",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877-1901.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Redundancy in electronic health record corpora: analysis, impact on text mining performance and mitigation strategies",
"authors": [
{
"first": "Raphael",
"middle": [],
"last": "Cohen",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Elhadad",
"suffix": ""
},
{
"first": "No\u00e9mie",
"middle": [],
"last": "Elhadad",
"suffix": ""
}
],
"year": 2013,
"venue": "BMC bioinformatics",
"volume": "14",
"issue": "1",
"pages": "1--15",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raphael Cohen, Michael Elhadad, and No\u00e9mie El- hadad. 2013. Redundancy in electronic health record corpora: analysis, impact on text mining perfor- mance and mitigation strategies. BMC bioinformat- ics, 14(1):1-15.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Bioc: a minimalist approach to interoperability for biomedical text processing",
"authors": [
{
"first": "C",
"middle": [],
"last": "Donald",
"suffix": ""
},
{
"first": "Rezarta",
"middle": [],
"last": "Comeau",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Islamaj Dogan",
"suffix": ""
},
{
"first": "Kevin",
"middle": [
"Bretonnel"
],
"last": "Ciccarese",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Cohen",
"suffix": ""
},
{
"first": "Florian",
"middle": [],
"last": "Krallinger",
"suffix": ""
},
{
"first": "Zhiyong",
"middle": [],
"last": "Leitner",
"suffix": ""
},
{
"first": "Yifan",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Fabio",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Manabu",
"middle": [],
"last": "Rinaldi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Torii",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Donald C Comeau, Rezarta Islamaj Dogan, Paolo Ci- ccarese, Kevin Bretonnel Cohen, Martin Krallinger, Florian Leitner, Zhiyong Lu, Yifan Peng, Fabio Ri- naldi, Manabu Torii, et al. 2013. Bioc: a minimalist approach to interoperability for biomedical text pro- cessing. Database, 2013.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Constructing biological knowledge bases by extracting information from text sources",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Craven",
"suffix": ""
},
{
"first": "Johan",
"middle": [],
"last": "Kumlien",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the Seventh International Conference on Intelligent Systems for Molecular Biology",
"volume": "",
"issue": "",
"pages": "77--86",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Craven and Johan Kumlien. 1999. Constructing biological knowledge bases by extracting information from text sources. In Proceedings of the Seventh International Conference on Intelligent Systems for Molecular Biology, August 6-10, 1999, Heidelberg, Germany, pages 77-86. AAAI.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A neural network multi-task learning approach to biomedical named entity recognition",
"authors": [
{
"first": "Gamal",
"middle": [],
"last": "Crichton",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
},
{
"first": "Billy",
"middle": [],
"last": "Chiu",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2017,
"venue": "BMC bioinformatics",
"volume": "18",
"issue": "1",
"pages": "1--14",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gamal Crichton, Sampo Pyysalo, Billy Chiu, and Anna Korhonen. 2017. A neural network multi-task learn- ing approach to biomedical named entity recognition. BMC bioinformatics, 18(1):1-14.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Memorization vs. generalization : Quantifying data leakage in NLP performance evaluation",
"authors": [
{
"first": "Aparna",
"middle": [],
"last": "Elangovan",
"suffix": ""
},
{
"first": "Jiayuan",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Karin",
"middle": [],
"last": "Verspoor",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume",
"volume": "",
"issue": "",
"pages": "1325--1335",
"other_ids": {
"DOI": [
"10.18653/v1/2021.eacl-main.113"
]
},
"num": null,
"urls": [],
"raw_text": "Aparna Elangovan, Jiayuan He, and Karin Verspoor. 2021. Memorization vs. generalization : Quantify- ing data leakage in NLP performance evaluation. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Lin- guistics: Main Volume, pages 1325-1335, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Ontology-driven weak supervision for clinical entity classification in electronic health records",
"authors": [
{
"first": "A",
"middle": [],
"last": "Jason",
"suffix": ""
},
{
"first": "Ethan",
"middle": [],
"last": "Fries",
"suffix": ""
},
{
"first": "Saelig",
"middle": [],
"last": "Steinberg",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Khattar",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Scott",
"suffix": ""
},
{
"first": "Jose",
"middle": [],
"last": "Fleming",
"suffix": ""
},
{
"first": "Alison",
"middle": [],
"last": "Posada",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Callahan",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Nigam",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Shah",
"suffix": ""
}
],
"year": 2021,
"venue": "Nature Communications",
"volume": "12",
"issue": "1",
"pages": "1--11",
"other_ids": {
"DOI": [
"10.1038/s41467-021-22328-4"
]
},
"num": null,
"urls": [],
"raw_text": "Jason A Fries, Ethan Steinberg, Saelig Khattar, Scott L Fleming, Jose Posada, Alison Callahan, and Nigam H Shah. 2021. Ontology-driven weak supervision for clinical entity classification in electronic health records. Nature Communications, 12(1):1-11.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Datasheets for datasets",
"authors": [
{
"first": "Timnit",
"middle": [],
"last": "Gebru",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Morgenstern",
"suffix": ""
},
{
"first": "Briana",
"middle": [],
"last": "Vecchione",
"suffix": ""
},
{
"first": "Jennifer",
"middle": [
"Wortman"
],
"last": "Vaughan",
"suffix": ""
},
{
"first": "Hanna",
"middle": [
"M"
],
"last": "Wallach",
"suffix": ""
},
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": "III"
},
{
"first": "Kate",
"middle": [],
"last": "Crawford",
"suffix": ""
}
],
"year": 2021,
"venue": "Commun. ACM",
"volume": "64",
"issue": "12",
"pages": "86--92",
"other_ids": {
"DOI": [
"10.1145/3458723"
]
},
"num": null,
"urls": [],
"raw_text": "Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna M. Wallach, Hal Daum\u00e9 III, and Kate Crawford. 2021. Datasheets for datasets. Commun. ACM, 64(12):86-92.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Academic sell-out: how an obsession with metrics and rankings is damaging academia",
"authors": [
{
"first": "Thorsten",
"middle": [],
"last": "Gruber",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of Marketing for Higher Education",
"volume": "24",
"issue": "2",
"pages": "165--177",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thorsten Gruber. 2014. Academic sell-out: how an obsession with metrics and rankings is damaging academia. Journal of Marketing for Higher Educa- tion, 24(2):165-177.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Jianfeng Gao, and Hoifung Poon. 2022. Domain-specific language model pretraining for biomedical natural language processing",
"authors": [
{
"first": "Yu",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Tinn",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Lucas",
"suffix": ""
},
{
"first": "Naoto",
"middle": [],
"last": "Usuyama",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Tristan",
"middle": [],
"last": "Naumann",
"suffix": ""
}
],
"year": null,
"venue": "ACM Trans. Comput. Heal",
"volume": "3",
"issue": "1",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/3458754"
]
},
"num": null,
"urls": [],
"raw_text": "Yu Gu, Robert Tinn, Hao Cheng, Michael Lucas, Naoto Usuyama, Xiaodong Liu, Tristan Naumann, Jian- feng Gao, and Hoifung Poon. 2022. Domain-specific language model pretraining for biomedical natural language processing. ACM Trans. Comput. Heal., 3(1):2:1-2:23.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Whose language counts as high quality? measuring language ideologies in text data selection",
"authors": [
{
"first": "Suchin",
"middle": [],
"last": "Gururangan",
"suffix": ""
},
{
"first": "Dallas",
"middle": [],
"last": "Card",
"suffix": ""
},
{
"first": "Sarah",
"middle": [
"K"
],
"last": "Dreier",
"suffix": ""
},
{
"first": "Emily",
"middle": [
"K"
],
"last": "Gade",
"suffix": ""
},
{
"first": "Leroy",
"middle": [
"Z"
],
"last": "Wang",
"suffix": ""
},
{
"first": "Zeyu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2022,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Suchin Gururangan, Dallas Card, Sarah K. Dreier, Emily K. Gade, Leroy Z. Wang, Zeyu Wang, Luke Zettlemoyer, and Noah A. Smith. 2022. Whose language counts as high quality? measuring lan- guage ideologies in text data selection. CoRR, abs/2201.10474.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Mimic-iii, a freely accessible critical care database",
"authors": [
{
"first": "E",
"middle": [
"W"
],
"last": "Alistair",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Tom",
"suffix": ""
},
{
"first": "Lu",
"middle": [],
"last": "Pollard",
"suffix": ""
},
{
"first": "Li-Wei H",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Mengling",
"middle": [],
"last": "Lehman",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Ghassemi",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Moody",
"suffix": ""
},
{
"first": "Leo",
"middle": [
"Anthony"
],
"last": "Szolovits",
"suffix": ""
},
{
"first": "Roger G",
"middle": [],
"last": "Celi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mark",
"suffix": ""
}
],
"year": 2016,
"venue": "Scientific data",
"volume": "3",
"issue": "1",
"pages": "1--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alistair EW Johnson, Tom J Pollard, Lu Shen, Li-wei H Lehman, Mengling Feng, Mohammad Ghassemi, Benjamin Moody, Peter Szolovits, Leo Anthony Celi, and Roger G Mark. 2016. Mimic-iii, a freely accessi- ble critical care database. Scientific data, 3(1):1-9.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A machine-compiled database of genomewide association studies",
"authors": [
{
"first": "Volodymyr",
"middle": [],
"last": "Kuleshov",
"suffix": ""
},
{
"first": "Jialin",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Vo",
"suffix": ""
},
{
"first": "Braden",
"middle": [],
"last": "Hancock",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Ratner",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "R\u00e9",
"suffix": ""
},
{
"first": "Serafim",
"middle": [],
"last": "Batzoglou",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Snyder",
"suffix": ""
}
],
"year": 2019,
"venue": "Nature communications",
"volume": "10",
"issue": "1",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Volodymyr Kuleshov, Jialin Ding, Christopher Vo, Braden Hancock, Alexander Ratner, Yang Li, Christo- pher R\u00e9, Serafim Batzoglou, and Michael Snyder. 2019. A machine-compiled database of genome- wide association studies. Nature communications, 10(1):1-8.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "How many data points is a prompt worth?",
"authors": [
{
"first": "Le",
"middle": [],
"last": "Teven",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Scao",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Rush",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "2627--2636",
"other_ids": {
"DOI": [
"10.18653/v1/2021.naacl-main.208"
]
},
"num": null,
"urls": [],
"raw_text": "Teven Le Scao and Alexander Rush. 2021. How many data points is a prompt worth? In Proceedings of the 2021 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, pages 2627-2636, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Chris Callison-Burch, and Nicholas Carlini. 2021. Deduplicating training data makes language models better",
"authors": [
{
"first": "Katherine",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Daphne",
"middle": [],
"last": "Ippolito",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Nystrom",
"suffix": ""
},
{
"first": "Chiyuan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Douglas",
"middle": [],
"last": "Eck",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2107.06499"
]
},
"num": null,
"urls": [],
"raw_text": "Katherine Lee, Daphne Ippolito, Andrew Nystrom, Chiyuan Zhang, Douglas Eck, Chris Callison-Burch, and Nicholas Carlini. 2021. Deduplicating training data makes language models better. arXiv preprint arXiv:2107.06499.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Datasets: A community library for natural language processing",
"authors": [
{
"first": "Quentin",
"middle": [],
"last": "Lhoest",
"suffix": ""
},
{
"first": "Albert",
"middle": [],
"last": "Villanova Del Moral",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "Abhishek",
"middle": [],
"last": "Thakur",
"suffix": ""
},
{
"first": "Suraj",
"middle": [],
"last": "Patrick Von Platen",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Patil",
"suffix": ""
},
{
"first": "Mariama",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Drame",
"suffix": ""
},
{
"first": "Lewis",
"middle": [],
"last": "Plu",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Tunstall",
"suffix": ""
},
{
"first": "Mario",
"middle": [],
"last": "Davison",
"suffix": ""
},
{
"first": "Gunjan",
"middle": [],
"last": "\u0160a\u0161ko",
"suffix": ""
},
{
"first": "Bhavitvya",
"middle": [],
"last": "Chhablani",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Malik",
"suffix": ""
},
{
"first": "Teven",
"middle": [
"Le"
],
"last": "Brandeis",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Scao",
"suffix": ""
},
{
"first": "Canwen",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Angelina",
"middle": [],
"last": "Patry",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Mcmillan-Major",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Schmid",
"suffix": ""
},
{
"first": "Cl\u00e9ment",
"middle": [],
"last": "Gugger",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Delangue",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "175--184",
"other_ids": {
"DOI": [
"10.18653/v1/2021.emnlp-demo.21"
]
},
"num": null,
"urls": [],
"raw_text": "Quentin Lhoest, Albert Villanova del Moral, Yacine Jernite, Abhishek Thakur, Patrick von Platen, Suraj Patil, Julien Chaumond, Mariama Drame, Julien Plu, Lewis Tunstall, Joe Davison, Mario \u0160a\u0161ko, Gun- jan Chhablani, Bhavitvya Malik, Simon Brandeis, Teven Le Scao, Victor Sanh, Canwen Xu, Nicolas Patry, Angelina McMillan-Major, Philipp Schmid, Sylvain Gugger, Cl\u00e9ment Delangue, Th\u00e9o Matus- si\u00e8re, Lysandre Debut, Stas Bekman, Pierric Cis- tac, Thibault Goehringer, Victor Mustar, Fran\u00e7ois Lagunas, Alexander Rush, and Thomas Wolf. 2021. Datasets: A community library for natural language processing. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Process- ing: System Demonstrations, pages 175-184, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Few-shot learning with multilingual language models",
"authors": [
{
"first": "Todor",
"middle": [],
"last": "Xi Victoria Lin",
"suffix": ""
},
{
"first": "Mikel",
"middle": [],
"last": "Mihaylov",
"suffix": ""
},
{
"first": "Tianlu",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "Shuohui",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Simig",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Shruti",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Bhosale",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Du",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2112.10668"
]
},
"num": null,
"urls": [],
"raw_text": "Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Na- man Goyal, Shruti Bhosale, Jingfei Du, et al. 2021. Few-shot learning with multilingual language models. arXiv preprint arXiv:2112.10668.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Gpt-3 models are poor few-shot learners in the biomedical domain",
"authors": [
{
"first": "Milad",
"middle": [],
"last": "Moradi",
"suffix": ""
},
{
"first": "Kathrin",
"middle": [],
"last": "Blagec",
"suffix": ""
},
{
"first": "Florian",
"middle": [],
"last": "Haberl",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Samwald",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2109.02555"
]
},
"num": null,
"urls": [],
"raw_text": "Milad Moradi, Kathrin Blagec, Florian Haberl, and Matthias Samwald. 2021. Gpt-3 models are poor few-shot learners in the biomedical domain. arXiv preprint arXiv:2109.02555.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Transfer learning in biomedical natural language processing: An evaluation of BERT and ELMo on ten benchmarking datasets",
"authors": [
{
"first": "Yifan",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Shankai",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Zhiyong",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 18th BioNLP Workshop and Shared Task",
"volume": "",
"issue": "",
"pages": "58--65",
"other_ids": {
"DOI": [
"10.18653/v1/W19-5006"
]
},
"num": null,
"urls": [],
"raw_text": "Yifan Peng, Shankai Yan, and Zhiyong Lu. 2019. Trans- fer learning in biomedical natural language process- ing: An evaluation of BERT and ELMo on ten bench- marking datasets. In Proceedings of the 18th BioNLP Workshop and Shared Task, pages 58-65, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Language models as knowledge bases?",
"authors": [
{
"first": "Fabio",
"middle": [],
"last": "Petroni",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rockt\u00e4schel",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Anton",
"middle": [],
"last": "Bakhtin",
"suffix": ""
},
{
"first": "Yuxiang",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Miller",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "2463--2473",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1250"
]
},
"num": null,
"urls": [],
"raw_text": "Fabio Petroni, Tim Rockt\u00e4schel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowl- edge bases? In Proceedings of the 2019 Confer- ence on Empirical Methods in Natural Language Pro- cessing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2463-2473, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Exploring the limits of transfer learning with a unified text-to-text transformer",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Raffel",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Roberts",
"suffix": ""
},
{
"first": "Katherine",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Sharan",
"middle": [],
"last": "Narang",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Matena",
"suffix": ""
},
{
"first": "Yanqi",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Peter J",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.10683"
]
},
"num": null,
"urls": [],
"raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text trans- former. arXiv preprint arXiv:1910.10683.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "everyone wants to do the model work, not the data work\": Data cascades in high-stakes ai",
"authors": [
{
"first": "Nithya",
"middle": [],
"last": "Sambasivan",
"suffix": ""
},
{
"first": "Shivani",
"middle": [],
"last": "Kapania",
"suffix": ""
},
{
"first": "Hannah",
"middle": [],
"last": "Highfill",
"suffix": ""
},
{
"first": "Diana",
"middle": [],
"last": "Akrong",
"suffix": ""
},
{
"first": "Praveen",
"middle": [],
"last": "Paritosh",
"suffix": ""
},
{
"first": "Lora",
"middle": [
"M"
],
"last": "Aroyo",
"suffix": ""
}
],
"year": 2021,
"venue": "proceedings of the 2021 CHI Conference on Human Factors in Computing Systems",
"volume": "",
"issue": "",
"pages": "1--15",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nithya Sambasivan, Shivani Kapania, Hannah Highfill, Diana Akrong, Praveen Paritosh, and Lora M Aroyo. 2021. \"everyone wants to do the model work, not the data work\": Data cascades in high-stakes ai. In proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, pages 1-15.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Multitask prompted training enables zero-shot task generalization",
"authors": [
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Albert",
"middle": [],
"last": "Webson",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Raffel",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Bach",
"suffix": ""
},
{
"first": "Lintang",
"middle": [],
"last": "Sutawika",
"suffix": ""
},
{
"first": "Zaid",
"middle": [],
"last": "Alyafeai",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Chaffin",
"suffix": ""
},
{
"first": "Arnaud",
"middle": [],
"last": "Stiegler",
"suffix": ""
},
{
"first": "Arun",
"middle": [],
"last": "Raja",
"suffix": ""
},
{
"first": "Manan",
"middle": [],
"last": "Dey",
"suffix": ""
},
{
"first": "Canwen",
"middle": [],
"last": "Bari",
"suffix": ""
},
{
"first": "Urmish",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Shanya",
"middle": [],
"last": "Thakker",
"suffix": ""
},
{
"first": "Eliza",
"middle": [],
"last": "Sharma Sharma",
"suffix": ""
},
{
"first": "Taewoon",
"middle": [],
"last": "Szczechla",
"suffix": ""
},
{
"first": "Gunjan",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Nihal",
"middle": [],
"last": "Chhablani",
"suffix": ""
},
{
"first": "Debajyoti",
"middle": [],
"last": "Nayak",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Datta",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Tian-Jian",
"suffix": ""
},
{
"first": "Han",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Matteo",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Sheng",
"middle": [],
"last": "Manica",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Shen",
"suffix": ""
}
],
"year": 2022,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Victor Sanh, Albert Webson, Colin Raffel, Stephen Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Arun Raja, Manan Dey, M Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Tr- ishala Neeraj, Jos Rozen, Abheesht Sharma, An- drea Santilli, Thibault Fevry, Jason Alan Fries, Ryan Teehan, Teven Le Scao, Stella Biderman, Leo Gao, Thomas Wolf, and Alexander M Rush. 2022. Multi- task prompted training enables zero-shot task gener- alization. In International Conference on Learning Representations.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Hidden technical debt in machine learning systems",
"authors": [
{
"first": "David",
"middle": [],
"last": "Sculley",
"suffix": ""
},
{
"first": "Gary",
"middle": [],
"last": "Holt",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Golovin",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Davydov",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Phillips",
"suffix": ""
},
{
"first": "Dietmar",
"middle": [],
"last": "Ebner",
"suffix": ""
},
{
"first": "Vinay",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Young",
"suffix": ""
},
{
"first": "Jean-Francois",
"middle": [],
"last": "Crespo",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Dennison",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Sculley, Gary Holt, Daniel Golovin, Eugene Davydov, Todd Phillips, Dietmar Ebner, Vinay Chaudhary, Michael Young, Jean-Francois Crespo, and Dan Dennison. 2015. Hidden technical debt in machine learning systems. Advances in neural infor- mation processing systems, 28.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Brat: a web-based tool for nlp-assisted text annotation",
"authors": [
{
"first": "Pontus",
"middle": [],
"last": "Stenetorp",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
},
{
"first": "Goran",
"middle": [],
"last": "Topi\u0107",
"suffix": ""
},
{
"first": "Tomoko",
"middle": [],
"last": "Ohta",
"suffix": ""
},
{
"first": "Sophia",
"middle": [],
"last": "Ananiadou",
"suffix": ""
},
{
"first": "Jun'ichi",
"middle": [],
"last": "Tsujii",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Demonstrations at the 13th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "102--107",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pontus Stenetorp, Sampo Pyysalo, Goran Topi\u0107, Tomoko Ohta, Sophia Ananiadou, and Jun'ichi Tsujii. 2012. Brat: a web-based tool for nlp-assisted text annotation. In Proceedings of the Demonstrations at the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 102-107.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Can language models be biomedical knowledge bases?",
"authors": [
{
"first": "Mujeen",
"middle": [],
"last": "Sung",
"suffix": ""
},
{
"first": "Jinhyuk",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Sean",
"middle": [],
"last": "Yi",
"suffix": ""
},
{
"first": "Minji",
"middle": [],
"last": "Jeon",
"suffix": ""
},
{
"first": "Sungdong",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Jaewoo",
"middle": [],
"last": "Kang",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "4723--4734",
"other_ids": {
"DOI": [
"10.18653/v1/2021.emnlp-main.388"
]
},
"num": null,
"urls": [],
"raw_text": "Mujeen Sung, Jinhyuk Lee, Sean Yi, Minji Jeon, Sung- dong Kim, and Jaewoo Kang. 2021. Can language models be biomedical knowledge bases? In Proceed- ings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 4723-4734, Online and Punta Cana, Dominican Republic. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Superglue: A stickier benchmark for general-purpose language understanding systems",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yada",
"middle": [],
"last": "Pruksachatkun",
"suffix": ""
},
{
"first": "Nikita",
"middle": [],
"last": "Nangia",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Wang, Yada Pruksachatkun, Nikita Nangia, Aman- preet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2019. Superglue: A stick- ier benchmark for general-purpose language under- standing systems. Advances in neural information processing systems, 32.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "353--355",
"other_ids": {
"DOI": [
"10.18653/v1/W18-5446"
]
},
"num": null,
"urls": [],
"raw_text": "Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353-355, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "HunFlair: an easy-to-use tool for state-of-the-art biomedical named entity recognition",
"authors": [
{
"first": "Leon",
"middle": [],
"last": "Weber",
"suffix": ""
},
{
"first": "Mario",
"middle": [],
"last": "S\u00e4nger",
"suffix": ""
},
{
"first": "Jannes",
"middle": [],
"last": "M\u00fcnchmeyer",
"suffix": ""
},
{
"first": "Maryam",
"middle": [],
"last": "Habibi",
"suffix": ""
},
{
"first": "Ulf",
"middle": [],
"last": "Leser",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Akbik",
"suffix": ""
}
],
"year": 2021,
"venue": "Bioinformatics",
"volume": "37",
"issue": "17",
"pages": "2792--2794",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leon Weber, Mario S\u00e4nger, Jannes M\u00fcnchmeyer, Maryam Habibi, Ulf Leser, and Alan Akbik. 2021. HunFlair: an easy-to-use tool for state-of-the-art biomedical named entity recognition. Bioinformatics, 37(17):2792-2794.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Finetuned language models are zero-shot learners",
"authors": [
{
"first": "",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": 2022,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dai, and Quoc V Le. 2022. Finetuned language models are zero-shot learners. In International Conference on Learning Representations.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "The FAIR guiding principles for scientific data management and stewardship",
"authors": [
{
"first": "Mark",
"middle": [
"D"
],
"last": "Wilkinson",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Dumontier",
"suffix": ""
},
{
"first": "Ijsbrand",
"middle": [
"Jan"
],
"last": "Aalbersberg",
"suffix": ""
},
{
"first": "Gabrielle",
"middle": [],
"last": "Appleton",
"suffix": ""
},
{
"first": "Myles",
"middle": [],
"last": "Axton",
"suffix": ""
},
{
"first": "Arie",
"middle": [],
"last": "Baak",
"suffix": ""
},
{
"first": "Niklas",
"middle": [],
"last": "Blomberg",
"suffix": ""
},
{
"first": "Jan-Willem",
"middle": [],
"last": "Boiten",
"suffix": ""
},
{
"first": "Luiz",
"middle": [],
"last": "Bonino Da Silva Santos",
"suffix": ""
},
{
"first": "Philip",
"middle": [
"E"
],
"last": "Bourne",
"suffix": ""
}
],
"year": 2016,
"venue": "Scientific data",
"volume": "3",
"issue": "1",
"pages": "1--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark D Wilkinson, Michel Dumontier, IJsbrand Jan Aalbersberg, Gabrielle Appleton, Myles Axton, Arie Baak, Niklas Blomberg, Jan-Willem Boiten, Luiz Bonino da Silva Santos, Philip E Bourne, et al. 2016. The FAIR guiding principles for scientific data management and stewardship. Scientific Data, 3(1):1-9.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "ZeroPrompt: Scaling prompt-based pretraining to 1,000 tasks improves zero-shot generalization",
"authors": [
{
"first": "Hanwei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Yujun",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yulun",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Shao",
"suffix": ""
},
{
"first": "Yanggang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Haiyu",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2022,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2201.06910"
]
},
"num": null,
"urls": [],
"raw_text": "Hanwei Xu, Yujun Chen, Yulun Du, Nan Shao, Yanggang Wang, Haiyu Li, and Zhilin Yang. 2022. ZeroPrompt: Scaling prompt-based pretraining to 1,000 tasks improves zero-shot generalization. arXiv preprint arXiv:2201.06910.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "GatorTron: A large clinical language model to unlock patient information from unstructured electronic health records",
"authors": [
{
"first": "Xi",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Nima",
"middle": [
"Pour"
],
"last": "Nejatian",
"suffix": ""
},
{
"first": "Hoo",
"middle": [
"Chang"
],
"last": "Shin",
"suffix": ""
},
{
"first": "Kaleb",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Parisien",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Compas",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Flores",
"suffix": ""
},
{
"first": "Ying",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Tanja",
"middle": [],
"last": "Magoc",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Harle",
"suffix": ""
}
],
"year": 2022,
"venue": "medRxiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xi Yang, Nima Pour Nejatian, Hoo Chang Shin, Kaleb Smith, Christopher Parisien, Colin Compas, Mona Flores, Ying Zhang, Tanja Magoc, Christopher Harle, et al. 2022. GatorTron: A large clinical language model to unlock patient information from unstructured electronic health records. medRxiv.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "KU-DMIS at BioASQ 9: Data-centric and model-centric approaches for biomedical question answering",
"authors": [
{
"first": "Wonjin",
"middle": [],
"last": "Yoon",
"suffix": ""
},
{
"first": "Jaehyo",
"middle": [],
"last": "Yoo",
"suffix": ""
},
{
"first": "Sumin",
"middle": [],
"last": "Seo",
"suffix": ""
},
{
"first": "Mujeen",
"middle": [],
"last": "Sung",
"suffix": ""
},
{
"first": "Minbyul",
"middle": [],
"last": "Jeong",
"suffix": ""
},
{
"first": "Gangwoo",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Jaewoo",
"middle": [],
"last": "Kang",
"suffix": ""
}
],
"year": 2021,
"venue": "CEUR Workshop Proceedings",
"volume": "2936",
"issue": "",
"pages": "351--359",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wonjin Yoon, Jaehyo Yoo, Sumin Seo, Mujeen Sung, Minbyul Jeong, Gangwoo Kim, and Jaewoo Kang. 2021. KU-DMIS at BioASQ 9: Data-centric and model-centric approaches for biomedical question answering. In CEUR Workshop Proceedings, volume 2936, pages 351-359. CEUR-WS.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Hurtful words: quantifying biases in clinical contextual word embeddings",
"authors": [
{
"first": "Haoran",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Amy",
"middle": [
"X"
],
"last": "Lu",
"suffix": ""
},
{
"first": "Mohamed",
"middle": [],
"last": "Abdalla",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "McDermott",
"suffix": ""
},
{
"first": "Marzyeh",
"middle": [],
"last": "Ghassemi",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the ACM Conference on Health, Inference, and Learning",
"volume": "",
"issue": "",
"pages": "110--120",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haoran Zhang, Amy X Lu, Mohamed Abdalla, Matthew McDermott, and Marzyeh Ghassemi. 2020. Hurtful words: quantifying biases in clinical contextual word embeddings. In Proceedings of the ACM Conference on Health, Inference, and Learning, pages 110-120.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "CBLUE: A Chinese biomedical language understanding evaluation benchmark",
"authors": [
{
"first": "Ningyu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhen",
"middle": [],
"last": "Bi",
"suffix": ""
},
{
"first": "Xiaozhuan",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Shumin",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Luoqiu",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xin",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Hongbin",
"middle": [],
"last": "Ye",
"suffix": ""
},
{
"first": "Xin",
"middle": [],
"last": "Shang",
"suffix": ""
},
{
"first": "Kangping",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Chuanqi",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Mosha",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Luo",
"middle": [],
"last": "Si",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Ni",
"suffix": ""
},
{
"first": "Guotong",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Zhifang",
"middle": [],
"last": "Sui",
"suffix": ""
},
{
"first": "Baobao",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Hui",
"middle": [],
"last": "Zong",
"suffix": ""
},
{
"first": "Zheng",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Linfeng",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Hongying",
"middle": [],
"last": "Zan",
"suffix": ""
},
{
"first": "Kunli",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Huajun",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Buzhou",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Qingcai",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2021,
"venue": "CoRR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2106.08087"
]
},
"num": null,
"urls": [],
"raw_text": "Ningyu Zhang, Zhen Bi, Xiaozhuan Liang, Lei Li, Xiang Chen, Shumin Deng, Luoqiu Li, Xin Xie, Hongbin Ye, Xin Shang, Kangping Yin, Chuanqi Tan, Jian Xu, Mosha Chen, Fei Huang, Luo Si, Yuan Ni, Guotong Xie, Zhifang Sui, Baobao Chang, Hui Zong, Zheng Yuan, Linfeng Li, Jun Yan, Hongying Zan, Kunli Zhang, Huajun Chen, Buzhou Tang, and Qingcai Chen. 2021. CBLUE: A Chinese biomedical language understanding evaluation benchmark. CoRR, abs/2106.08087.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "PMC-Patients: A large-scale dataset of patient notes and relations extracted from case reports in PubMed Central",
"authors": [
{
"first": "Zhengyun",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Qiao",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Sheng",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2022,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2202.13876"
]
},
"num": null,
"urls": [],
"raw_text": "Zhengyun Zhao, Qiao Jin, and Sheng Yu. 2022. PMC-Patients: A large-scale dataset of patient notes and relations extracted from case reports in PubMed Central. arXiv preprint arXiv:2202.13876.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "All NLP tasks, broken down into 5 categories (see legend). Note datasets often support multiple tasks."
},
"FIGREF1": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Scientific/biomedical domain (e.g., PubMed abstracts) cumulative distribution of available tasks, ordered by year of dataset release. Clinical domain (e.g., patient notes) cumulative distribution of available tasks, ordered by year of dataset release."
},
"TABREF1": {
"html": null,
"type_str": "table",
"content": "<table><tr><td colspan=\"2\">(21%) or were dead links (5%).</td><td/><td/></tr><tr><td>Format</td><td>Name</td><td colspan=\"2\">Count Total</td></tr><tr><td>Structured</td><td>BioC</td><td>5</td><td>3%</td></tr><tr><td>Structured</td><td>BRAT</td><td colspan=\"2\">16 10%</td></tr><tr><td>Structured</td><td>CoNLL</td><td>11</td><td>7%</td></tr><tr><td>Structured</td><td>PubTator</td><td>4</td><td>2%</td></tr><tr><td colspan=\"2\">Semi-structured XML</td><td colspan=\"2\">26 16%</td></tr><tr><td colspan=\"2\">Semi-structured JSON</td><td colspan=\"2\">43 26%</td></tr><tr><td colspan=\"2\">Semi-structured TSV/CSV</td><td>15</td><td>9%</td></tr><tr><td colspan=\"2\">Semi-structured TMX</td><td>1</td><td>1%</td></tr><tr><td>Plain Text</td><td>Standoff</td><td>13</td><td>8%</td></tr><tr><td>Plain Text</td><td>Text</td><td colspan=\"2\">25 15%</td></tr><tr><td>Plain Text</td><td>ARFF</td><td>1</td><td>1%</td></tr><tr><td>Binary</td><td>Word</td><td>1</td><td>1%</td></tr><tr><td>Binary</td><td>Excel</td><td>2</td><td>1%</td></tr><tr><td>Unknown</td><td>Unknown</td><td>4</td><td>2%</td></tr></table>",
"num": null,
"text": "Metadata collected for all biomedical datasets. See Appendix A for more details on each category."
},
"TABREF2": {
"html": null,
"type_str": "table",
"content": "<table/>",
"num": null,
"text": "Distribution of file formats for biomedical datasets."
},
"TABREF4": {
"html": null,
"type_str": "table",
"content": "<table><tr><td>: Dataset licenses. Restrictions are commercial (C), non-commercial (NC) and unknown (?).</td></tr></table>",
"num": null,
"text": ""
},
"TABREF6": {
"html": null,
"type_str": "table",
"content": "<table/>",
"num": null,
"text": "Task category counts by English (Eng.) and Non-English (Non-Eng.) languages."
},
"TABREF8": {
"html": null,
"type_str": "table",
"content": "<table/>",
"num": null,
"text": "Tasks by language 145"
}
}
}
}