|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T12:27:31.142494Z" |
|
}, |
|
"title": "Evaluation of Transfer Learning for Adverse Drug Event (ADE) and Medication Entity Extraction", |
|
"authors": [ |
|
{ |
|
"first": "Sankaran", |
|
"middle": [], |
|
"last": "Narayanan", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Kaivalya", |
|
"middle": [], |
|
"last": "Mannam", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Sreeranga", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Rajan", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [ |
|
"Venkat" |
|
], |
|
"last": "Rangan", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}

],
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We evaluate several biomedical contextual embeddings (based on BERT, ELMo, and Flair) for the detection of medication entities such as Drugs and Adverse Drug Events (ADE) from Electronic Health Records (EHR) using the 2018 ADE and Medication Extraction (Track 2) n2c2 data-set. We identify best practices for transfer learning, such as language-model fine-tuning and scalar mix. Our transfer learning models achieve strong performance in the overall task (F1=92.91%) as well as in ADE identification (F1=53.08%). Flair-based embeddings out-perform in the identification of context-dependent entities such as ADE. BERT-based embeddings out-perform in recognizing clinical terminology such as Drug and Form entities. ELMo-based embeddings deliver competitive performance in all entities. We develop a sentence-augmentation method for enhanced ADE identification benefiting BERT-based and ELMo-based models by up to 3.13% in F1 gains. Finally, we show that a simple ensemble of these models outpaces most current methods in ADE extraction (F1=55.77%).",
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We evaluate several biomedical contextual embeddings (based on BERT, ELMo, and Flair) for the detection of medication entities such as Drugs and Adverse Drug Events (ADE) from Electronic Health Records (EHR) using the 2018 ADE and Medication Extraction (Track 2) n2c2 data-set. We identify best practices for transfer learning, such as language-model fine-tuning and scalar mix. Our transfer learning models achieve strong performance in the overall task (F1=92.91%) as well as in ADE identification (F1=53.08%). Flair-based embeddings out-perform in the identification of context-dependent entities such as ADE. BERT-based embeddings out-perform in recognizing clinical terminology such as Drug and Form entities. ELMo-based embeddings deliver competitive performance in all entities. We develop a sentence-augmentation method for enhanced ADE identification benefiting BERT-based and ELMo-based models by up to 3.13% in F1 gains. Finally, we show that a simple ensemble of these models outpaces most current methods in ADE extraction (F1=55.77%).",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Adverse Drug Events (ADE) arising from the medical intervention of drugs account for 1.3 million visits to the emergency department in the United States alone (CDC, 2017). Randomized controlled trials (RCTs), the primary mechanism for monitoring and identifying ADEs, are hampered by insufficient sample sizes of clinical trials (Sultana et al., 2013). Pharmacovigilance databases such as the Food and Drug Administration's Adverse Event Reporting System (FAERS) strive to be authoritative sources for Physicians; however, they require regular manual data entry (Hoffman et al., 2014; Chedid et al., 2018).",
|
"cite_spans": [ |
|
{ |
|
"start": 159, |
|
"end": 170, |
|
"text": "(CDC, 2017)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 330, |
|
"end": 352, |
|
"text": "(Sultana et al., 2013)", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 564, |
|
"end": 586, |
|
"text": "(Hoffman et al., 2014;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 587, |
|
"end": 607, |
|
"text": "Chedid et al., 2018)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Electronic Health Records (EHRs) contain valuable information about patient medication history: drugs prescribed, reasons for administration, dosages/strengths, and ADEs. Automated extraction of these medication entities by Natural Language Processing (NLP) techniques can facilitate wide-scale pharmacovigilance (Moore and Furberg, 2015; Liu et al., 2019a).",
|
"cite_spans": [ |
|
{ |
|
"start": 313, |
|
"end": 338, |
|
"text": "(Moore and Furberg, 2015;", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 339, |
|
"end": 357, |
|
"text": "Liu et al., 2019a)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Incorporating such a predictive system within the clinical note-taking interface may help the Physician by alleviating the need to access external clinical decision support applications (Chen et al., 2016). For instance, if a physician notes down 'started on Dilantin for seizure prophylaxis for a few days', the text could be quickly parsed -highlighting 'Dilantin' as a drug, 'seizure prophylaxis' as the reason for administration, 'few days' as the duration, and warnings of 'eye discharge', 'oral sores', etc. as potential ADEs. In the example given, 'seizure prophylaxis' and 'few days' may occur anywhere in the clinical text, but only in the context of 'Dilantin' do they indicate the reason / duration for administration. Besides, such 'dynamic' interfaces can help medical students learn from their collective experiences.",
|
"cite_spans": [ |
|
{ |
|
"start": 186, |
|
"end": 205, |
|
"text": "(Chen et al., 2016)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Among medication entities, ADE and Reason are challenging to disambiguate (Henry et al., 2020). Frequently, the specific reason for drug administration may appear in a subsequent sentence (Dandala et al., 2020). Besides, ADE data-sets include gold annotations for these entities only if they are associated with a drug. Doing so leads to a significant reduction in the number of gold annotations (Wei et al., 2020).",
|
"cite_spans": [ |
|
{ |
|
"start": 74, |
|
"end": 94, |
|
"text": "(Henry et al., 2020)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 189, |
|
"end": 211, |
|
"text": "(Dandala et al., 2020)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 398, |
|
"end": 416, |
|
"text": "(Wei et al., 2020)", |
|
"ref_id": "BIBREF36" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "As part of our work in uniting clinical decision support functions and note-taking interfaces, we needed to develop a high-performing medication extraction model using open-source NLP frameworks. Following (Miller et al., 2019), we modeled this as a named-entity recognition task (Uzuner et al., 2011; Si et al., 2019) and experimented with transfer learning using openly available biomedical contextual embeddings.",
|
"cite_spans": [ |
|
{ |
|
"start": 206, |
|
"end": 227, |
|
"text": "(Miller et al., 2019)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 281, |
|
"end": 301, |
|
"text": "(Uzuner et al., 2011; Si et al., 2019)",
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "[Table 1, top rows: 1. Alibaba Inc. (Henry et al., 2020) - BiLSTM-CRF with ELMo embedding and Section Features - Overall F1 94.18, ADE F1 58.73; 2. Dandala et al. (2020) - BiLSTM-CRF with custom-trained ELMo using MIMIC-III, knowledge-embeddings from FAERS, and custom pre-processing - Overall F1 93.5, ADE F1 53.5; 3. Wei et al. (2020).]",
|
"cite_spans": [ |
|
{ |
|
"start": 36, |
|
"end": 56, |
|
"text": "(Henry et al., 2020)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 95, |
|
"end": 116, |
|
"text": "Dandala et al. (2020)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "It is in this context that: 1. We evaluate transfer learning models incorporating BioBERT (Lee et al., 2020), ClinicalBERT (Alsentzer et al., 2019), ELMo (Peters et al., 2018), and Flair (Akbik et al., 2018) contextual embeddings pre-trained on PubMed abstracts (Fiorini et al., 2018).",
|
"cite_spans": [ |
|
{ |
|
"start": 16, |
|
"end": 29, |
|
"text": "et al., 2011;", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 30, |
|
"end": 46, |
|
"text": "Si et al., 2019)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 230, |
|
"end": 248, |
|
"text": "(Lee et al., 2020)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 265, |
|
"end": 289, |
|
"text": "(Alsentzer et al., 2019)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 297, |
|
"end": 318, |
|
"text": "(Peters et al., 2018)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 329, |
|
"end": 349, |
|
"text": "(Akbik et al., 2018)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 404, |
|
"end": 426, |
|
"text": "(Fiorini et al., 2018)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "2. We evaluate embedding-specific methods to maximize performance: language-model fine-tuning, scalar mix, and sub-word token aggregation.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "3. Based on the performance of the transfer learning models, we develop procedures for enhanced ADE and Reason identification. Sentence-augmentation at prediction-time benefits ADE extraction by up to +3.13% in F1 gains. It also facilitates a deeper understanding of the behavior of the embeddings. Ensembling strategies help improve the performance of all three challenging entities: ADE, Duration, and Reason, with up to +2.63% in F1 gains for ADE.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our main intention was to get a transfer learning pipeline working with these embeddings, and therefore we did not perform any detailed hyper-parameter optimization. Despite this, we were able to achieve strong performance with all the embeddings. Standalone models achieved F1-scores of 53.08% in ADE extraction and 92.91% in the overall task with default features. A basic ensemble constructed from these standalone models achieved F1-scores of 55.77% in ADE extraction and 92.82% in the overall task, confirming the viability of the overall strategy.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Classical research in this area focused on rule-based systems (such as MedEx (Xu et al., 2010) and ADEPt (Iqbal et al., 2017)) and CRF-based machine learning leveraging hand-crafted features (Aramaki et al., 2010; Chapman et al., 2019; Nikfarjam et al., 2015).",
|
"cite_spans": [ |
|
{ |
|
"start": 76, |
|
"end": 93, |
|
"text": "(Xu et al., 2010)", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 102, |
|
"end": 122, |
|
"text": "(Iqbal et al., 2017)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 189, |
|
"end": 211, |
|
"text": "(Aramaki et al., 2010;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 212, |
|
"end": 233, |
|
"text": "Chapman et al., 2019;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 234, |
|
"end": 257, |
|
"text": "Nikfarjam et al., 2015)", |
|
"ref_id": "BIBREF30" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The 2018 n2c2 Adverse Drug Events and Medication Extraction in EHR data-set (Buchan et al.) focused on the extraction of ADEs and Medications. Most participants leveraged the BiLSTM-CRF neural model in their work (Chalapathy et al., 2016). We have listed the top-performing methods from the 2018 n2c2 ADE challenge in Table 1. Dandala et al. (2020) custom-trained biomedical ELMo embeddings using the MIMIC-III data-set (Johnson et al., 2016); they also used a rich set of sentence tokenization rules. Ju et al. (2020) leveraged a tree-architecture to detect overlapping spans in addition to lexical and knowledge features (e.g., word shapes, Human Disease Ontology / MedDRA side-effect database information).",
|
"cite_spans": [ |
|
{ |
|
"start": 76, |
|
"end": 91, |
|
"text": "(Buchan et al.)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 96, |
|
"end": 107, |
|
"text": "Medications", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 184, |
|
"end": 209, |
|
"text": "(Chalapathy et al., 2016)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 300, |
|
"end": 321, |
|
"text": "Dandala et al. (2020)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 393, |
|
"end": 415, |
|
"text": "(Johnson et al., 2016)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 290, |
|
"end": 297, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Relationship association for medication entities is complementary to our work and can be implemented either jointly or in a pipeline. Such a joint architecture utilizes the signals from the relations task to filter out unwanted medication entities. Wei et al. (2020) adopted such a joint approach with a three-classifier ensemble, achieving 52.95% in ADE extraction. Chen et al. (2020) also used a joint architecture supplemented by UMLS (Bodenreider, 2004) concept lookups and unique modeling of temporal entities. Dai et al. (2020) cascaded classifiers sequentially to widen the contextual information available for ADE identification. This model also facilitates improved identification when spans overlap. They evaluated ten pre-trained embedding models: half of them were based on MIMIC-III while the rest were general-purpose. Kim and Meystre (2020) uniquely leveraged SEARN (Daum\u00e9 et al., 2009), a search-based prediction algorithm, for its preference of precision over recall.",
|
"cite_spans": [ |
|
{ |
|
"start": 249, |
|
"end": 266, |
|
"text": "Wei et al. (2020)", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 366, |
|
"end": 384, |
|
"text": "Chen et al. (2020)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 515, |
|
"end": 532, |
|
"text": "Dai et al. (2020)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 832, |
|
"end": 854, |
|
"text": "Kim and Meystre (2020)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 880, |
|
"end": 900, |
|
"text": "(Daum\u00e9 et al., 2009)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Our work is most similar to Miller et al. (2019); they demonstrate that strong medication extraction models can be constructed with minimal engineering using contextual embeddings. The main differences from the above-mentioned studies are the evaluation of a broader array of contemporary biomedical embeddings, a detailed study of fine-tuning strategies, and augmentation methods for ADE extraction.",
|
"cite_spans": [ |
|
{ |
|
"start": 28, |
|
"end": 48, |
|
"text": "Miller et al. (2019)", |
|
"ref_id": "BIBREF28" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We use the 2018 n2c2 Adverse Drug Events and Medication Extraction (Track 2) data-set for our experiments. The data-set has a total of 505 clinical notes with nine medication-entities, as shown in Table 2. We convert these files into the CoNLL 2000 BIO (Begin, Inside, Outside) format after pre-processing: splitting sentences into words, normalizing numeric values, and treating a subset of punctuation characters as word-boundary markers.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 197, |
|
"end": 204, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Data and Pre-Processing", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We formulate the medication extraction task as a standard NER task incorporating a single biomedical embedding from the list below:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Transfer Learning Model", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "1. BioBERT (BB) is a version of BERT pre-trained on PubMed abstracts. We used the Base version.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Transfer Learning Model", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "2. ClinicalBERT (CB) is also BERT-based, trained on clinical notes corpora.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Transfer Learning Model", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "3. ELMo-PubMed (EP) is based on ELMo, pretrained on PubMed abstracts.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Transfer Learning Model", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "4. Flair-PubMed (FP) is a Flair contextual embedding pre-trained on PubMed abstracts.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Transfer Learning Model", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "We also incorporated the Glove classical word embedding (Pennington et al., 2014) as part of our model after a brief evaluation (Section 4.2). Our architectural formulation allows for experimenting with newer or combined embeddings with incremental effort.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Transfer Learning Model", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "We implement our models using the Flair open-source framework (Akbik et al., 2019). Flair, based on PyTorch, provides an off-the-shelf BiLSTM-CRF model and a pluggable architecture for adding embeddings and data-sets. We have retained default hyper-parameters and training procedures (details in Appendix A). During parameter selection, we train for 50 epochs. Final models are trained for 150 epochs or until convergence. We used the evaluation script provided as part of the data-set to appraise our models on the test-set. We report the 'Relaxed F1' score per prevailing practice.",
|
"cite_spans": [ |
|
{ |
|
"start": 61, |
|
"end": 81, |
|
"text": "(Akbik et al., 2019)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "In transfer learning, the linguistic information encoded by a contextual embedding acts as the primary input to the downstream task layer (BiLSTM). Fine-tuning is generally accepted to be beneficial; however, it requires familiarity with the scripts and frameworks specific to each embedding, as well as data-set adaptation.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model Selection Procedures", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "BERT models have close to a dozen transformer layers. Understanding the linguistic information encoded by these layers and their relative contribution to downstream tasks is an active research area (Liu et al., 2019b; Kovaleva et al., 2019). By default, Flair uses the last four layers of the BERT models to generate embeddings.",
|
"cite_spans": [ |
|
{ |
|
"start": 194, |
|
"end": 213, |
|
"text": "(Liu et al., 2019b;", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 214, |
|
"end": 236, |
|
"text": "Kovaleva et al., 2019)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BERT Embeddings", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "1. Choice of Layers (4L vs All): The default setting of the last four transformer layers leads to sub-optimal performance (under-fitting) on the training set (Table 3, Row 1). Rather than choosing specific layers, we tried using all layers. This option generates a vast number of features (11 x 768) for the downstream task (BiLSTM) and causes training to run out of memory.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 157, |
|
"end": 165, |
|
"text": "(Table 3", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "BERT Embeddings", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "As an alternative, we adopted Scalar Mix (Peters et al., 2018), a pooling mechanism over the layer-generated representations. Scalar Mix results in a reasonable number of features (768) and performs optimally (Row 2).",
|
"cite_spans": [ |
|
{ |
|
"start": 39, |
|
"end": 60, |
|
"text": "(Peters et al., 2018)", |
|
"ref_id": "BIBREF32" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Scalar Mix (SM):", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "3. Mean-Pooling of sub-tokens (MP): BERT models uniquely use word-piece tokenization for out-of-vocabulary (OOV) words. Embeddings can be generated using the first sub-token, the first and last sub-tokens, or an aggregate (mean-pooling) of all sub-tokens. The latter provides the best performance (Row 3).",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Scalar Mix (SM):", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "These settings deliver optimal performance for the BERT models. Classical word embeddings enhance NER task performance. Table 4 shows the impact of adding Glove. For the CB model, the noticeable gains were in Reason (+1.00 F1) and ADE (+9.00 F1). For the FP model, the ADE reduction (-2.00 F1) was offset by gains in Reason (+1.00 F1), Duration (+0.50 F1), and Drug (+0.40 F1). The EP model did not show any meaningful difference. We used the paired method for the rest of our experiments.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 105, |
|
"end": 113, |
|
"text": "Table 4", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Scalar Mix (SM):", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "Language-model fine-tuning aims to improve the performance of the Flair-PubMed contextual embeddings on speciality corpora. We performed fine-tuning for 10 epochs using the 4391 clinical notes from the i2b2/n2c2 data-sets. While all entities exhibited gains, the prominent gainers are shown in Table 5. We used this fine-tuned model for the rest of our experiments. Table 8 shows the proportion of overlap between two entities. We use TP / (TP+FN), where TP is the number of 'Gold' entities identified correctly and FN is the number of mispredictions ('Pred'). Smaller values indicate higher overlap.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 289, |
|
"end": 296, |
|
"text": "Table 5", |
|
"ref_id": "TABREF8" |
|
}, |
|
{ |
|
"start": 362, |
|
"end": 369, |
|
"text": "Table 8", |
|
"ref_id": "TABREF12" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Flair Embedding Fine-Tuning", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "1. Drug: BERT models out-perform in the recognition of entities that are predominantly part of the clinical lexicon (e.g., Drug and Form), with the CB model out-performing in both. We think that clinical-note pre-training contributes to this out-performance. BERT-based models seem to misclassify Drug entities when special characters are involved. 3. Form and Route: Unusual Routes ('take one tab under your tongue') were naturally ignored by all models. Commonly, the method of drug administration is also used to describe the drug form. In 'Heparin 5,000 unit/mL Solution Sig: One (1) Injection TID (3 times a day)', 'Injection' refers to the former and hence a Route, while in 'EGD with epinephrine injection and BICAP cautery', it refers to the drug Form. Likewise, 'infusion' generates disagreement. BERT models generally do well.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error Analysis", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "4. Dosages and Strength: Dosages were most commonly mislabeled as Strengths ('iron 0.5 ml per day') by all models, followed by Frequency. In 'levophed @ 12 mcg/min', the FP model identifies 'mcg/min' as 'Strength' (correctly) while the other models identify 'mcg/min' as 'Frequency'.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error Analysis", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "[Table 9 content - counts per entity for the BB, CB, EP, and FP models: Drug 29 (0.27%), 21 (0.2%), 27 (0.26%), 55 (0.52%); Strength 4 (0.09%), 10 (0.24%), 2 (0.05%), 12 (0.28%); Duration 2 (0.53%), 1 (0.26%), 2 (0.53%), 6 (1.59%); Route 11 (0.31%), 5 (0.14%), 5 (0.14%), 9 (0.26%); Form 5 (0.11%), 6 (0.14%), 7 (0.16%), 6 (0.14%); ADE 11 (1.76%), 12 (1.92%), 24 (3.84%), 46 (7.36%); Dosage 14 (0.52%), 20 (0.75%), 16 (0.6%), 16 (0.6%); Reason 45 (1.77%), 29 (1.14%), 38 (1.49%), 95 (3.73%); Frequency 3 (0.07%), 6 (0.15%), 2 (0.05%), 9 (0.22%)] 5. Each model uniquely detects several entities not detected by other models (Table 9). Consider the two sentences that occur next to each other in a clinical note: 'could affect your Coumadin??????/warfarin dosage.' 'Coumadin (Warfarin) and diet:'. The former contains '?' and '/' inter-mixed with the entities. All models detect the entities in the second sentence. However, for the first sentence, the FP model identifies a single Drug entity Coumadin??????/warfarin while the others ignore it altogether.",
|
"cite_spans": [],
|
"ref_spans": [ |
|
{ |
|
"start": 535, |
|
"end": 544, |
|
"text": "(Table 9)", |
|
"ref_id": "TABREF13" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Error Analysis", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "6. ADE and Reason: The FP model out-performed in ADE recognition (F1=53.08%), followed by the EP model (F1=50.73%). Although the top three models (FP, EP, BB) differ only marginally in Precision (0.6%), they exhibit significant divergence in Recall (+5.76%). There are three significant factors:",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error Analysis", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Mislabeling between ADE and Reason: The CB model generates the highest number of mislabels (low recall) while the EP model does the best, as shown in Table 8.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 135, |
|
"end": 142, |
|
"text": "Table 8", |
|
"ref_id": "TABREF12" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Error Analysis", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "In 'Heme/onc was consulted regarding hemolysis and anticoagulation. ... Given her multiple indications for anticoagulation, decision was made to begin coumadin ...', the first reference to 'anticoagulation' is a Drug gold annotation ('blood thinners') while the latter is a Reason ('medical indication'). This example demonstrates the need for good contextual disambiguation. The BB and FP models identify both correctly. The EP model ignores the former and incorrectly identifies the latter as a Drug. The CB model fails to identify both entities.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mislabeling of ADE/Reason with Drug:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Incomplete word context: Often a Drug entity is needed to successfully infer the presence of an ADE or a Reason entity. However, it may occur in a subsequent sentence creating a challenge for the model. To verify this hypothesis, we evaluated model behavior by combining a sentence with one or more of its subsequent sentences. This is discussed in the next section.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mislabeling of ADE/Reason with Drug:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We evaluated model behavior by combining a sentence with one or more of its subsequent sentences. For example, the 'Look-ahead-1' strategy pairs a sentence with the one immediately following it. We progressively increased the pairing length up to a paragraph. Table 10 shows the ADE performance resulting from this augmentation strategy. Table 11 lists several examples (Drug entities are marked bold when they occur in the subsequent sentence).",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 261, |
|
"end": 269, |
|
"text": "Table 10", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 339, |
|
"end": 348, |
|
"text": "Table 11", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Prediction-time Sentence Augmentation", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "1. Reason: 'Hypothyroid' is detected by augmentation due to the co-occurrence of 'Syn- 3. In Ex. 5, altered mental status is identified at sentence-level but is un-annotated (despite 'somnolent' indicating the state of 'feeling drowsy'). 'AMS' is recognized by augmentation but is un-annotated probably because of its diagnostic nature.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Prediction-time Sentence Augmentation", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "The 'Look-ahead-1' strategy is the most effective: ADE F1 scores increase by +3.11%, +2.21%, and +1.67% for the BB, CB, and EP models respectively, despite a reduction in Precision. Recall gains for the FP model are offset by a higher reduction in Precision. For the Reason entity, all models benefit from augmentation, with gains ranging from 0.51% to 1.23%. This exercise shows that inter-sentence word context impacts ADE and Reason identification and is beneficial when the underlying model is unable to contextualize effectively.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Prediction-time Sentence Augmentation", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "We briefly evaluated model-ensembling strategies for enhanced ADE performance. We generate predictions from the underlying models and combine non-conflicting entities. In the case of a conflict, we prioritize ADE predictions; otherwise, we choose the entity using the confidence score. Table 12 shows three ensemble models based on their 'Overall F1' scores. Table 13 shows the entity-wise performance of the FP+EP ensemble model (selected based on the highest ADE F1 score). The ensemble model delivers the best performance in all three challenging entities (ADE, Duration, and Reason), validating the feasibility of the strategy. There are a few limitations in this study that we plan to address in future work:",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 284, |
|
"end": 292, |
|
"text": "Table 12", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 357, |
|
"end": 365, |
|
"text": "Table 13", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Model Ensembles", |
|
"sec_num": "5.3" |
|
}, |
|
|
{ |
|
"text": "1. We did not fine-tune the BERT-based and ELMo-based embedding models. Doing so may alter the performance profile of these models; hence, an apples-to-apples comparison between the models is not recommended.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Entity Precision", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "2. Adoption of better tokenization methods (e.g., clinical text processing tools), and handling special-cases (such as abbreviations) may further enhance model robustness.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Entity Precision", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "3. We also did not do an exhaustive survey of the available embeddings. There may be other more effective embeddings.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Entity Precision", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In this study, we presented strong performing transfer learning models for the extraction of medication entities using several biomedical contextual embeddings. Our experiments shed light on the strengths of the various embeddings: Flair-PubMed embedding out-performs in ADE extraction. BioBERT and ClinicalBERT embeddings outperform in recognition of Drug and Form medication entities. ELMo-PubMed embedding delivers competitive performance in all medication entities. We showed that sentence-augmentation and ensembling are viable strategies to enhance ADE performance. Our approach is free of hand-generated features and built using off-the-shelf neural models, default hyper-parameters, and training procedures. These factors decrease the development effort. A detailed analysis of embedding-specific factors contributing to mis-classification and inclusion of finetuning procedures are part of our ongoing work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "'contrast dye' is given to a patient to accentuate structures in the CT Scan (Cedars-Sinai)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We thank the anonymous reviewers for their valuable suggestions and feedback. This work was supported by the biomedical AI groups of Amrita Technologies, Amritapuri, India and Amrita Institute of Medical Sciences, Kochi, India.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": "8" |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Flair: An easy-to-use framework for state-of-the-art nlp", |
|
"authors": [ |
|
{ |
|
"first": "Alan", |
|
"middle": [], |
|
"last": "Akbik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tanja", |
|
"middle": [], |
|
"last": "Bergmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Duncan", |
|
"middle": [], |
|
"last": "Blythe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kashif", |
|
"middle": [], |
|
"last": "Rasul", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stefan", |
|
"middle": [], |
|
"last": "Schweter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roland", |
|
"middle": [], |
|
"last": "Vollgraf", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "54--59", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alan Akbik, Tanja Bergmann, Duncan Blythe, Kashif Rasul, Stefan Schweter, and Roland Vollgraf. 2019. Flair: An easy-to-use framework for state-of-the-art nlp. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Com- putational Linguistics (Demonstrations), pages 54- 59.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Contextual string embeddings for sequence labeling", |
|
"authors": [ |
|
{ |
|
"first": "Alan", |
|
"middle": [], |
|
"last": "Akbik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Duncan", |
|
"middle": [], |
|
"last": "Blythe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roland", |
|
"middle": [], |
|
"last": "Vollgraf", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 27th International Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1638--1649", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alan Akbik, Duncan Blythe, and Roland Vollgraf. 2018. Contextual string embeddings for sequence labeling. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1638-1649.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Publicly available clinical bert embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Emily", |
|
"middle": [], |
|
"last": "Alsentzer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Murphy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [], |
|
"last": "Boag", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei-Hung", |
|
"middle": [], |
|
"last": "Weng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Di", |
|
"middle": [], |
|
"last": "Jindi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tristan", |
|
"middle": [], |
|
"last": "Naumann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Mcdermott", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2nd Clinical Natural Language Processing Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "72--78", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Emily Alsentzer, John Murphy, William Boag, Wei- Hung Weng, Di Jindi, Tristan Naumann, and Matthew McDermott. 2019. Publicly available clini- cal bert embeddings. In Proceedings of the 2nd Clin- ical Natural Language Processing Workshop, pages 72-78.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Extraction of adverse drug effects from clinical records", |
|
"authors": [ |
|
{ |
|
"first": "Eiji", |
|
"middle": [], |
|
"last": "Aramaki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yasuhide", |
|
"middle": [], |
|
"last": "Miura", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Masatsugu", |
|
"middle": [], |
|
"last": "Tonoike", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomoko", |
|
"middle": [], |
|
"last": "Ohkuma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hiroshi", |
|
"middle": [], |
|
"last": "Masuichi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kayo", |
|
"middle": [], |
|
"last": "Waki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kazuhiko", |
|
"middle": [], |
|
"last": "Ohe", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "MedInfo", |
|
"volume": "160", |
|
"issue": "", |
|
"pages": "739--743", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eiji Aramaki, Yasuhide Miura, Masatsugu Tonoike, Tomoko Ohkuma, Hiroshi Masuichi, Kayo Waki, and Kazuhiko Ohe. 2010. Extraction of ad- verse drug effects from clinical records. MedInfo, 160:739-743.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "The unified medical language system (umls): integrating biomedical terminology", |
|
"authors": [ |
|
{ |
|
"first": "Olivier", |
|
"middle": [], |
|
"last": "Bodenreider", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Nucleic acids research", |
|
"volume": "32", |
|
"issue": "1", |
|
"pages": "267--270", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Olivier Bodenreider. 2004. The unified medical lan- guage system (umls): integrating biomedical termi- nology. Nucleic acids research, 32(suppl 1):D267- D270.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "n2c2 2018-track 2: Adverse drug events and medication extraction in ehrs", |
|
"authors": [ |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Buchan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kahyun", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Susanne", |
|
"middle": [], |
|
"last": "Churchill", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Isaac", |
|
"middle": [], |
|
"last": "Kohane", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kevin Buchan, Kahyun Lee, Susanne Churchill, and Isaac Kohane. n2c2 2018-track 2: Adverse drug events and medication extraction in ehrs.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Adverse drug events in adults", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Cdc", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "CDC. 2017. Adverse drug events in adults.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Last reviewed on 2017-10-17. Cedars-Sinai", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "https://www.cdc.gov/medicationsafety/ adult_adversedrugevents.html, Last re- viewed on 2017-10-17. Cedars-Sinai. Ct scan of the abdomen.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Bidirectional lstm-crf for clinical concept extraction", |
|
"authors": [ |
|
{ |
|
"first": "Raghavendra", |
|
"middle": [], |
|
"last": "Chalapathy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Crc", |
|
"middle": [], |
|
"last": "Markets", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ehsan", |
|
"middle": [ |
|
"Zare" |
|
], |
|
"last": "Borzeshi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Massimo", |
|
"middle": [], |
|
"last": "Piccardi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "ClinicalNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Raghavendra Chalapathy, Capital Markets CRC, Ehsan Zare Borzeshi, and Massimo Piccardi. 2016. Bidirectional lstm-crf for clinical concept extraction. ClinicalNLP 2016, page 7.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Detecting adverse drug events with rapidly trained classification models", |
|
"authors": [ |
|
{
"first": "Alec",
"middle": [
"B"
],
"last": "Chapman",
"suffix": ""
},
{
"first": "Kelly",
"middle": [
"S"
],
"last": "Peterson",
"suffix": ""
},
{
"first": "Patrick",
"middle": [
"R"
],
"last": "Alba",
"suffix": ""
},
{
"first": "Scott",
"middle": [
"L"
],
"last": "DuVall",
"suffix": ""
},
{
"first": "Olga",
"middle": [
"V"
],
"last": "Patterson",
"suffix": ""
}
|
], |
|
"year": 2019, |
|
"venue": "Drug safety", |
|
"volume": "42", |
|
"issue": "1", |
|
"pages": "147--156", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alec B Chapman, Kelly S Peterson, Patrick R Alba, Scott L DuVall, and Olga V Patterson. 2019. Detect- ing adverse drug events with rapidly trained classifi- cation models. Drug safety, 42(1):147-156.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Invited editorial: Advantages and limitations of faers in assessing adverse event reporting for eluxadoline", |
|
"authors": [ |
|
{ |
|
"first": "Victor", |
|
"middle": [], |
|
"last": "Chedid", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Priya", |
|
"middle": [], |
|
"last": "Vijayvargiya", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Camilleri", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Clinical gastroenterology and hepatology: the official clinical practice journal of the American Gastroenterological Association", |
|
"volume": "16", |
|
"issue": "3", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Victor Chedid, Priya Vijayvargiya, and Michael Camil- leri. 2018. Invited editorial: Advantages and lim- itations of faers in assessing adverse event report- ing for eluxadoline. Clinical gastroenterology and hepatology: the official clinical practice journal of the American Gastroenterological Association, 16(3):336.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Dynamically evolving clinical practices and implications for predicting medical decisions", |
|
"authors": [ |
|
{
"first": "Jonathan",
"middle": [
"H"
],
"last": "Chen",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"K"
],
"last": "Goldstein",
"suffix": ""
},
{
"first": "Steven",
"middle": [
"M"
],
"last": "Asch",
"suffix": ""
},
{
"first": "Russ",
"middle": [
"B"
],
"last": "Altman",
"suffix": ""
}
|
], |
|
"year": 2016, |
|
"venue": "Biocomputing 2016: Proceedings of the Pacific Symposium", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "195--206", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jonathan H Chen, Mary K Goldstein, Steven M Asch, and Russ B Altman. 2016. Dynamically evolv- ing clinical practices and implications for predict- ing medical decisions. In Biocomputing 2016: Pro- ceedings of the Pacific Symposium, pages 195-206. World Scientific.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Extracting medications and associated adverse drug events using a natural language processing system combining knowledge base and deep learning", |
|
"authors": [ |
|
{ |
|
"first": "Long", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yu", |
|
"middle": [], |
|
"last": "Gu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xin", |
|
"middle": [], |
|
"last": "Ji", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhiyong", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Haodan", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuan", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yang", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Journal of the American Medical Informatics Association", |
|
"volume": "27", |
|
"issue": "1", |
|
"pages": "56--64", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Long Chen, Yu Gu, Xin Ji, Zhiyong Sun, Haodan Li, Yuan Gao, and Yang Huang. 2020. Extracting medications and associated adverse drug events us- ing a natural language processing system combin- ing knowledge base and deep learning. Journal of the American Medical Informatics Association, 27(1):56-64.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Adverse drug event and medication extraction in electronic health records via a cascading architecture with different sequence labeling models and word embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Hong-Jie", |
|
"middle": [], |
|
"last": "Dai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chu-Hsien", |
|
"middle": [], |
|
"last": "Su", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chi-Shin", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Journal of the American Medical Informatics Association", |
|
"volume": "27", |
|
"issue": "1", |
|
"pages": "47--55", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hong-Jie Dai, Chu-Hsien Su, and Chi-Shin Wu. 2020. Adverse drug event and medication extraction in electronic health records via a cascading architecture with different sequence labeling models and word embeddings. Journal of the American Medical In- formatics Association, 27(1):47-55.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Extraction of information related to drug safety surveillance from electronic health record notes: Joint modeling of entities and relations using knowledge-aware neural attentive models", |
|
"authors": [ |
|
{ |
|
"first": "Bharath", |
|
"middle": [], |
|
"last": "Dandala", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Venkata", |
|
"middle": [], |
|
"last": "Joopudi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ching-Huei", |
|
"middle": [], |
|
"last": "Tsou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jennifer", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Liang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Parthasarathy", |
|
"middle": [], |
|
"last": "Suryanarayanan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "JMIR medical informatics", |
|
"volume": "8", |
|
"issue": "7", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bharath Dandala, Venkata Joopudi, Ching-Huei Tsou, Jennifer J Liang, and Parthasarathy Suryanarayanan. 2020. Extraction of information related to drug safety surveillance from electronic health record notes: Joint modeling of entities and relations us- ing knowledge-aware neural attentive models. JMIR medical informatics, 8(7):e18417.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Search-based structured prediction. Machine learning", |
|
"authors": [ |
|
{ |
|
"first": "Hal", |
|
"middle": [], |
|
"last": "Daum\u00e9", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Langford", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Marcu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "", |
|
"volume": "75", |
|
"issue": "", |
|
"pages": "297--325", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hal Daum\u00e9, John Langford, and Daniel Marcu. 2009. Search-based structured prediction. Machine learn- ing, 75(3):297-325.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "How user intelligence is improving pubmed", |
|
"authors": [ |
|
{
"first": "Nicolas",
"middle": [],
"last": "Fiorini",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Leaman",
"suffix": ""
},
{
"first": "David",
"middle": [
"J"
],
"last": "Lipman",
"suffix": ""
},
{
"first": "Zhiyong",
"middle": [],
"last": "Lu",
"suffix": ""
}
|
], |
|
"year": 2018, |
|
"venue": "Nature biotechnology", |
|
"volume": "36", |
|
"issue": "10", |
|
"pages": "937--945", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nicolas Fiorini, Robert Leaman, David J Lipman, and Zhiyong Lu. 2018. How user intelligence is improv- ing pubmed. Nature biotechnology, 36(10):937- 945.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "n2c2 shared task on adverse drug events and medication extraction in electronic health records", |
|
"authors": [ |
|
{ |
|
"first": "Sam", |
|
"middle": [], |
|
"last": "Henry", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Buchan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michele", |
|
"middle": [], |
|
"last": "Filannino", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amber", |
|
"middle": [], |
|
"last": "Stubbs", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ozlem", |
|
"middle": [], |
|
"last": "Uzuner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Journal of the American Medical Informatics Association", |
|
"volume": "27", |
|
"issue": "1", |
|
"pages": "3--12", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sam Henry, Kevin Buchan, Michele Filannino, Amber Stubbs, and Ozlem Uzuner. 2020. 2018 n2c2 shared task on adverse drug events and medication extrac- tion in electronic health records. Journal of the American Medical Informatics Association, 27(1):3- 12.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Stimulated reporting: the impact of us food and drug administration-issued alerts on the adverse event reporting system (faers)", |
|
"authors": [ |
|
{
"first": "Keith",
"middle": [
"B"
],
"last": "Hoffman",
"suffix": ""
},
{
"first": "Andrea",
"middle": [
"R"
],
"last": "Demakas",
"suffix": ""
},
{
"first": "Mo",
"middle": [],
"last": "Dimbil",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [
"P"
],
"last": "Tatonetti",
"suffix": ""
},
{
"first": "Colin",
"middle": [
"B"
],
"last": "Erdman",
"suffix": ""
}
|
], |
|
"year": 2014, |
|
"venue": "Drug safety", |
|
"volume": "37", |
|
"issue": "11", |
|
"pages": "971--980", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Keith B Hoffman, Andrea R Demakas, Mo Dimbil, Nicholas P Tatonetti, and Colin B Erdman. 2014. Stimulated reporting: the impact of us food and drug administration-issued alerts on the adverse event re- porting system (faers). Drug safety, 37(11):971- 980.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Adept, a semanticallyenriched pipeline for extracting adverse drug events from free-text electronic health records", |
|
"authors": [ |
|
{ |
|
"first": "Ehtesham", |
|
"middle": [], |
|
"last": "Iqbal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robbie", |
|
"middle": [], |
|
"last": "Mallah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Rhodes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Honghan", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alvin", |
|
"middle": [], |
|
"last": "Romero", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nynn", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Olubanke", |
|
"middle": [], |
|
"last": "Dzahini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chandra", |
|
"middle": [], |
|
"last": "Pandey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Broadbent", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Stewart", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "PloS one", |
|
"volume": "12", |
|
"issue": "11", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ehtesham Iqbal, Robbie Mallah, Daniel Rhodes, Hong- han Wu, Alvin Romero, Nynn Chang, Olubanke Dzahini, Chandra Pandey, Matthew Broadbent, Robert Stewart, et al. 2017. Adept, a semantically- enriched pipeline for extracting adverse drug events from free-text electronic health records. PloS one, 12(11):e0187121.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Overview of the first natural language processing challenge for extracting medication, indication, and adverse drug events from electronic health record notes (made 1.0). Drug safety", |
|
"authors": [ |
|
{ |
|
"first": "Abhyuday", |
|
"middle": [], |
|
"last": "Jagannatha", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Feifan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Weisong", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hong", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "42", |
|
"issue": "", |
|
"pages": "99--111", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Abhyuday Jagannatha, Feifan Liu, Weisong Liu, and Hong Yu. 2019. Overview of the first natural lan- guage processing challenge for extracting medica- tion, indication, and adverse drug events from elec- tronic health record notes (made 1.0). Drug safety, 42(1):99-111.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Mimiciii, a freely accessible critical care database", |
|
"authors": [ |
|
{
"first": "Alistair",
"middle": [
"E",
"W"
],
"last": "Johnson",
"suffix": ""
},
{
"first": "Tom",
"middle": [
"J"
],
"last": "Pollard",
"suffix": ""
},
{
"first": "Lu",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Li-wei",
"middle": [
"H"
],
"last": "Lehman",
"suffix": ""
},
{
"first": "Mengling",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Ghassemi",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Moody",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Szolovits",
"suffix": ""
},
{
"first": "Leo",
"middle": [
"Anthony"
],
"last": "Celi",
"suffix": ""
},
{
"first": "Roger",
"middle": [
"G"
],
"last": "Mark",
"suffix": ""
}
|
], |
|
"year": 2016, |
|
"venue": "Scientific data", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alistair EW Johnson, Tom J Pollard, Lu Shen, H Lehman Li-wei, Mengling Feng, Moham- mad Ghassemi, Benjamin Moody, Peter Szolovits, Leo Anthony Celi, and Roger G Mark. 2016. Mimic- iii, a freely accessible critical care database. Scien- tific data, 3:160035.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "An ensemble of neural models for nested adverse drug events and medication extraction with subwords", |
|
"authors": [ |
|
{
"first": "Meizhi",
"middle": [],
"last": "Ju",
"suffix": ""
},
{
"first": "Nhung",
"middle": [
"T",
"H"
],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Makoto",
"middle": [],
"last": "Miwa",
"suffix": ""
},
{
"first": "Sophia",
"middle": [],
"last": "Ananiadou",
"suffix": ""
}
|
], |
|
"year": 2020, |
|
"venue": "Journal of the American Medical Informatics Association", |
|
"volume": "27", |
|
"issue": "1", |
|
"pages": "22--30", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Meizhi Ju, Nhung TH Nguyen, Makoto Miwa, and Sophia Ananiadou. 2020. An ensemble of neural models for nested adverse drug events and medica- tion extraction with subwords. Journal of the Amer- ican Medical Informatics Association, 27(1):22-30.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Ensemble method-based extraction of medication and related information from clinical texts", |
|
"authors": [ |
|
{ |
|
"first": "Youngjun", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "St\u00e9phane M Meystre", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Journal of the American Medical Informatics Association", |
|
"volume": "27", |
|
"issue": "1", |
|
"pages": "31--38", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Youngjun Kim and St\u00e9phane M Meystre. 2020. En- semble method-based extraction of medication and related information from clinical texts. Journal of the American Medical Informatics Association, 27(1):31-38.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Revealing the dark secrets of bert", |
|
"authors": [ |
|
{ |
|
"first": "Olga", |
|
"middle": [], |
|
"last": "Kovaleva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexey", |
|
"middle": [], |
|
"last": "Romanov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anna", |
|
"middle": [], |
|
"last": "Rogers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anna", |
|
"middle": [], |
|
"last": "Rumshisky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4365--4374", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Olga Kovaleva, Alexey Romanov, Anna Rogers, and Anna Rumshisky. 2019. Revealing the dark secrets of bert. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 4365-4374.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Biobert: a pre-trained biomedical language representation model for biomedical text mining", |
|
"authors": [ |
|
{ |
|
"first": "Jinhyuk", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wonjin", |
|
"middle": [], |
|
"last": "Yoon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sungdong", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Donghyeon", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sunkyu", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chan", |
|
"middle": [], |
|
"last": "Ho So", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jaewoo", |
|
"middle": [], |
|
"last": "Kang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Bioinformatics", |
|
"volume": "36", |
|
"issue": "4", |
|
"pages": "1234--1240", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020. Biobert: a pre-trained biomed- ical language representation model for biomedical text mining. Bioinformatics, 36(4):1234-1240.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Towards drug safety surveillance and pharmacovigilance: current progress in detecting medication and adverse drug events from electronic health records", |
|
"authors": [ |
|
{ |
|
"first": "Feifan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Abhyuday", |
|
"middle": [], |
|
"last": "Jagannatha", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hong", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Feifan Liu, Abhyuday Jagannatha, and Hong Yu. 2019a. Towards drug safety surveillance and phar- macovigilance: current progress in detecting med- ication and adverse drug events from electronic health records.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Linguistic knowledge and transferability of contextual representations", |
|
"authors": [ |
|
{
"first": "Nelson",
"middle": [
"F"
],
"last": "Liu",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of NAACL-HLT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1073--1094", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nelson F Liu, Matt Gardner, Yonatan Belinkov, Matthew E Peters, and Noah A Smith. 2019b. Lin- guistic knowledge and transferability of contextual representations. In Proceedings of NAACL-HLT, pages 1073-1094.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Extracting adverse drug event information with minimal engineering", |
|
"authors": [ |
|
{ |
|
"first": "Timothy", |
|
"middle": [], |
|
"last": "Miller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alon", |
|
"middle": [], |
|
"last": "Geva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dmitriy", |
|
"middle": [], |
|
"last": "Dligach", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2nd Clinical Natural Language Processing Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "22--27", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Timothy Miller, Alon Geva, and Dmitriy Dligach. 2019. Extracting adverse drug event information with min- imal engineering. In Proceedings of the 2nd Clinical Natural Language Processing Workshop, pages 22- 27.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Electronic health data for postmarket surveillance: a vision not realized", |
|
"authors": [ |
|
{ |
|
"first": "Thomas", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Moore", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Curt", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Furberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Drug safety", |
|
"volume": "38", |
|
"issue": "7", |
|
"pages": "601--610", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas J Moore and Curt D Furberg. 2015. Electronic health data for postmarket surveillance: a vision not realized. Drug safety, 38(7):601-610.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Pharmacovigilance from social media: mining adverse drug reaction mentions using sequence labeling with word embedding cluster features", |
|
"authors": [ |
|
{ |
|
"first": "Azadeh", |
|
"middle": [], |
|
"last": "Nikfarjam", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Abeed", |
|
"middle": [], |
|
"last": "Sarker", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karen", |
|
"middle": [], |
|
"last": "O'Connor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rachel", |
|
"middle": [], |
|
"last": "Ginn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Graciela", |
|
"middle": [], |
|
"last": "Gonzalez", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Journal of the American Medical Informatics Association", |
|
"volume": "22", |
|
"issue": "3", |
|
"pages": "671--681", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Azadeh Nikfarjam, Abeed Sarker, Karen O'connor, Rachel Ginn, and Graciela Gonzalez. 2015. Phar- macovigilance from social media: mining adverse drug reaction mentions using sequence labeling with word embedding cluster features. Journal of the American Medical Informatics Association, 22(3):671-681.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Glove: Global vectors for word representation", |
|
"authors": [ |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Pennington", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher D", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1532--1543", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word rep- resentation. In Proceedings of the 2014 conference on empirical methods in natural language process- ing (EMNLP), pages 1532-1543.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Deep contextualized word representations", |
|
"authors": [ |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Peters", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Neumann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohit", |
|
"middle": [], |
|
"last": "Iyyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matt", |
|
"middle": [], |
|
"last": "Gardner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "2227--2237", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word repre- sentations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227- 2237.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Enhancing clinical concept extraction with contextual embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Yuqi", |
|
"middle": [], |
|
"last": "Si", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jingqi", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hua", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kirk", |
|
"middle": [], |
|
"last": "Roberts", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Journal of the American Medical Informatics Association", |
|
"volume": "26", |
|
"issue": "11", |
|
"pages": "1297--1304", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yuqi Si, Jingqi Wang, Hua Xu, and Kirk Roberts. 2019. Enhancing clinical concept extraction with contex- tual embeddings. Journal of the American Medical Informatics Association, 26(11):1297-1304.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Clinical and economic burden of adverse drug reactions", |
|
"authors": [ |
|
{ |
|
"first": "Janet", |
|
"middle": [], |
|
"last": "Sultana", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paola", |
|
"middle": [], |
|
"last": "Cutroneo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gianluca", |
|
"middle": [], |
|
"last": "Trifir\u00f2", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Journal of pharmacology & pharmacotherapeutics", |
|
"volume": "4", |
|
"issue": "Suppl1", |
|
"pages": "S73", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Janet Sultana, Paola Cutroneo, and Gianluca Trifir\u00f2. 2013. Clinical and economic burden of adverse drug reactions. Journal of pharmacology & phar- macotherapeutics, 4(Suppl1):S73.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "2010 i2b2/va challenge on concepts, assertions, and relations in clinical text", |
|
"authors": [ |
|
{ |
|
"first": "Ozlem", |
|
"middle": [], |
|
"last": "Uzuner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Brett", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "South", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shuying", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Scott", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "DuVall", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Journal of the American Medical Informatics Association", |
|
"volume": "18", |
|
"issue": "5", |
|
"pages": "552--556", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ozlem Uzuner, Brett R South, Shuying Shen, and Scott L DuVall. 2011. 2010 i2b2/va challenge on concepts, assertions, and relations in clinical text. Journal of the American Medical Informatics Asso- ciation, 18(5):552-556.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "A study of deep learning approaches for medication and adverse drug event extraction from clinical text", |
|
"authors": [ |
|
{ |
|
"first": "Qiang", |
|
"middle": [], |
|
"last": "Wei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zongcheng", |
|
"middle": [], |
|
"last": "Ji", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhiheng", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jingcheng", |
|
"middle": [], |
|
"last": "Du", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jingqi", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jun", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yang", |
|
"middle": [], |
|
"last": "Xiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Firat", |
|
"middle": [], |
|
"last": "Tiryaki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yaoyun", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Journal of the American Medical Informatics Association", |
|
"volume": "27", |
|
"issue": "1", |
|
"pages": "13--21", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Qiang Wei, Zongcheng Ji, Zhiheng Li, Jingcheng Du, Jingqi Wang, Jun Xu, Yang Xiang, Firat Tiryaki, Stephen Wu, Yaoyun Zhang, et al. 2020. A study of deep learning approaches for medication and ad- verse drug event extraction from clinical text. Jour- nal of the American Medical Informatics Associa- tion, 27(1):13-21.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Medex: a medication information extraction system for clinical narratives", |
|
"authors": [ |
|
{ |
|
"first": "Hua", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shane", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Stenner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Son", |
|
"middle": [], |
|
"last": "Doan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "Johnson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lemuel", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Waitman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joshua", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Denny", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Journal of the American Medical Informatics Association", |
|
"volume": "17", |
|
"issue": "1", |
|
"pages": "19--24", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hua Xu, Shane P Stenner, Son Doan, Kevin B John- son, Lemuel R Waitman, and Joshua C Denny. 2010. Medex: a medication information extraction system for clinical narratives. Journal of the American Med- ical Informatics Association, 17(1):19-24. A Appendices A.1 List of Hyper Parameters 1. LSTM: Single-Layer, Bi-Directional, 256 hid- den states.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "SGD optimizer with initial learning rate: 0.1, annealing rate of 0.5, and patience of 3", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "SGD optimizer with initial learning rate: 0.1, annealing rate of 0.5, and patience of 3.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "Batch Size: 16. For BERT experiments, we used a batch size of 8 to avoid GPU out-ofmemory issues", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Batch Size: 16. For BERT experiments, we used a batch size of 8 to avoid GPU out-of- memory issues.", |
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "We train with both training and development data-set (train with dev=True)", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "We train with both training and development data-set (train with dev=True).", |
|
"links": null |
|
}, |
|
"BIBREF41": { |
|
"ref_id": "b41", |
|
"title": "All experiments were conducted on Google Colab GPU + High-RAM configuration", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "All experiments were conducted on Google Colab GPU + High-RAM configuration.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF1": { |
|
"num": null, |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"html": null, |
|
"text": "Relevant related work." |
|
}, |
|
"TABREF3": { |
|
"num": null, |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"html": null, |
|
"text": "Dataset Characteristics." |
|
}, |
|
"TABREF4": { |
|
"num": null, |
|
"content": "<table><tr><td colspan=\"2\">S.No Method</td><td colspan=\"3\">Reason ADE Overall</td></tr><tr><td/><td/><td>F1</td><td>F1</td><td>F1</td></tr><tr><td/><td>ClinicalBERT</td><td/><td/><td/></tr><tr><td>1.</td><td>Default (4L)</td><td>62.87</td><td>11.83</td><td>91.50</td></tr><tr><td>2.</td><td>All + SM</td><td>63.10</td><td>32.07</td><td>92.11</td></tr><tr><td>3.</td><td>All + SM/MP</td><td>65.02</td><td>32.47</td><td>92.41</td></tr><tr><td>4.</td><td>3. w/o Glove</td><td>64.17</td><td>22.73</td><td>92.15</td></tr><tr><td/><td>BioBERT (Base)</td><td/><td/><td/></tr><tr><td>5.</td><td>4L + SM/MP</td><td>63.27</td><td>39.73</td><td>92.11</td></tr><tr><td>6.</td><td>All + SM/MP</td><td>64.04</td><td>43.07</td><td>92.20</td></tr><tr><td>7.</td><td>6. w/o Glove</td><td>64.65</td><td>43.74</td><td>92.17</td></tr></table>", |
|
"type_str": "table", |
|
"html": null, |
|
"text": "Akbik et al. (2018) show that paired use of classic word embeddings (such as Glove) and contextual" |
|
}, |
|
"TABREF5": { |
|
"num": null, |
|
"content": "<table><tr><td>Embedding</td><td colspan=\"3\">Standalone +Glove F1 \u2206</td></tr><tr><td>ClinicalBERT</td><td>92.15</td><td>92.41</td><td>+0.26</td></tr><tr><td>BioBERT</td><td>92.17</td><td>92.20</td><td>+0.03</td></tr><tr><td colspan=\"2\">ELMo-PubMed 92.31</td><td>92.23</td><td>-0.08</td></tr><tr><td>Flair-PubMed</td><td>92.39</td><td>92.92</td><td>+0.53</td></tr></table>", |
|
"type_str": "table", |
|
"html": null, |
|
"text": "BERT Parameter Selection (50 epochs)" |
|
}, |
|
"TABREF6": { |
|
"num": null, |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"html": null, |
|
"text": "" |
|
}, |
|
"TABREF8": { |
|
"num": null, |
|
"content": "<table><tr><td>Entity</td><td colspan=\"6\">BB-Pr BB-Re BB-F1 CB-Pr CB-Re CB-F1</td></tr><tr><td>Drug</td><td>95.24</td><td colspan=\"3\">94.64 94.94 2 95.78</td><td colspan=\"2\">94.24 95.00 1</td></tr><tr><td>Strength</td><td>97.94</td><td colspan=\"3\">97.95 97.85 2 97.30</td><td>97.99</td><td>97.64</td></tr><tr><td>Duration</td><td>88.86</td><td>80.16</td><td>84.28</td><td>90.32</td><td colspan=\"2\">81.48 85.67 1</td></tr><tr><td>Route</td><td>95.59</td><td>94.93</td><td>95.26</td><td>95.69</td><td>94.79</td><td>95.24</td></tr><tr><td>Form</td><td>96.83</td><td colspan=\"3\">94.70 95.76 2 97.20</td><td colspan=\"2\">94.75 95.96 1</td></tr><tr><td>ADE</td><td>64.55</td><td>39.04</td><td>48.65</td><td>58.79</td><td>31.04</td><td>40.63</td></tr><tr><td>Dosage</td><td>93.05</td><td colspan=\"3\">93.92 93.48 2 93.19</td><td>93.47</td><td>93.33</td></tr><tr><td>Reason</td><td>77.00</td><td>59.06</td><td>66.84</td><td>80.71</td><td colspan=\"2\">57.52 67.17 2</td></tr><tr><td colspan=\"2\">Frequency 96.84</td><td>97.06</td><td>96.95</td><td>97.52</td><td colspan=\"2\">96.96 97.24 1</td></tr><tr><td>Overall</td><td colspan=\"4\">94.32 91.34 2 92.81 94.85 1</td><td>90.93</td><td>92.85</td></tr></table>", |
|
"type_str": "table", |
|
"html": null, |
|
"text": "" |
|
}, |
|
"TABREF9": { |
|
"num": null, |
|
"content": "<table><tr><td>5 Discussion</td></tr><tr><td>Tables 6 and 7 show the overall performance of</td></tr><tr><td>the various models. The prefixes (BB, CB, EP,</td></tr><tr><td>FP) indicate the contextual embedding used, and</td></tr><tr><td>the suffixes (Pr, Re, F1) denote the Precision, Recall,</td></tr><tr><td>and F1 metrics. The two highest F1 scores for each</td></tr><tr><td>entity are indicated via subscripts. The three most</td></tr><tr><td>challenging entities are underlined.</td></tr></table>", |
|
"type_str": "table", |
|
"html": null, |
|
"text": "BB and CB Models" |
|
}, |
|
"TABREF11": { |
|
"num": null, |
|
"content": "<table><tr><td>Gold</td><td>Pred</td><td>BB</td><td>CB</td><td>EP</td><td>FP</td></tr><tr><td>ADE</td><td>Reason</td><td>81.8%</td><td>79.2%</td><td colspan=\"2\">86.08% 83.82%</td></tr><tr><td>Reason</td><td>ADE</td><td colspan=\"2\">97.32% 96.6%</td><td colspan=\"2\">97.97% 97.09%</td></tr><tr><td>A/R</td><td>Drug</td><td>98.8%</td><td colspan=\"3\">98.48% 98.60% 99.19%</td></tr><tr><td>Form</td><td>Route</td><td colspan=\"4\">98.49% 98.51% 98.41% 98.37%</td></tr><tr><td>Route</td><td>Form</td><td colspan=\"4\">98.43% 98.57% 98.66% 98.43%</td></tr><tr><td>Dosage</td><td>Strength</td><td colspan=\"4\">99.01% 98.30% 98.61% 98.61%</td></tr><tr><td>Dosage</td><td colspan=\"5\">Frequency 99.21% 99.84% 99.87% 99.36%</td></tr><tr><td colspan=\"6\">Duration Frequency 96.80% 96.55% 96.58% 96.01%</td></tr></table>", |
|
"type_str": "table", |
|
"html": null, |
|
"text": "EP and FP Models" |
|
}, |
|
"TABREF12": { |
|
"num": null, |
|
"content": "<table><tr><td>Henry et al. (2020)'s observation that col-</td></tr><tr><td>loquial language use is a leading contribu-</td></tr><tr><td>tor to the confusion also implies the under-</td></tr><tr><td>lying context-sensitivity. In 'CLOBETASOL</td></tr><tr><td>... x up to 2 weeks per month', '2 weeks per</td></tr><tr><td>month' gets incorrectly tagged as Frequency.</td></tr><tr><td>In Section 5.3 we show that ensembling FP</td></tr><tr><td>model with any one of the other models deliv-</td></tr><tr><td>ers best overall Duration performance.</td></tr></table>", |
|
"type_str": "table", |
|
"html": null, |
|
"text": "Confusion Matrix 2. Duration: Having the fewest entities (378), Duration is mislabeled most often as Frequency and, to a lesser degree, as Dosage." |
|
}, |
|
"TABREF13": { |
|
"num": null, |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"html": null, |
|
"text": "Unique Counts (Count / Total)" |
|
}, |
|
"TABREF15": { |
|
"num": null, |
|
"content": "<table><tr><td>: ADE augmentation (150 epochs)</td></tr><tr><td>Reason (True Positive)</td></tr><tr><td>1. -Hypothyroid. Continued Synthroid</td></tr><tr><td>2. ... admitted ... due to H1N1 influenza A.</td></tr><tr><td>... 6 days of Tamiflu and Levaquin ...</td></tr><tr><td>Reason (False Positive)</td></tr><tr><td>3. You were ... right foot cellulitis and osteomyelitis.</td></tr><tr><td>You were started on antibiotics.</td></tr><tr><td>ADE (True Positive)</td></tr><tr><td>4. ... developed AMS and decreased respiratory rate.</td></tr><tr><td>... thought to be secondary to methadone overdose ...</td></tr><tr><td>ADE (False Positive)</td></tr><tr><td>5. His AMS was due to pain ... He had significant</td></tr><tr><td>altered mental status after one day when he appeared</td></tr><tr><td>more somnolent after a dose of Morphine 2mg IV.</td></tr></table>", |
|
"type_str": "table", |
|
"html": null, |
|
"text": "" |
|
}, |
|
"TABREF16": { |
|
"num": null, |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"html": null, |
|
"text": "Augmentation TP / FP Examples" |
|
}, |
|
"TABREF19": { |
|
"num": null, |
|
"content": "<table><tr><td>: FP+EP Ensemble</td></tr><tr><td>6 Limitations and Future Work</td></tr></table>", |
|
"type_str": "table", |
|
"html": null, |
|
"text": "" |
|
} |
|
} |
|
} |
|
} |