|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T11:19:44.825323Z" |
|
}, |
|
"title": "Explainable Clinical Decision Support from Text", |
|
"authors": [ |
|
{ |
|
"first": "Jinyue", |
|
"middle": [], |
|
"last": "Feng", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Unity Health Toronto", |
|
"location": { |
|
"settlement": "Toronto", |
|
"region": "ON" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Chantal", |
|
"middle": [], |
|
"last": "Shaib", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Frank", |
|
"middle": [], |
|
"last": "Rudzicz", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Toronto", |
|
"location": { |
|
"settlement": "Toronto", |
|
"region": "ON" |
|
} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Clinical prediction models often use structured variables and provide outcomes that are not readily interpretable by clinicians. Further, free-text medical notes may contain information not immediately available in structured variables. We propose a hierarchical CNNtransformer model with explicit attention as an interpretable, multi-task clinical language model, which achieves an AUROC of 0.75 and 0.78 on sepsis and mortality prediction on the English MIMIC-III dataset, respectively. We also explore the relationships between learned features from structured and unstructured variables using projection-weighted canonical correlation analysis. Finally, we outline a protocol to evaluate model usability in a clinical decision support context. From domain-expert evaluations, our model generates informative rationales that have promising real-life applications.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Clinical prediction models often use structured variables and provide outcomes that are not readily interpretable by clinicians. Further, free-text medical notes may contain information not immediately available in structured variables. We propose a hierarchical CNNtransformer model with explicit attention as an interpretable, multi-task clinical language model, which achieves an AUROC of 0.75 and 0.78 on sepsis and mortality prediction on the English MIMIC-III dataset, respectively. We also explore the relationships between learned features from structured and unstructured variables using projection-weighted canonical correlation analysis. Finally, we outline a protocol to evaluate model usability in a clinical decision support context. From domain-expert evaluations, our model generates informative rationales that have promising real-life applications.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Electronic medical records (EMRs) store both structured data (e.g., vitals and laboratory measurements) and unstructured data (e.g., nursing and physician notes). Previous clinical prediction tasks have focused on structured data (e.g., Desautels et al., 2016; Gultepe et al., 2013; Ghassemi et al., 2014) which, despite their utility, may not capture all of the useful information in associated text. Clinical decision support systems rarely take advantage of free-text notes due to the complex nature of clinical language and interpretation. Rules and specialized grammars can be applied to circumvent issues around clinical language; however, these methods rely on the presence of certain phrases and spelling, and do not account for the highly variable note structures across departments and hospitals (Yao et al., 2019; Mykowiecka et al., 2009; Assale et al., 2019) . Further, opaque models without explainability are often met with resistance in medical contexts (Challen et al., 2019; Ahmad et al., 2018; Gordon et al., 2019) . To address these challenges, we propose a novel multi-task language model that also provides rationales for decisions in medicine.", |
|
"cite_spans": [ |
|
{ |
|
"start": 237, |
|
"end": 260, |
|
"text": "Desautels et al., 2016;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 261, |
|
"end": 282, |
|
"text": "Gultepe et al., 2013;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 283, |
|
"end": 305, |
|
"text": "Ghassemi et al., 2014)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 806, |
|
"end": 824, |
|
"text": "(Yao et al., 2019;", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 825, |
|
"end": 849, |
|
"text": "Mykowiecka et al., 2009;", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 850, |
|
"end": 870, |
|
"text": "Assale et al., 2019)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 969, |
|
"end": 991, |
|
"text": "(Challen et al., 2019;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 992, |
|
"end": 1011, |
|
"text": "Ahmad et al., 2018;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 1012, |
|
"end": 1032, |
|
"text": "Gordon et al., 2019)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our multi-task model leverages ClinicalBERT (Alsentzer et al., 2019) , which is a transformerbased model pre-trained on clinical corpora. Given the uniqueness of medical text, we introduce a combination of CNN and transformer encoders to capture phrase-level patterns and global contextual relationships. Additionally, we explore latent attention layers to generate rationales.", |
|
"cite_spans": [ |
|
{ |
|
"start": 44, |
|
"end": 68, |
|
"text": "(Alsentzer et al., 2019)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Based on availability, we use the MIMIC-III database (Johnson et al., 2016) to predict two outcomes: sepsis and mortality in the intensive care unit (ICU). All experiments are conducted on notes written in English. We define the task of sepsis prediction more rigorously than previous work due both to using textual data only, and to emphasize the practicality of this model in real-world applications. Moreover, we use canonical correlation analysis (CCA; Hotelling 1992) to explore relationships between latent features learned from both structured and unstructured data. Finally, we propose an evaluation protocol to examine the usability of our model as an interpretable decision support tool.", |
|
"cite_spans": [ |
|
{ |
|
"start": 53, |
|
"end": 75, |
|
"text": "(Johnson et al., 2016)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 451, |
|
"end": 456, |
|
"text": "(CCA;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 457, |
|
"end": 472, |
|
"text": "Hotelling 1992)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Transformers (Vaswani et al., 2017) have gained popularity given their strong performance and parallelizability. The success of the transformer-based BERT (Devlin et al., 2019) has inspired numerous studies to apply it in various domains. For example, BioBERT was pretrained on PubMed abstracts and articles and was able to better identify biomedical entities and boundaries than base BERT (Lee et al., 2020) . Alsentzer et al. (2019) further fine-tuned BioBERT on the MIMIC-III clinical dataset (Johnson et al., 2016 ) and released the model as Clin-icalBERT. We use these pretrained BERT-based models as static feature extractors and build layers upon the word embeddings to learn task-specific representations spanning long documents.", |
|
"cite_spans": [ |
|
{ |
|
"start": 13, |
|
"end": 35, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 155, |
|
"end": 176, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 390, |
|
"end": 408, |
|
"text": "(Lee et al., 2020)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 411, |
|
"end": 434, |
|
"text": "Alsentzer et al. (2019)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 496, |
|
"end": 517, |
|
"text": "(Johnson et al., 2016", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work 2.1 Transformers in the clinical domain", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Explainable AI is an emerging field with no standardized methodology or evaluation metrics. The definition of model explainability also varies by application; however, a generally accepted approach to language model explainability is through extractive rationales (Lei et al., 2016; Mullenbach et al., 2018; Wiegreffe and Pinter, 2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 264, |
|
"end": 282, |
|
"text": "(Lei et al., 2016;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 283, |
|
"end": 307, |
|
"text": "Mullenbach et al., 2018;", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 308, |
|
"end": 335, |
|
"text": "Wiegreffe and Pinter, 2019)", |
|
"ref_id": "BIBREF34" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Language model explainability", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "The wide application of attention mechanisms has led to an ongoing debate over whether attention can be used as explanation (Serrano and Smith, 2019; Jain and Wallace, 2019; Wiegreffe and Pinter, 2019) . Jain and Wallace (2019) claimed that attention scores in recurrent neural networks (RNNs) did not correlate with other feature-importance measures, and adversarial attentions did not affect model predictions, concluding that attention was not explanation. Wiegreffe and Pinter (2019) challenged these assumptions by proposing diagnostic tests that allow for meaningful interpretation of attention, but also showed that adversarial attention distributions failed to achieve the same level of prediction performance as real model attention.", |
|
"cite_spans": [ |
|
{ |
|
"start": 124, |
|
"end": 149, |
|
"text": "(Serrano and Smith, 2019;", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 150, |
|
"end": 173, |
|
"text": "Jain and Wallace, 2019;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 174, |
|
"end": 201, |
|
"text": "Wiegreffe and Pinter, 2019)", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 204, |
|
"end": 227, |
|
"text": "Jain and Wallace (2019)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 460, |
|
"end": 487, |
|
"text": "Wiegreffe and Pinter (2019)", |
|
"ref_id": "BIBREF34" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Language model explainability", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "We propose a clinical decision support tool that uses explanations to enhance model usability and reliability. Therefore, we adopt a view similar to that of Wiegreffe and Pinter (2019) , in that attention provides plausible rationales for use in practice, even though it may not provide a complete internal representation of the model's behaviour (Serrano and Smith, 2019; Jain and Wallace, 2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 157, |
|
"end": 184, |
|
"text": "Wiegreffe and Pinter (2019)", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 347, |
|
"end": 372, |
|
"text": "(Serrano and Smith, 2019;", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 373, |
|
"end": 396, |
|
"text": "Jain and Wallace, 2019)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Language model explainability", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Sepsis is an extreme systemic inflammatory response to infection. If left untreated, sepsis can lead to life-threatening complications such as organ failure and septic shock. The ability to predict sepsis before symptom onset allows for earlier intervention, thus improving patient outcomes. Previous work on sepsis detection focused on both post-hoc identification as well as predicting the need for early intervention from structured data (Desautels et al., 2016; Taylor et al., 2016; Nemati et al., 2018; Gultepe et al., 2013) . As mortality has an explicit label in EMRs, the focus has been on expiry likelihood for early intervention rather than post-hoc identification (Ghassemi et al., 2014; Grnarova et al., 2016) . We focus on work that used the MIMIC-III database (Johnson et al., 2016) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 441, |
|
"end": 465, |
|
"text": "(Desautels et al., 2016;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 466, |
|
"end": 486, |
|
"text": "Taylor et al., 2016;", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 487, |
|
"end": 507, |
|
"text": "Nemati et al., 2018;", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 508, |
|
"end": 529, |
|
"text": "Gultepe et al., 2013)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 675, |
|
"end": 698, |
|
"text": "(Ghassemi et al., 2014;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 699, |
|
"end": 721, |
|
"text": "Grnarova et al., 2016)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 774, |
|
"end": 796, |
|
"text": "(Johnson et al., 2016)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Clinical tasks", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Insight (Desautels et al., 2016) provided a method for predicting sepsis from vital signs within a fixed-time window before suspected onset on retrospective data. Gultepe et al. (2013) proposed a similar structured-data model for mortality and sepsis prediction; however, the features were preselected and only considered five measurements. While these methods achieved robust results compared to traditional clinical measures (e.g., MEWS, qSOFA, SIRS; Churpek et al. 2017) , none took advantage of the unstructured data found in EMRs. Culliton et al. (2017) claimed that unstructured data in EMRs contain information not found in the structured variables. They used GloVe word embeddings to represent notes for each patient, and only excluded discharge summaries to minimize explicit mentions of sepsis. Simply excluding discharge summaries, however, is not sufficient to avoid label leakage -a diagnosis may appear in the notes as the clinician becomes aware of symptoms. We carefully filter notes to ensure no label leakage occurs and further refine our definition of sepsis prediction, as described in Section 4. Ghassemi et al. (2014) used topic modeling for textual representations aggregated with structured patient data to predict mortality, but Grnarova et al. (2016) showed that using convolutional document embeddings for each patient outperformed these topic modelling strategies for mortality prediction. Similarly, we deploy convolutional layers in our model to obtain sentence-level embeddings. Horng et al. combined structured and unstructured data for sepsis prediction, using topic models and continuousbag-of-words (CBOW) to represent text. Despite success, GloVE word embeddings, topic models, and CBOW do not generally capture the complexity and contextual relationships between words in a given text. Specifically, these methods rely primarily on word frequency and collapse multiple meanings of a word into a single representation. 
To this end, we implement a transformer-based model to represent our clinical notes, which we hypothesize may capture the contextual complexity between tokens more completely.", |
|
"cite_spans": [ |
|
{ |
|
"start": 8, |
|
"end": 32, |
|
"text": "(Desautels et al., 2016)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 163, |
|
"end": 184, |
|
"text": "Gultepe et al. (2013)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 434, |
|
"end": 452, |
|
"text": "MEWS, qSOFA, SIRS;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 453, |
|
"end": 473, |
|
"text": "Churpek et al. 2017)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 536, |
|
"end": 558, |
|
"text": "Culliton et al. (2017)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 1254, |
|
"end": 1276, |
|
"text": "Grnarova et al. (2016)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Clinical tasks", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "The structure of our model is illustrated in Figure 1 . We now explain each component in detail.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 45, |
|
"end": 54, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Model architectures", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "BERT word embeddings: BERT and its variants have exhibited strong performance in various tasks and we are interested in its application specifically in medical contexts. As shown in Figure 2 , medical documents can easily contain thousands of tokens. With the sequence length limit of 512 tokens, using BERT as a fine-tuning language model on long documents is practically challenging or impossible. Instead, we approach this problem in a depth-first manner and use BERT as a static feature extractor on a sentence-by-sentence basis. Such a feature-based approach with BERT has proved to be nearly as effective as the fine-tuning approach in other tasks (Devlin et al., 2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 654, |
|
"end": 675, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 182, |
|
"end": 190, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Model architectures", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We split each document into n sentences of m tokens and use a separate data loader with a sequential sampler to group them into sub-batches. The input is truncated or padded at both the sentenceand token-level. We then feed the sentences into a BERT model and take the mean of the last four encoder layers as token embeddings. For tokenization, we omit two irrelevant tokens [CLS] , which is used as a pooling mechanism in fine-tuning models, and [SEP] , which is used in next sentence prediction and sentence-pair classification tasks. BERT-related modeling and processing code comes from HuggingFace's Transformers library (Wolf et al., 2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 375, |
|
"end": 380, |
|
"text": "[CLS]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 447, |
|
"end": 452, |
|
"text": "[SEP]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 625, |
|
"end": 644, |
|
"text": "(Wolf et al., 2019)", |
|
"ref_id": "BIBREF35" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model architectures", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Given an input T = [t 11 , t 12 ... t ij ... t nm ], where t ij denotes the j th token of the i th sentence, the BERT feature extractor outputs", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model architectures", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "X = [x11 ... xnm] = BERT (T ),", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model architectures", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "where x ij is a d emb -dimensional vector (i.e., the hidden dimension of the BERT configuration) corresponding to t ij .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model architectures", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Convolutional layer: Previous studies using CNNs to process medical notes have achieved good results on tasks such as mortality prediction and ICD-9-CM diagnosis code classification (Grnarova et al., 2016; Mullenbach et al., 2018; Si and Roberts, 2019) . Specifically, a qualitative evaluation of text snippets from an attentional CNN indicated the model's ability to learn features that are deemed informative and diagnosis-relevant by a physician (Mullenbach et al., 2018) . This suggests that the CNN is suitable for extracting information regarding patient status at the phrase-level. We use a simple 1D convolutional layer along the sequence of each sentence followed by ReLU activation and 1D max-pooling to obtain sentence representations.", |
|
"cite_spans": [ |
|
{ |
|
"start": 182, |
|
"end": 205, |
|
"text": "(Grnarova et al., 2016;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 206, |
|
"end": 230, |
|
"text": "Mullenbach et al., 2018;", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 231, |
|
"end": 252, |
|
"text": "Si and Roberts, 2019)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 449, |
|
"end": 474, |
|
"text": "(Mullenbach et al., 2018)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model architectures", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Taking X as the input, the CNN outputs an n \u00d7 d f eature matrix.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model architectures", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "S = M axP ool(ReLU (Conv(X)))", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model architectures", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "where d f eature is the number of output channels of the convolution layer.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model architectures", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Transformer patient encoder: Medical notes frequently contain repeated segments of medical histories as well as plans for future treatment. Although related work in patient-clinician dialogue has explicitly used time-series information (Khattak et al., 2019), the strict temporal order of patient conditions in clinical notes can be disrupted by repeating information. Yet, the highly complex mechanisms of medical outcomes entail that the coexistence of some conditions may change the indication of others. We apply a two-layer transformer encoder on top of sentence features to capture a unified representation among descriptions. This step of encoding results in a matrix", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model architectures", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "ST = T ransf ormer(S)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model architectures", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "that shares the same dimension as S.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model architectures", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Although multi-head attention is powerful (Clark et al., 2019) , it is not yet clear how to derive rationales for model prediction from such an approach. For model explainability, we instead apply an explicit attention mechanism that is directly implementable and interpretable.", |
|
"cite_spans": [ |
|
{ |
|
"start": 42, |
|
"end": 62, |
|
"text": "(Clark et al., 2019)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model architectures", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Latent attention: The outputs of the transformer encoders are sentence-level features. To obtain patient representations, we use a latent attention mechanism adapted from similar work in if-then program synthesis (Liu et al., 2016) . The goal of latent attention is to dedicate a component of the model to explicitly learning the importance of each unit of explanation such as the sentence or word.", |
|
"cite_spans": [ |
|
{ |
|
"start": 213, |
|
"end": 231, |
|
"text": "(Liu et al., 2016)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model architectures", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The latent attention scores are calculated from sentence features using a position-wise feedforward network (Vaswani et al., 2017) . Given S T , an n-dimensional vector a input is computed as ainput = F eedF orward (ST ) and the attention weight is a = Sof tmax(ainput + a mask ), where a mask is an n-dimensional vector for which values unmasked positions are 0 and values at padding positions are \u221210, 000.", |
|
"cite_spans": [ |
|
{ |
|
"start": 108, |
|
"end": 130, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 215, |
|
"end": 220, |
|
"text": "(ST )", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model architectures", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The final n f eature -dimensional patient vector p is computed as the weighted sum of sentence features, which we can define as the dot product,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model architectures", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "p = n i=1 ST i ai = ST \u2022 a", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model architectures", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "and feeds a linear layer and a softmax classifier.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model architectures", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Classic canonical correlation analysis (CCA) provides a set of linear transformations that maximally correlate data points from multiple views (Hotelling, 1992) . We use projection-weighted CCA (PWCCA) (Morcos et al., 2018) to investigate the correlation between learned textual features and various structured data that are split into their respective clinical tests, shown in Table 1 . Given two vectors, x \u2208 R d \u00d7 n and y \u2208 R d \u00d7 m , where n and m denote feature dimensions and d denotes number of data points, the objective is where K XY denotes the cross covariance and K XX and K YY denote the covariances. Following the method of singular value CCA (Raghu et al., 2017) , we use singular value decomposition to obtain the weights w 1 , w 2 . From this, we get a total of min{n, m} canonical correlation coefficients. The high dimensionality of the feature representations may result in noisy coefficients that hinder the similarity measurements. We use projection weighting to compute a weighted mean of the canonical variates, which accounts for the importance of CCA vectors relative to the original input (Morcos et al., 2018) . The PWCCA similarity between vectors x and y is computed with", |
|
"cite_spans": [ |
|
{ |
|
"start": 143, |
|
"end": 160, |
|
"text": "(Hotelling, 1992)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 202, |
|
"end": 223, |
|
"text": "(Morcos et al., 2018)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 656, |
|
"end": 676, |
|
"text": "(Raghu et al., 2017)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 1115, |
|
"end": 1136, |
|
"text": "(Morcos et al., 2018)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 378, |
|
"end": 385, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Canonical Correlation Analysis", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "(w1 * , w2 * ) = arg max w1,w2 w1 KXYw2 \u221a w1 KXXw1w2 KYYw2 ,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Canonical Correlation Analysis", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "d pwcca (x, y) = 1 \u2212 i=1 c \u03b1 i \u03c1 (i)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Canonical Correlation Analysis", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "where \u03b1 i denotes the normalized importance weights, and \u03c1 (i) the i th CCA coefficient. We use an open-source implementation of PWCCA 1 in our experiments. Understanding the correlated information in patient features between textual and structured data may provide insight on what latent information is learnt from the text.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Canonical Correlation Analysis", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "4 Data MIMIC-III: MIMIC-III is a clinical database comprising de-identified EMRs of 58,976 hospital admissions to the critical care units of the Beth Israel Deaconess Medical Center (Johnson et al., 2016) . All variables are recorded between 2001 and 2012. Note that, although ClinicalBERT is pretrained on MIMIC-III, this does not preclude its use from downstream tasks on the same dataset; Alsentzer et al. emphasize that any impact is negligible given the size of the entire MIMIC-III corpus compared to sub-sampled task corpora. In this study, we choose sepsis and mortality tasks because these are the standard tasks of this dataset. However, our model is not specifically tailored to these tasks, and may be generalized to wide range of potential applications.", |
|
"cite_spans": [ |
|
{ |
|
"start": 182, |
|
"end": 204, |
|
"text": "(Johnson et al., 2016)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Canonical Correlation Analysis", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Data preprocessing: To avoid data leakage among hospital admissions of the same patient, we only include patients with one hospital admission. We select adult patients from the single-admission group and obtain a base population of 31,245 hospital admissions. We randomly sample negative cases to balance the dataset in both tasks.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Canonical Correlation Analysis", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "For text, we concatenate text from different note entries into one document for each patient and remove punctuation (except periods and commas), masked identifiers, digits, and single characters. When merging patients' notes, we remove sentences that have already appeared in previous notes to avoid repetition. The notes are appended in chronological order according to their timestamps and truncated to a maximum of 50,000 tokens.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Canonical Correlation Analysis", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "For mortality prediction, we do not differentiate note types. For sepsis, we find differences in the frequencies of note types between positive and negative populations, which may result in a trivially learned solution. After consulting with clinicians, we exclude note types that are irrelevant to sepsis and select nursing and physician notes only.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Canonical Correlation Analysis", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Whereas structured variables have explicit timestamps that can be easily related to symptom onset, the timestamp of a note may not. For example, a note containing descriptions of possible infection may be entered after antibiotic administration. Anchoring notes with lab measurement timestamps significantly limits the number of positive cases in our dataset, especially when compared to other studies containing similar sepsis cohorts (Section 2.3). Nonetheless, we view the imposed time-window constraints as necessary to create an honest representation of prediction. Discharge summaries and any notes written after patient outcomes occurred are excluded to avoid direct access to the solution. Unfortunately, these steps are not always taken in the literature.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Canonical Correlation Analysis", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "For the structured data used in Section 3.2, we use MIMIC-Extract 2 to ensure a standard patient population. After obtaining time-binned cohort data, we extract measurements within the same time frames as the selected notes.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Canonical Correlation Analysis", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Sepsis: Systemic inflammatory response syndrome (SIRS), characterized by abnormal body temperature, heart rate, respiratory rate, and white blood cell count, often precedes sepsis. In this task, we aim to predict whether a patient in SIRS would become septic. In contrast to previous work where the negative sepsis populations did not necessarily have SIRS (Section 2.3), our task is more restrictive, as the model must learn features that are distinctive of sepsis onset rather than general indications of SIRS. We use ICD-9-CM codes to label cases, where patients with codes for explicit sepsis, or a combination of infection and either organ failure or SIRS, are considered positive. Although ICD-9-CM codes can be unreliable (O'Malley et al., 2005), we use multiple criteria to deal with false negatives and SIRS as a filter to avoid false positives (Angus and Wax, 2001). We notice that very few notes are recorded before the first onset of SIRS, possibly due to a time delay in writing or logging notes. To compensate for the lack of data, notes before and within 24 hours of the first onset of SIRS are included. To avoid possible label leakage, we remove sentences containing mentions of \"sepsis\" or \"septic\". The final cohort contains 1262 positive cases and 1500 negative cases.",
|
"cite_spans": [ |
|
{ |
|
"start": 729, |
|
"end": 752, |
|
"text": "(O'Malley et al., 2005)", |
|
"ref_id": "BIBREF28" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Canonical Correlation Analysis", |
|
"sec_num": "3.2" |
|
}, |
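The label-leakage filter above, which drops sentences mentioning "sepsis" or "septic", can be sketched as follows; the period-based sentence splitter is an assumption:

```python
import re

def remove_leaky_sentences(document, leak_terms=("sepsis", "septic")):
    """Drop sentences mentioning outcome terms to avoid label leakage."""
    kept = []
    for sent in re.split(r"(?<=\.)\s+", document):
        if any(term in sent.lower() for term in leak_terms):
            continue  # sentence mentions the outcome -> excluded
        kept.append(sent)
    return " ".join(kept)
```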
|
{ |
|
"text": "In-ICU mortality: MIMIC-III has an expiry timestamp for patients who died in the hospital, which identifies the positive cohort for in-ICU mortality prediction. To ensure that all samples represent patient conditions in the ICU, we only include notes written within ICU stays. The dataset has 2562 positive cases and 2587 negative cases.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Canonical Correlation Analysis", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Our experiments explore 1) differences in prediction due to pretraining, 2) multiview projection, and 3) evaluable explainable AI.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "To compare the effect of pretraining BERT with domain-specific clinical data on the overall quality and performance of the model, we substitute BioBERT (Lee et al., 2020) and base BERT (Devlin et al., 2019) as the token embedding component. We run both sepsis and mortality tasks on the different *BERT models and compare the final performance. The results are shown in Table 2 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 152, |
|
"end": 170, |
|
"text": "(Lee et al., 2020)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 185, |
|
"end": 206, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 370, |
|
"end": 377, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Clinical vs Non-Clinical BERT.", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Comparing performance between tasks, the models achieve better results on mortality than on sepsis. Considering that the negative cases in the sepsis task all had SIRS, which is one of the diagnostic criteria of sepsis, the high false positive rate across all three models is expected.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Clinical vs Non-Clinical BERT.", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "ClinicalBERT models converge faster and outperform the other two models on both the sepsis and mortality tasks. BioBERT and BERT models are comparable in performance; however, BioBERT models exhibit a tendency to output positive results, leading to high recall but also high false positive rates. The fact that BioBERT does not perform better than base BERT suggests that pretraining on clinical data specifically is crucial and cannot be replaced by pretraining on general biomedical corpora.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Clinical vs Non-Clinical BERT.", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "To investigate the relationships between patient features extracted from structured and text data, we separately train RNN models to learn representations from different groups (see Table 1 ) of laboratory measurements, and we conduct PWCCA (Figure 3) to compute their similarities to patient features from the language model.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 182, |
|
"end": 189, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 241, |
|
"end": 251, |
|
"text": "(Figure 3)", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Structured vs Textual Data", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "To obtain a single vector from the time-series structured data, we construct a 2-layer unidirectional GRU network followed by a linear layer that projects the mean GRU output to a feature vector with the same dimension as the language-model feature vectors. Only patients that appear in the language-model cohort are selected. Each model is trained for 50 epochs, and the best-performing one is used to extract features.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Structured data model:", |
|
"sec_num": null |
|
}, |
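A sketch of the structured-data encoder as described: a 2-layer unidirectional GRU whose mean output is linearly projected to the language-model feature dimension. The hidden and output sizes here are illustrative assumptions, not the paper's exact values (PyTorch):

```python
import torch
import torch.nn as nn

class StructuredEncoder(nn.Module):
    """2-layer unidirectional GRU over time-binned lab measurements; the
    mean GRU output is projected to the language-model feature dimension."""
    def __init__(self, n_features, hidden=64, out_dim=128):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, num_layers=2, batch_first=True)
        self.proj = nn.Linear(hidden, out_dim)

    def forward(self, x):                  # x: (batch, time, n_features)
        out, _ = self.gru(x)               # (batch, time, hidden)
        return self.proj(out.mean(dim=1))  # mean over time, then project

feats = StructuredEncoder(n_features=10)(torch.randn(4, 24, 10))  # shape (4, 128)
```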
|
{ |
|
"text": "CCA details: To avoid the spurious correlations typically found in small datasets, the number of data points (n_sample) should be at least five times 3 the feature dimension (d_feature). Therefore, we include all shared patients between the structured and unstructured datasets, and over-sample the data for the sepsis task. We set up random baselines for each test, in which we randomly generate n_sample d_feature-dimensional vectors using the same sampling strategy as the real features. To ensure that our features are meaningful, we only analyze features extracted by models that reach an AUROC of at least 0.75. It is important to note that we constructed the structured dataset to obtain patient representations, not to compare model performance. The structured inputs contain measurements after the onset of patient outcomes, so the metrics should not be compared to those of the language model. Additionally, the structured data models fail to learn to predict sepsis from the SIRS cohort, so we include negative samples without SIRS whose data are extracted from random time frames. Model performance and PWCCA similarity (described by Morcos et al. (2018)) are listed in Table 3.",
|
"cite_spans": [ |
|
{ |
|
"start": 1138, |
|
"end": 1158, |
|
"text": "Morcos et al. (2018)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1175, |
|
"end": 1182, |
|
"text": "Table 3", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Structured data model:", |
|
"sec_num": null |
|
}, |
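The similarity comparison against a random baseline can be illustrated with plain CCA; note this is a simplification, since the paper uses projection-weighted CCA (PWCCA), which additionally weights canonical directions by how much of the representation they explain. The sample size respects the n_sample >= 5 * d_feature rule of thumb:

```python
import numpy as np

def mean_cca_similarity(X, Y):
    """Mean canonical correlation between two (n_sample x d_feature)
    representation matrices. Plain CCA, a simplification of PWCCA."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    qx, _ = np.linalg.qr(Xc)                       # orthonormal basis of each view
    qy, _ = np.linalg.qr(Yc)
    corrs = np.linalg.svd(qx.T @ qy, compute_uv=False)  # canonical correlations
    return float(np.clip(corrs, 0.0, 1.0).mean())

rng = np.random.default_rng(0)
d, n = 16, 5 * 16 * 2                              # n_sample >= 5 * d_feature
Z = rng.standard_normal((n, d))
noisy_view = Z @ rng.standard_normal((d, d)) + 0.1 * rng.standard_normal((n, d))
related = mean_cca_similarity(Z, noisy_view)       # two views of the same signal
baseline = mean_cca_similarity(rng.standard_normal((n, d)),
                               rng.standard_normal((n, d)))  # random baseline
```

As expected, two views of the same underlying signal score far higher than the random baseline.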
|
{ |
|
"text": "Feature correlation: The similarity scores are subject to confounding factors such as noise and sample size. Due to limited data availability, we can only comment on the general patterns. The structured data model and language model converge to correlated solutions, compared to random baselines. We do not observe any clear relationship between structured model performance and similarity. The features learned from all lab measurements, which supposedly encode a more comprehensive patient representation than any subgroup alone, are close to the features learned from medical notes, especially in the mortality task. For the sepsis task, the test groups that are highly related to systematic inflammation or organ dysfunction (CBC, BP, IND) show especially strong correlation with the textual features. The results suggest that our language models learn to encode the most relevant patient conditions for each outcome. Future work includes further examining representation correlations, and other multi-view models combining structured and unstructured data as inputs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Structured data model:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Evaluating model explainability remains a broad area of research. Our primary objective is a usable model that can be deployed as a real-life decision support tool. Therefore, we focus on human evaluation as our assessment of rationale quality. We outline a novel evaluation protocol that measures the quality of the extracted rationales by leveraging clinical domain expertise. To avoid arbitrary judgements, we work with the physician to tailor the definition of utility for each task; this is expanded upon in the appendix, along with a stand-alone quantitative evaluation of latent attention as an explanation mechanism on non-clinical data.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluating Explanations", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "To obtain succinct, meaningful explanations, we calculate an attention threshold score, where a denotes the sentence-level attention scores sorted in descending order, n_s is the number of sentences, and i = min(20, n_s/10). This ensures that selected sentences have higher attention scores than uniform attention and that at most 10% of the original text is included. To avoid burdening the evaluator, at most 20 sentences are selected for documents with more than 200 sentences. Figure 4 shows an example distribution of attention scores and demonstrates our explanation generation criteria. To prevent overly complicated results, we only evaluate the correctly predicted cases.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 430, |
|
"end": 438, |
|
"text": "Figure 4", |
|
"ref_id": "FIGREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluating Explanations", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "a_threshold = max(1/n_s, a_sentence_i),",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluating Explanations", |
|
"sec_num": "5.3" |
|
}, |
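The threshold rule above can be sketched as follows; taking the floor of n_s/10 (with a minimum of one sentence) is an assumption about how the fraction is rounded:

```python
def select_rationale(attn):
    """Pick explanation sentences: scores must beat uniform attention (1/n_s),
    keeping at most 10% of sentences, capped at 20. `attn` holds sentence-level
    attention weights summing to 1; returns indices in descending score order."""
    n = len(attn)
    i = min(20, max(1, n // 10))                   # number of sentences to keep
    ranked = sorted(range(n), key=lambda j: attn[j], reverse=True)
    threshold = max(1.0 / n, attn[ranked[i - 1]])  # a_threshold = max(1/n_s, a_sentence_i)
    return [j for j in ranked if attn[j] >= threshold][:i]
```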
|
{ |
|
"text": "All independent evaluations use a command-line user interface.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluating Explanations", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Labeling is designed to evaluate the informativeness of our generated explanations. Sentences are presented sequentially to an expert physician, who chooses at each step either to predict the patient outcome or to check the next sentence. Sepsis has defined diagnostic criteria that must be followed in clinical practice, and information about such criteria is not necessarily available even in complete documents. However, mortality risk assessment, despite its difficulty, is common in critical care. Therefore, we only conduct the labeling task on the mortality dataset. We compare human predictions to those of our model and note the number of selected sentences necessary for each prediction. A test case fails if the evaluator does not make a decision after reviewing all selected sentences. Table 4: Labeling task results. We list the number of cases, the percentage of concluded cases out of all cases, the percentage of correct cases out of all concluded cases, and the average number of sentences read for both correct (c) and incorrect (i) cases. This",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 795, |
|
"end": 802, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Labeling task", |
|
"sec_num": "5.3.1" |
|
}, |
|
{ |
|
"text": "method evaluates whether the attended sentences are sufficient to provide enough information for a clinical decision, and empirically evaluates the number of sentences needed for rationales. The results are presented in Table 4 . On average, the evaluator reaches a correct conclusion in mortality prediction 82.7% of the time by reading approximately 4 sentences per case (or a selected 0.5% of the note, on average). Such evidence strongly suggests that our model is capable of extracting the most relevant information from long documents. We also observe a general pattern that fewer sentences are needed for a correctly predicted case, which indicates that the ordering of sentences based on attention is generally reliable.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 220, |
|
"end": 227, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Labeling task", |
|
"sec_num": "5.3.1" |
|
}, |
|
{ |
|
"text": "Interestingly, the evaluator correctly predicts almost all negative cases in the mortality task, but not the positive cases. Multiple reasons may account for the high false negative rate. First, mortality prediction is an intrinsically challenging task for humans. A bias towards survival may naturally occur when a sentence can be interpreted differently in various contexts. Second, explanations for negative cases are more likely to be independent of contextual information that is not included in the rationales. Our evaluator comments that a seemingly poor patient condition may translate to completely opposite outcomes depending on the coexistence of other conditions. In real-life applications, providing full documents with highlighted explanations may be a simple solution that directs users' attention to the most important parts without losing reference to additional context.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Labeling task", |
|
"sec_num": "5.3.1" |
|
}, |
|
{ |
|
"text": "In a second evaluation, we sample cases not used in the labeling task. We present model predictions and the entirety of the rationales, sentence by sentence, to an expert physician. The physician is instructed to decide whether each sentence in the rationale contains information that helps explain the model decision. To avoid arbitrary judgements, we work with the physician to develop clear definitions of explanation utility, as shown in the appendix. This method assesses the average informativeness of the selected sentences as well as the usability of our model for clinical decision support. Given the characteristics of mortality and sepsis (see the appendix for a detailed discussion), the evaluation is conducted at the sentence level and the case level for the two tasks, respectively. Table 5 summarizes the results. Across the positive and negative cases, an average of 72.2% of sentences in the mortality task and 86% of cases in the sepsis task are rated as helpful for understanding model decisions. A closer look at the results shows that 80% of the first four sentences are rated as helpful, which indicates that the rationale-generation algorithm should be refined in future work to further exclude sentences with lower attention scores (see Figure 4). Nonetheless, the application of our model as an explainable decision support tool is very promising.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 784, |
|
"end": 791, |
|
"text": "Table 5", |
|
"ref_id": "TABREF6" |
|
}, |
|
{ |
|
"start": 1264, |
|
"end": 1274, |
|
"text": "Figure 4)", |
|
"ref_id": "FIGREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Rating task", |
|
"sec_num": "5.3.2" |
|
}, |
|
{ |
|
"text": "Language can provide valuable support to improve clinical decision-making. We conduct a diverse set of experiments to explore several aspects of the applicability of deep NLP in the clinical domain. We also address challenges in extracting medical documents that are representative of a predictive task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We augment the power of domain-specific BERT and build a hierarchical CNN-Transformer that can potentially be applied to any long-document processing task. The model achieves AUROC scores of 0.75 and 0.78 on sepsis and mortality tasks, respectively. We also address model explainability by experimenting with a simple (yet effective) linear attention mechanism, and emphasize the interaction between models and users in the design of a novel protocol to evaluate explanations. Not only are we able to sufficiently predict cases with performance comparable to models that use structured EMR data, but we are also able to provide useful rationales to support the predictions, as validated by medical domain expertise. This has important implications for real-world application of explainable clinical decision support from text.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Appendix A. On explainability evaluation.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Quantitatively validating latent attention as explanation: As previously noted, evaluating language model explanations is not yet standardized. Despite the effort to make human evaluation fair and reliable, such qualitative measurements are still prone to bias and subjectivity. To validate that latent attention can be used as an explanation, we conduct a stand-alone experiment on the BeerAdvocate dataset used by McAuley et al. (2012) and adapted by Lei et al. (2016) . This is a dataset that has ground-truth annotations of sentences relevant to prediction results. Although the dataset is not crafted for the purpose of rationale evaluation, we use it as a proxy to examine the quality of our attention scores.", |
|
"cite_spans": [ |
|
{ |
|
"start": 416, |
|
"end": 437, |
|
"text": "McAuley et al. (2012)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 453, |
|
"end": 470, |
|
"text": "Lei et al. (2016)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "[Figure 5 legend — blue background: attended tokens in the annotation; red background: attended tokens not in the annotation; underscore: the annotation.] The full BeerAdvocate dataset contains 1.5 million beer reviews describing four aspects (i.e., appearance, smell, palate, and taste), each corresponding to a rating on a scale of 0 to 5. Lei et al. (2016) published a subset of 90k reviews selected to minimize the correlation between appearance and the other aspects. In our experiment, we use these 90k reviews for training and 994 annotated reviews for testing. The training set only has rating labels, whereas the test set has both rating labels and human annotations of sentence-level relevancy. Since all aspects have exactly the same setup, it suffices to use appearance rating prediction as a proof of concept. We build a model with only two components, described in Section 3.1, namely BERT (the pretrained base-cased model) and latent attention. We feed static token embeddings from BERT to a latent attention layer, which outputs sequence representations to be used for regression through a linear layer with a sigmoid activation. We train the model for 20 epochs and select the best-performing one for testing.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "In contrast to our clinical model, this model only attends to individual tokens and only generates word-level explanations. For words separated by the WordPiece tokenizer, we merge the tokens and average the attention weights. For each sentence, we sort the words based on their attention weights and take the top n words as the prediction rationale, where n equals the total length of the human-annotated sentences. We only use attention mechanisms without additional constraints, such as selection continuity, which makes the testing task even more challenging, as the annotations are ranges of words.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
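A sketch of the WordPiece merging and top-n selection described above; the '##'-prefixed continuation tokens follow the standard BERT convention:

```python
def merge_wordpieces(tokens, weights):
    """Merge WordPiece tokens ('##'-prefixed continuations) back into words,
    averaging their attention weights."""
    words, word_weights = [], []
    for tok, w in zip(tokens, weights):
        if tok.startswith("##") and words:
            words[-1] += tok[2:]          # glue continuation onto previous word
            word_weights[-1].append(w)
        else:
            words.append(tok)
            word_weights.append([w])
    return words, [sum(ws) / len(ws) for ws in word_weights]

def top_n_words(words, weights, n):
    """Top-n words by averaged attention, used as the word-level rationale."""
    order = sorted(range(len(words)), key=lambda i: weights[i], reverse=True)
    return [words[i] for i in order[:n]]
```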
|
{ |
|
"text": "The model is evaluated according to mean squared error (MSE) and rationale precision", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "P_rationale = Σ_{i=1}^{N} |S_i ∩ A_i| / Σ_{i=1}^{N} |S_i|,",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "where N is the number of test cases, y is the ground-truth appearance rating, ŷ is the predicted rating, A_i is the set of word indices covered by the annotation, S_i is the set of word indices selected as the model explanation, and |S_i| = |A_i|.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
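The rationale precision can be computed as below, using the intersection |S_i ∩ A_i| of model-selected and annotated word indices, summed over test cases:

```python
def rationale_precision(selected, annotated):
    """P_rationale = sum_i |S_i ∩ A_i| / sum_i |S_i|, where S_i are the
    model-selected word indices and A_i the annotated ones for case i."""
    hits = sum(len(set(s) & set(a)) for s, a in zip(selected, annotated))
    total = sum(len(set(s)) for s in selected)
    return hits / total if total else 0.0
```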
|
{ |
|
"text": "Our model reaches a rationale precision of 76.39%, which indicates that our most attended words are mostly consistent with the annotations. Figure 5 shows an example of appearance test results. The experiment demonstrates the usability of latent attention as an explanation mechanism.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 140, |
|
"end": 148, |
|
"text": "Figure 5", |
|
"ref_id": "FIGREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Definition of explanation utility in the rating task: For mortality, each sentence is evaluated individually based on how the described situation would contribute to a patient's survival rate. Sentences describing highly life-threatening complications (such as multiple organ failures) support a positive prediction, whereas sentences indicating improving conditions (such as stable lab measurements) support a negative prediction. In both cases, these sentences are considered helpful. Sentences that are irrelevant (i.e., that support neither a positive nor negative prediction) are considered unhelpful in both populations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Many of the conditions that present with sepsis onset (such as hypotension) can have numerous etiologies. Diagnostic criteria specify that bacteremia (i.e., bacteria in the bloodstream) must be present in order to predict the development of sepsis. Yet the administration of antibiotics is not considered a direct indication of bacteremia without other indications of potential sepsis. Therefore, sentences describing sepsis-related symptoms are not rated as helpful in understanding a positive sepsis prediction until an indication of infection (for example, compromised skin integrity) also appears, and vice versa. For negative cases, sentences that are either irrelevant to sepsis or explain other origins of sepsis-related symptoms are rated as helpful. Given this definition, the existence of any helpful sentence means the explanation is valid for a positive case. Similarly, the existence of any unhelpful sentence invalidates a negative case. Figure 6: Example explanations. Highlighted sentences are rationales picked by our model. Elaborations on the meanings of sentences are provided in footnotes. These examples have been edited for increased privacy.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 608, |
|
"end": 616, |
|
"text": "Figure 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Examples of sepsis and mortality explanations are shown in Figure 6 . We truncate and edit these texts to avoid data disclosure.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 59, |
|
"end": 67, |
|
"text": "Figure 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "https://github.com/google/svcca/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/MLforHealth/MIMIC_ Extract", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Experiments demonstrating the choice of sample sizes in CCA can be found at https://github.com/google/svcca", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "Rudzicz is supported by a CIFAR Chair in artificial intelligence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Interpretable machine learning in healthcare", |
|
"authors": [ |
|
{ |
|
"first": "Carly", |
|
"middle": [], |
|
"last": "Muhammad Aurangzeb Ahmad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ankur", |
|
"middle": [], |
|
"last": "Eckert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Teredesai", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 ACM International Conference on Bioinformatics, Computational Biology, and Health Informatics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "559--560", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1109/ICHI.2018.00095" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Muhammad Aurangzeb Ahmad, Carly Eckert, and Ankur Teredesai. 2018. Interpretable machine learn- ing in healthcare. In Proceedings of the 2018 ACM International Conference on Bioinformatics, Com- putational Biology, and Health Informatics, pages 559-560. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Epidemiology of sepsis: An update", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Derek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Randy", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Angus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Wax", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Critical care medicine", |
|
"volume": "29", |
|
"issue": "7", |
|
"pages": "109--116", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1097/00003246-200107001-00035" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Derek C Angus and Randy S Wax. 2001. Epidemiol- ogy of sepsis: An update. Critical care medicine, 29(7):S109-S116.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "The revival of the notes field: Leveraging the unstructured content in electronic health records", |
|
"authors": [ |
|
{ |
|
"first": "Michela", |
|
"middle": [], |
|
"last": "Assale", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Linda", |
|
"middle": [ |
|
"Greta" |
|
], |
|
"last": "Dui", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrea", |
|
"middle": [], |
|
"last": "Cina", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrea", |
|
"middle": [], |
|
"last": "Seveso", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Federico", |
|
"middle": [], |
|
"last": "Cabitza", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Frontiers in Medicine", |
|
"volume": "6", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3389/fmed.2019.00066" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michela Assale, Linda Greta Dui, Andrea Cina, Andrea Seveso, and Federico Cabitza. 2019. The revival of the notes field: Leveraging the unstructured content in electronic health records. Frontiers in Medicine, 6.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Artificial intelligence, bias and clinical safety", |
|
"authors": [ |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Challen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joshua", |
|
"middle": [], |
|
"last": "Denny", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martin", |
|
"middle": [], |
|
"last": "Pitt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Gompels", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tom", |
|
"middle": [], |
|
"last": "Edwards", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Krasimira", |
|
"middle": [], |
|
"last": "Tsaneva-Atanasova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "BMJ Qual Saf", |
|
"volume": "28", |
|
"issue": "3", |
|
"pages": "231--237", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1136/bmjqs-2018-008370" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Robert Challen, Joshua Denny, Martin Pitt, Luke Gompels, Tom Edwards, and Krasimira Tsaneva- Atanasova. 2019. Artificial intelligence, bias and clinical safety. BMJ Qual Saf, 28(3):231-237.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Quick sepsis-related organ failure assessment, systemic inflammatory response syndrome, and early warning scores for detecting clinical deterioration in infected patients outside the intensive care unit", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Matthew", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ashley", |
|
"middle": [], |
|
"last": "Churpek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xuan", |
|
"middle": [], |
|
"last": "Snyder", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sarah", |
|
"middle": [], |
|
"last": "Han", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Natasha", |
|
"middle": [], |
|
"last": "Sokol", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Pettit", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Michael", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dana", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Howell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Edelson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "American Journal of Respiratory and Critical Care Medicine", |
|
"volume": "195", |
|
"issue": "7", |
|
"pages": "906--911", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1164/rccm.201604-0854OC" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew M Churpek, Ashley Snyder, Xuan Han, Sarah Sokol, Natasha Pettit, Michael D Howell, and Dana P Edelson. 2017. Quick sepsis-related organ failure assessment, systemic inflammatory response syndrome, and early warning scores for detecting clinical deterioration in infected patients outside the intensive care unit. American Journal of Respira- tory and Critical Care Medicine, 195(7):906-911.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "What does BERT look at? an analysis of BERT's attention", |
|
"authors": [ |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Urvashi", |
|
"middle": [], |
|
"last": "Khandelwal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Omer", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "276--286", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W19-4828" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning. 2019. What does BERT look at? an analysis of BERT's attention. In Pro- ceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 276-286, Florence, Italy. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Predicting severe sepsis using text from the electronic health record", |
|
"authors": [ |
|
{ |
|
"first": "Phil", |
|
"middle": [], |
|
"last": "Culliton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Levinson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alice", |
|
"middle": [], |
|
"last": "Ehresman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joshua", |
|
"middle": [], |
|
"last": "Wherry", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jay", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Steingrub", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [ |
|
"I" |
|
], |
|
"last": "Gallant", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Phil Culliton, Michael Levinson, Alice Ehresman, Joshua Wherry, Jay S Steingrub, and Stephen I Gal- lant. 2017. Predicting severe sepsis using text from the electronic health record.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Prediction of sepsis in the intensive care unit with minimal electronic health record data: A machine learning approach", |
|
"authors": [ |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Desautels", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Calvert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jana", |
|
"middle": [], |
|
"last": "Hoffman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Melissa", |
|
"middle": [], |
|
"last": "Jay", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yaniv", |
|
"middle": [], |
|
"last": "Kerem", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lisa", |
|
"middle": [], |
|
"last": "Shieh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Shimabukuro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Uli", |
|
"middle": [], |
|
"last": "Chettipally", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mitchell", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Feldman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Barton", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "JMIR Medical Informatics", |
|
"volume": "4", |
|
"issue": "3", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.2196/MEDINFORM.5909" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas Desautels, Jacob Calvert, Jana Hoffman, Melissa Jay, Yaniv Kerem, Lisa Shieh, David Shimabukuro, Uli Chettipally, Mitchell D Feldman, Chris Barton, et al. 2016. Prediction of sepsis in the intensive care unit with minimal electronic health record data: A machine learning approach. JMIR Medical Informatics, 4(3):e28.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "NAACL-HLT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1423" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirec- tional transformers for language understanding. In NAACL-HLT.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Unfolding physiological state: Mortality modelling in intensive care units", |
|
"authors": [ |
|
{ |
|
"first": "Marzyeh", |
|
"middle": [], |
|
"last": "Ghassemi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tristan", |
|
"middle": [], |
|
"last": "Naumann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Finale", |
|
"middle": [], |
|
"last": "Doshi-Velez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicole", |
|
"middle": [], |
|
"last": "Brimmer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rohit", |
|
"middle": [], |
|
"last": "Joshi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anna", |
|
"middle": [], |
|
"last": "Rumshisky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Szolovits", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "75--84", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1145/2623330.2623742" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marzyeh Ghassemi, Tristan Naumann, Finale Doshi-Velez, Nicole Brimmer, Rohit Joshi, Anna Rumshisky, and Peter Szolovits. 2014. Unfolding physiological state: Mortality modelling in inten- sive care units. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 75-84.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Explainable Artificial Intelligence for Safe Intraoperative Decision Support", |
|
"authors": [], |
|
"year": null, |
|
"venue": "JAMA Surgery", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "10--11", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1001/jamasurg.2019.2821" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Explainable Artificial Intelligence for Safe Intraoperative Decision Support. JAMA Surgery, pages 10-11.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Neural document embeddings for intensive care patient mortality prediction", |
|
"authors": [ |
|
{ |
|
"first": "Paulina", |
|
"middle": [], |
|
"last": "Grnarova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Florian", |
|
"middle": [], |
|
"last": "Schmidt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephanie", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Hyland", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carsten", |
|
"middle": [], |
|
"last": "Eickhoff", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "NIPS 2016 Workshop on Machine Learning for Health", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Paulina Grnarova, Florian Schmidt, Stephanie L Hy- land, and Carsten Eickhoff. 2016. Neural document embeddings for intensive care patient mortality pre- diction. NIPS 2016 Workshop on Machine Learning for Health.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "From vital signs to clinical outcomes for patients with sepsis: A machine learning basis for a clinical decision support system", |
|
"authors": [ |
|
{ |
|
"first": "Eren", |
|
"middle": [], |
|
"last": "Gultepe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Green", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hien", |
|
"middle": [], |
|
"last": "Nguyen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Adams", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Timothy", |
|
"middle": [], |
|
"last": "Albertson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilias", |
|
"middle": [], |
|
"last": "Tagkopoulos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Journal of the American Medical Informatics Association", |
|
"volume": "21", |
|
"issue": "2", |
|
"pages": "315--325", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1136/amiajnl-2013-001815" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eren Gultepe, Jeffrey P Green, Hien Nguyen, Jason Adams, Timothy Albertson, and Ilias Tagkopou- los. 2013. From vital signs to clinical outcomes for patients with sepsis: A machine learning ba- sis for a clinical decision support system. Journal of the American Medical Informatics Association, 21(2):315-325.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Creating an automated trigger for sepsis clinical decision support at emergency department triage using machine learning", |
|
"authors": [ |
|
{ |
|
"first": "Steven", |
|
"middle": [], |
|
"last": "Horng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Sontag", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoni", |
|
"middle": [], |
|
"last": "Halpern", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yacine", |
|
"middle": [], |
|
"last": "Jernite", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nathan", |
|
"middle": [ |
|
"I" |
|
], |
|
"last": "Shapiro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Larry", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Nathanson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "PloS ONE", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1371/journal.pone.0174708" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Steven Horng, David A Sontag, Yoni Halpern, Yacine Jernite, Nathan I Shapiro, and Larry A Nathanson. Creating an automated trigger for sepsis clinical de- cision support at emergency department triage using machine learning. PloS ONE.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Relations between two sets of variates", |
|
"authors": [ |
|
{ |
|
"first": "Harold", |
|
"middle": [], |
|
"last": "Hotelling", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Breakthroughs in statistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "162--190", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1007/978-1-4612-4380-9_14" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Harold Hotelling. 1992. Relations between two sets of variates. In Breakthroughs in statistics, pages 162- 190. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Attention is not explanation", |
|
"authors": [ |
|
{ |
|
"first": "Sarthak", |
|
"middle": [], |
|
"last": "Jain", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Byron", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Wallace", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "NAACL-HLT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1357" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sarthak Jain and Byron C. Wallace. 2019. Attention is not explanation. In NAACL-HLT.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "MIMIC-III, a freely accessible critical care database", |
|
"authors": [ |
|
{ |
|
"first": "Alistair", |
|
"middle": [ |
|
"E", |
|
"W" |
|
], |
|
"last": "Johnson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tom", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Pollard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lu", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Li-wei", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Lehman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mengling", |
|
"middle": [], |
|
"last": "Feng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohammad", |
|
"middle": [], |
|
"last": "Ghassemi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Moody", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Szolovits", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Leo", |
|
"middle": [ |
|
"Anthony" |
|
], |
|
"last": "Celi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roger", |
|
"middle": [ |
|
"G" |
|
], |
|
"last": "Mark", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1038/sdata.2016.35" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alistair EW Johnson, Tom J Pollard, Lu Shen, H Lehman Li-wei, Mengling Feng, Moham- mad Ghassemi, Benjamin Moody, Peter Szolovits, Leo Anthony Celi, and Roger G Mark. 2016. MIMIC-III, a freely accessible critical care database. Scientific data, 3:160035.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Allocation of physician time in ambulatory practice: A time and motion study in 4 specialties", |
|
"authors": [ |
|
{ |
|
"first": "Faiza", |
|
"middle": [ |
|
"Khan" |
|
], |
|
"last": "Khattak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Serena", |
|
"middle": [], |
|
"last": "Jeblee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [], |
|
"last": "Crampton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Muhammad", |
|
"middle": [], |
|
"last": "Mamdani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Frank", |
|
"middle": [], |
|
"last": "Rudzicz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "MED-INFO 2019", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1512--1513", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3233/SHTI190510" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Faiza Khan Khattak, Serena Jeblee, Noah Crampton, Muhammad Mamdani, and Frank Rudzicz. 2019. Allocation of physician time in ambulatory practice: A time and motion study in 4 specialties. In MED- INFO 2019, pages 1512-1513.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Biobert: a pre-trained biomedical language representation model for biomedical text mining", |
|
"authors": [ |
|
{ |
|
"first": "Jinhyuk", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wonjin", |
|
"middle": [], |
|
"last": "Yoon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sungdong", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sunkyu", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chan", |
|
"middle": [], |
|
"last": "Ho So", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jaewoo", |
|
"middle": [], |
|
"last": "Kang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1093/bioinformatics/btz682" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, D. Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020. Biobert: a pre-trained biomedical language represen- tation model for biomedical text mining. Bioinfor- matics.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Rationalizing neural predictions", |
|
"authors": [ |
|
{ |
|
"first": "Tao", |
|
"middle": [], |
|
"last": "Lei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Barzilay", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Jaakkola", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D16-1011" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tao Lei, R. Barzilay, and T. Jaakkola. 2016. Rational- izing neural predictions. In EMNLP.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Latent attention for if-then program synthesis", |
|
"authors": [ |
|
{ |
|
"first": "Chang", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xinyun", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Shin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mingcheng", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dawn", |
|
"middle": [], |
|
"last": "Song", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4574--4582", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chang Liu, Xinyun Chen, Richard Shin, Mingcheng Chen, and Dawn Song. 2016. Latent attention for if-then program synthesis. In Advances in Neural Information Processing Systems, pages 4574-4582.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Learning attitudes and attributes from multiaspect reviews", |
|
"authors": [ |
|
{ |
|
"first": "Julian", |
|
"middle": [], |
|
"last": "Mcauley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jure", |
|
"middle": [], |
|
"last": "Leskovec", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Jurafsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "2012 IEEE 12th International Conference on Data Mining", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1020--1025", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1109/ICDM.2012.110" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Julian McAuley, Jure Leskovec, and Dan Jurafsky. 2012. Learning attitudes and attributes from multi- aspect reviews. In 2012 IEEE 12th International Conference on Data Mining, pages 1020-1025. IEEE.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Insights on representational similarity in neural networks with canonical correlation", |
|
"authors": [ |
|
{ |
|
"first": "Ari", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Morcos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maithra", |
|
"middle": [], |
|
"last": "Raghu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Samy", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "NeurIPS", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ari S. Morcos, Maithra Raghu, and Samy Bengio. 2018. Insights on representational similarity in neu- ral networks with canonical correlation. In NeurIPS.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Explainable prediction of medical codes from clinical text", |
|
"authors": [ |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Mullenbach", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sarah", |
|
"middle": [], |
|
"last": "Wiegreffe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jon", |
|
"middle": [], |
|
"last": "Duke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jimeng", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Eisenstein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1101--1111", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N18-1100" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "James Mullenbach, Sarah Wiegreffe, Jon Duke, Jimeng Sun, and Jacob Eisenstein. 2018. Explainable pre- diction of medical codes from clinical text. pages 1101-1111.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Rule-based information extraction from patients' clinical data", |
|
"authors": [ |
|
{ |
|
"first": "Agnieszka", |
|
"middle": [], |
|
"last": "Mykowiecka", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ma\u0142gorzata", |
|
"middle": [], |
|
"last": "Marciniak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anna", |
|
"middle": [], |
|
"last": "Kup\u015b\u0107", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Journal of Biomedical Informatics", |
|
"volume": "42", |
|
"issue": "5", |
|
"pages": "923--936", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1016/j.jbi.2009.07.007" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Agnieszka Mykowiecka, Ma\u0142gorzata Marciniak, and Anna Kup\u015b\u0107. 2009. Rule-based information extrac- tion from patients' clinical data. Journal of Biomed- ical Informatics, 42(5):923-936.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "An interpretable machine learning model for accurate prediction of sepsis in the ICU", |
|
"authors": [ |
|
{ |
|
"first": "Shamim", |
|
"middle": [], |
|
"last": "Nemati", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andre", |
|
"middle": [], |
|
"last": "Holder", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fereshteh", |
|
"middle": [], |
|
"last": "Razmi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthew", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Stanley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gari", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Clifford", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Timothy", |
|
"middle": [ |
|
"G" |
|
], |
|
"last": "Buchman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Critical care medicine", |
|
"volume": "46", |
|
"issue": "4", |
|
"pages": "547--553", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1097/CCM.0000000000002936" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shamim Nemati, Andre Holder, Fereshteh Razmi, Matthew D Stanley, Gari D Clifford, and Timothy G Buchman. 2018. An interpretable machine learning model for accurate prediction of sepsis in the ICU. Critical care medicine, 46(4):547-553.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Measuring diagnoses: ICD code accuracy", |
|
"authors": [ |
|
{ |
|
"first": "Kimberly", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "O'Malley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karon", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Cook", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matt", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Price", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kimberly", |
|
"middle": [ |
|
"Raiford" |
|
], |
|
"last": "Wildes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Hurdle", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carol", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Ashton", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Health services research", |
|
"volume": "40", |
|
"issue": "5p2", |
|
"pages": "1620--1639", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1111/J.1475-6773.2005.00444.X" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kimberly J O'Malley, Karon F Cook, Matt D Price, Kimberly Raiford Wildes, John F Hurdle, and Carol M Ashton. 2005. Measuring diagnoses: ICD code accuracy. Health services research, 40(5p2):1620-1639.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "SVCCA: Singular vector canonical correlation analysis for deep learning dynamics and interpretability", |
|
"authors": [ |
|
{ |
|
"first": "Maithra", |
|
"middle": [], |
|
"last": "Raghu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Justin", |
|
"middle": [], |
|
"last": "Gilmer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Yosinski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jascha", |
|
"middle": [], |
|
"last": "Sohl-Dickstein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "6076--6085", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Maithra Raghu, Justin Gilmer, Jason Yosinski, and Jascha Sohl-Dickstein. 2017. SVCCA: Singular vec- tor canonical correlation analysis for deep learning dynamics and interpretability. In Advances in Neu- ral Information Processing Systems, pages 6076- 6085.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Is attention interpretable?", |
|
"authors": [ |
|
{ |
|
"first": "Sofia", |
|
"middle": [], |
|
"last": "Serrano", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2931--2951", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P19-1282" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sofia Serrano and Noah A. Smith. 2019. Is attention interpretable? In Proceedings of the 57th Annual Meeting of the Association for Computational Lin- guistics, pages 2931-2951, Florence, Italy. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Deep patient representation of clinical notes via multi-task learning for mortality prediction", |
|
"authors": [ |
|
{ |
|
"first": "Yuqi", |
|
"middle": [], |
|
"last": "Si", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kirk", |
|
"middle": [], |
|
"last": "Roberts", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "AMIA Summits on Translational Science Proceedings", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yuqi Si and Kirk Roberts. 2019. Deep patient rep- resentation of clinical notes via multi-task learning for mortality prediction. AMIA Summits on Transla- tional Science Proceedings, 2019:779.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Prediction of in-hospital mortality in emergency department patients with sepsis: A local big data-driven, machine learning approach. Academic emergency medicine", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"Andrew" |
|
], |
|
"last": "Taylor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joseph", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Pare", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arjun", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Venkatesh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hani", |
|
"middle": [], |
|
"last": "Mowafi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edward", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Melnick", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [], |
|
"last": "Fleischman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"Kennedy" |
|
], |
|
"last": "Hall", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "23", |
|
"issue": "", |
|
"pages": "269--278", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1111/acem.12876" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R Andrew Taylor, Joseph R Pare, Arjun K Venkatesh, Hani Mowafi, Edward R Melnick, William Fleis- chman, and M Kennedy Hall. 2016. Prediction of in-hospital mortality in emergency department pa- tients with sepsis: A local big data-driven, machine learning approach. Academic emergency medicine, 23(3):269-278.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Attention is all you need", |
|
"authors": [ |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Vaswani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noam", |
|
"middle": [], |
|
"last": "Shazeer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Niki", |
|
"middle": [], |
|
"last": "Parmar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jakob", |
|
"middle": [], |
|
"last": "Uszkoreit", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Llion", |
|
"middle": [], |
|
"last": "Jones", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aidan", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Gomez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kaiser", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Illia", |
|
"middle": [], |
|
"last": "Polosukhin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "6000--6010", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.5555/3295222.3295349" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, undefine- dukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of the 31st Interna- tional Conference on Neural Information Processing Systems, NIPS'17, page 6000-6010, Red Hook, NY, USA. Curran Associates Inc.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Attention is not not explanation", |
|
"authors": [ |
|
{ |
|
"first": "Sarah", |
|
"middle": [], |
|
"last": "Wiegreffe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuval", |
|
"middle": [], |
|
"last": "Pinter", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "11--20", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-1002" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sarah Wiegreffe and Yuval Pinter. 2019. Attention is not not explanation. pages 11-20.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Huggingface's transformers: State-of-the-art natural language processing", |
|
"authors": [ |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Wolf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lysandre", |
|
"middle": [], |
|
"last": "Debut", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Victor", |
|
"middle": [], |
|
"last": "Sanh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julien", |
|
"middle": [], |
|
"last": "Chaumond", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Clement", |
|
"middle": [], |
|
"last": "Delangue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anthony", |
|
"middle": [], |
|
"last": "Moi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pierric", |
|
"middle": [], |
|
"last": "Cistac", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tim", |
|
"middle": [], |
|
"last": "Rault", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R\u00e9mi", |
|
"middle": [], |
|
"last": "Louf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Morgan", |
|
"middle": [], |
|
"last": "Funtowicz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jamie", |
|
"middle": [], |
|
"last": "Brew", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "ArXiv", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtow- icz, and Jamie Brew. 2019. Huggingface's trans- formers: State-of-the-art natural language process- ing. ArXiv, abs/1910.03771.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Clinical text classification with rule-based features and knowledge-guided convolutional neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Liang", |
|
"middle": [], |
|
"last": "Yao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chengsheng", |
|
"middle": [], |
|
"last": "Mao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuan", |
|
"middle": [], |
|
"last": "Luo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "BMC Medical Informatics and Decision Making", |
|
"volume": "19", |
|
"issue": "3", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1186/s12911-019-0781-4" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Liang Yao, Chengsheng Mao, and Yuan Luo. 2019. Clinical text classification with rule-based features and knowledge-guided convolutional neural net- works. BMC Medical Informatics and Decision Making, 19(3):71.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"num": null, |
|
"text": "Model architecture and data flow. Each patient document undergoes various levels of feature extraction to arrive at token-, sentence-, and patient-level representations. The explicit attention layer provides a latent representation for a patient. The final, attended-to patient representation is used in the classification task.", |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"uris": null, |
|
"num": null, |
|
"text": "Distribution of documents based on token lengths for the mortality dataset. 2842 out of 5147 documents exceed the token limits of BERT, indicated by the vertical dashed line.", |
|
"type_str": "figure" |
|
}, |
|
"FIGREF2": { |
|
"uris": null, |
|
"num": null, |
|
"text": "Visualization of PWCCA. The patient representations are taken from the models before the classifier. First, a) a latent space is learned with SVCCA; then, b) The original representation is projected onto the learned latent space, and the PWCCA is computed.", |
|
"type_str": "figure" |
|
}, |
|
"FIGREF3": { |
|
"uris": null, |
|
"num": null, |
|
"text": "Example attention distribution over sentences in one patient document.", |
|
"type_str": "figure" |
|
}, |
|
"FIGREF4": { |
|
"uris": null, |
|
"num": null, |
|
"text": "Test case example of BeerAdvocate dataset.", |
|
"type_str": "figure" |
|
}, |
|
"TABREF1": { |
|
"num": null, |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Mapping of clinical tests to their corresponding structured variables.", |
|
"content": "<table/>" |
|
}, |
|
"TABREF2": { |
|
"num": null, |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Test performance scores using different BERT models.", |
|
"content": "<table><tr><td/><td>Model</td><td colspan=\"8\">Sepsis AUROC F1 Precision Recall AUROC F1 Precision Recall Mortality</td></tr><tr><td/><td>BERT</td><td>0.72</td><td>69.3</td><td>64.3</td><td>75.0</td><td>0.75</td><td>74.2</td><td>77.7</td><td>70.9</td></tr><tr><td/><td>BioBERT</td><td>0.72</td><td>71.2</td><td>59.8</td><td>88.1</td><td>0.76</td><td>76.8</td><td>72.6</td><td>81.6</td></tr><tr><td/><td>ClinicalBERT</td><td>0.75</td><td>73.0</td><td>64.4</td><td>84.3</td><td>0.78</td><td>78.9</td><td>78.2</td><td>79.7</td></tr><tr><td colspan=\"5\">Sepsis Table 2: Features AUROC Similarity AUROC Similarity Mortality</td><td/><td/><td/><td/></tr><tr><td>All</td><td>0.75</td><td>0.68</td><td>0.92</td><td>0.762</td><td/><td/><td/><td/></tr><tr><td>CBC</td><td>0.77</td><td>0.80</td><td>0.5</td><td>-</td><td/><td/><td/><td/></tr><tr><td>PT</td><td>0.76</td><td>0.60</td><td>0.5</td><td>-</td><td/><td/><td/><td/></tr><tr><td>UCE</td><td>0.68</td><td>-</td><td>0.57</td><td>-</td><td/><td/><td/><td/></tr><tr><td>ABG</td><td>0.77</td><td>0.60</td><td>0.62</td><td>-</td><td/><td/><td/><td/></tr><tr><td>BP</td><td>0.76</td><td>0.65</td><td>0.5</td><td>-</td><td/><td/><td/><td/></tr><tr><td>IND</td><td>0.77</td><td>0.93</td><td>0.88</td><td>0.686</td><td/><td/><td/><td/></tr><tr><td>PF</td><td>0.78</td><td>0.61</td><td>0.62</td><td>-</td><td/><td/><td/><td/></tr><tr><td>PV</td><td>0.5</td><td>-</td><td>0.5</td><td>-</td><td/><td/><td/><td/></tr><tr><td>Random</td><td>-</td><td>0.45</td><td>-</td><td>0.361</td><td/><td/><td/><td/></tr></table>" |
|
}, |
|
"TABREF3": { |
|
"num": null, |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Structured model test performance and PWCCA similarity to text features. The All category encompasses all test groups and their features.Table 1shows the full list of features and their corresponding test categories.", |
|
"content": "<table/>" |
|
}, |
|
"TABREF6": { |
|
"num": null, |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Rating task results.", |
|
"content": "<table/>" |
|
} |
|
} |
|
} |
|
} |