ACL-OCL / Base_JSON /prefixC /json /clinicalnlp /2020.clinicalnlp-1.2.json
{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:27:17.577057Z"
},
"title": "Multiple Sclerosis Severity Classification From Clinical Text",
"authors": [
{
"first": "Alister",
"middle": [],
"last": "D'costa",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Toronto",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Stefan",
"middle": [],
"last": "Denkovski",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Toronto",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Michal",
"middle": [],
"last": "Malyska",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Toronto",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Sae",
"middle": [
"Young"
],
"last": "Moon",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Toronto",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Brandon",
"middle": [],
"last": "Rufino",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Toronto",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Zhen",
"middle": [],
"last": "Yang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Unity Health Toronto",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Taylor",
"middle": [],
"last": "Killian",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Toronto",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Marzyeh",
"middle": [],
"last": "Ghassemi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Toronto",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Multiple Sclerosis (MS) is a chronic, inflammatory and degenerative neurological disease, which is monitored by a specialist using the Expanded Disability Status Scale (EDSS) and recorded in unstructured text in the form of a neurology consult note. An EDSS measurement contains an overall 'EDSS' score and several functional subscores. Typically, expert knowledge is required to interpret consult notes and generate these scores. Previous approaches used limited context length Word2Vec embeddings and keyword searches to predict scores given a consult note, but often failed when scores were not explicitly stated. In this work, we present MS-BERT, the first publicly available transformer model trained on real clinical data other than MIMIC. Next, we present MSBC, a classifier that applies MS-BERT to generate embeddings and predict EDSS and functional subscores. Lastly, we explore combining MSBC with other models through the use of Snorkel to generate scores for unlabelled consult notes. MSBC achieves state-of-the-art performance on all metrics and prediction tasks and outperforms the models generated from the Snorkel ensemble. We improve Macro-F1 by 0.12 (to 0.88) for predicting EDSS and on average by 0.29 (to 0.63) for predicting functional subscores over previous Word2Vec CNN and rule-based approaches.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Multiple Sclerosis (MS) is a chronic, inflammatory and degenerative neurological disease, which is monitored by a specialist using the Expanded Disability Status Scale (EDSS) and recorded in unstructured text in the form of a neurology consult note. An EDSS measurement contains an overall 'EDSS' score and several functional subscores. Typically, expert knowledge is required to interpret consult notes and generate these scores. Previous approaches used limited context length Word2Vec embeddings and keyword searches to predict scores given a consult note, but often failed when scores were not explicitly stated. In this work, we present MS-BERT, the first publicly available transformer model trained on real clinical data other than MIMIC. Next, we present MSBC, a classifier that applies MS-BERT to generate embeddings and predict EDSS and functional subscores. Lastly, we explore combining MSBC with other models through the use of Snorkel to generate scores for unlabelled consult notes. MSBC achieves state-of-the-art performance on all metrics and prediction tasks and outperforms the models generated from the Snorkel ensemble. We improve Macro-F1 by 0.12 (to 0.88) for predicting EDSS and on average by 0.29 (to 0.63) for predicting functional subscores over previous Word2Vec CNN and rule-based approaches.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Recent advancements of deep learning models with electronic health records (EHR) have shown a great deal of success in many clinical applications (Shickel et al., 2017) , such as disease detection (Choi et al., 2016b) , diagnostics (Choi et al., 2017) , risk predictions (Futoma et al., 2015) and patient subtyping (Che et al., 2017a; Baytas et al., 2017) . However, when the data within the EHR is presented in the form of narrative, unstructured clinical notes, extensive work is required by a professional to diagnose and generate labels for a patient (PRATT, 1973) .",
"cite_spans": [
{
"start": 146,
"end": 168,
"text": "(Shickel et al., 2017)",
"ref_id": "BIBREF41"
},
{
"start": 197,
"end": 217,
"text": "(Choi et al., 2016b)",
"ref_id": "BIBREF9"
},
{
"start": 232,
"end": 251,
"text": "(Choi et al., 2017)",
"ref_id": "BIBREF10"
},
{
"start": 271,
"end": 292,
"text": "(Futoma et al., 2015)",
"ref_id": "BIBREF15"
},
{
"start": 315,
"end": 334,
"text": "(Che et al., 2017a;",
"ref_id": "BIBREF6"
},
{
"start": 335,
"end": 355,
"text": "Baytas et al., 2017)",
"ref_id": "BIBREF1"
},
{
"start": 555,
"end": 568,
"text": "(PRATT, 1973)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The development of pre-trained language models, namely Bidirectional Encoder Representations from Transformers (BERT), has significantly improved natural language processing (NLP) tasks within the general language domain (Devlin et al., 2018) . However, in specialized domains such as the clinical one, the vocabulary, syntax and semantics differ significantly from general language (Liu et al., 2012) and thus pretraining a language model on domain-specific texts is critical to improving performance. This is supported by the observed increase in performance on domain-specific NLP tasks when pretraining a BERT model on domain-specific texts (Peng et al., 2019; Alsentzer et al., 2019; Beltagy et al., 2019) . Take for example BlueBERT (Peng et al., 2019) , which has been further pretrained on over 4 billion words from PubMed abstracts and 500 million words from MIMIC-III (Johnson et al., 2016) and has been shown to outperform BERT on multilabel classification from the Hallmarks of Cancers corpus (Peng et al., 2019) .",
"cite_spans": [
{
"start": 221,
"end": 242,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF14"
},
{
"start": 383,
"end": 401,
"text": "(Liu et al., 2012)",
"ref_id": "BIBREF27"
},
{
"start": 645,
"end": 664,
"text": "(Peng et al., 2019;",
"ref_id": "BIBREF33"
},
{
"start": 665,
"end": 688,
"text": "Alsentzer et al., 2019;",
"ref_id": "BIBREF0"
},
{
"start": 689,
"end": 710,
"text": "Beltagy et al., 2019)",
"ref_id": "BIBREF2"
},
{
"start": 739,
"end": 758,
"text": "(Peng et al., 2019)",
"ref_id": "BIBREF33"
},
{
"start": 868,
"end": 900,
"text": "MIMIC-III (Johnson et al., 2016)",
"ref_id": null
},
{
"start": 1005,
"end": 1024,
"text": "(Peng et al., 2019)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Domain-specific language models, such as BlueBERT, still face several challenges for clinical NLP tasks. First, clinical texts must be de-identified of sensitive information, with the replacement of key tokens reducing the model's ability to interpret the text (Meystre et al., 2014) . Second, texts from a specific clinical application may contain unique sub-language that the model was not trained on, hindering the model's performance. Third, transformer models have a fixed context length of 512 tokens that is significantly shorter than the average length of clinical texts (Devlin et al., 2018) . As a result of truncating the text to fit the context length, the model is unable to analyze the entire text and may miss important information. These are the challenges of applying existing BERT models to specific clinical NLP tasks, which we have addressed through our contributions applied to a multiple sclerosis (MS) dataset.",
"cite_spans": [
{
"start": 261,
"end": 283,
"text": "(Meystre et al., 2014)",
"ref_id": "BIBREF29"
},
{
"start": 579,
"end": 600,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our contributions are as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "[1] A publicly available BERT-based model pre-trained on over 70,000 MS consult notes, which we call MS-BERT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "[2] A comprehensive pipeline for target predictions that integrates MS-BERT into a classifier, which we call MSBC. We apply MSBC to two tasks: (I) prediction of EDSS and functional subscores from neurological consult notes of MS patients and (II) generation of labels for an unlabelled consult note cohort.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "[3] Methods for data de-identification that preserve contextual information, optimized for fixed-context-length models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "[4] A novel approach to generate encounter level embeddings for documents larger than the BERT context window.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "[5] A semi-supervised labelling pipeline using the Snorkel framework (Ratner et al., 2017) that increased the training data available for EDSS prediction and provided a quantitative analysis of silver-labelling strategies on real clinical applications.",
"cite_spans": [
{
"start": 69,
"end": 90,
"text": "(Ratner et al., 2017)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "De-identification of clinical text. The consult notes used in this study contained sensitive information such as patient names, phone numbers, physician names and addresses. We de-identified the data using a curated database of patient and doctor information and regular expression matching. We replaced identifying pieces of information with specific tokens that met the following criteria: (1) the token was within the current BERT vocabulary, (2) the token had a similar semantic meaning to the word it replaced, and (3) the token was not found in the original data set. For example, all last names were replaced with \"Salamanca\". In doing so, we aimed to limit the loss of contextual information that results from de-identification. We also overcame challenges with sub-optimal placeholder replacements often present in clinical datasets, like MIMIC-III (Johnson et al., 2016) . As an example, MIMIC-III may replace a patient's last name with \"[**LAST NAME PLACEHOLDER**]\", which is tokenized by BERT into at least 7 tokens (one for each square bracket, one for each star and at least one for the place holder within the brackets). A list of our de-identification replacements can be found within the appendices (contribution [3] ).",
"cite_spans": [
{
"start": 848,
"end": 880,
"text": "MIMIC-III (Johnson et al., 2016)",
"ref_id": null
},
{
"start": 1216,
"end": 1233,
"text": "(contribution [3]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "2"
},
{
"text": "MS-BERT. We used the de-identified consult notes to pre-train a language model optimized for NLP tasks related to MS, namely MS-BERT. MS-BERT is a BERT model that uses BlueBERT (Peng et al., 2019) as its starting point, where BlueBERT is a BERT model pre-trained on PubMed abstracts & MIMIC-III note cohorts (Johnson et al., 2016) . We used a masked language modeling (MLM) pre-training task (Devlin et al., 2018) over all de-identified consult notes. The task used the bidirectional nature of the BERT model to predict a series of randomly selected masked tokens in a piece of text, allowing the model to learn the contextual meaning of the words in a sentence. This resulted in a language model that is optimized for understanding MS consult notes. The pre-trained MS-BERT model is publicly available and can be found here (contribution [1]).",
"cite_spans": [
{
"start": 177,
"end": 196,
"text": "(Peng et al., 2019)",
"ref_id": "BIBREF33"
},
{
"start": 308,
"end": 330,
"text": "(Johnson et al., 2016)",
"ref_id": "BIBREF20"
},
{
"start": 392,
"end": 413,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "2"
},
{
"text": "Encounter Level Embedding. We generated encounter level embeddings for each consult note to address issues related to the limited context length of transformer models. Most transformer models have a context length limited to a number of sub-word tokens (512 in the case of BERT (Devlin et al., 2018)); however, consult notes are often significantly longer. We separated consult notes longer than the context length into chunks of the maximum context length (in our case, 512 tokens). We then used MS-BERT to embed each chunk, resulting in a variable-length output sequence of 768-dimensional vectors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "2"
},
{
"text": "We explored 3 methods of converting the sequence of chunk level embeddings into a singular encounter level embedding: (1) taking the average across the sequence; (2) taking the max across the sequence; and (3) using a convolutional neural network (CNN) encoder based on Zhang and Wallace (2015) included in the AllenNLP library. For more details see Figure 1 .",
"cite_spans": [
{
"start": 270,
"end": 294,
"text": "Zhang and Wallace (2015)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [
{
"start": 350,
"end": 358,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Methods",
"sec_num": "2"
},
{
"text": "In preliminary testing, the first two options under-performed the CNN encoder by a large margin (\u223c60%), thus we proceeded with the third option. Our final CNN encoder consists of six 1D convolutions with kernels of size [2, 3, 4, 5, 6, 10] and 128 filters each, for a total of 768 dimensions in the output. This output is our final note embedding. We compared these full-length encounter level embeddings to embeddings that were generated using only a single context window (i.e. 512 tokens) and found that encounter level summaries were critical to model performance.",
"cite_spans": [
{
"start": 220,
"end": 223,
"text": "[2,",
"ref_id": null
},
{
"start": 224,
"end": 226,
"text": "3,",
"ref_id": null
},
{
"start": 227,
"end": 229,
"text": "4,",
"ref_id": null
},
{
"start": 230,
"end": 232,
"text": "5,",
"ref_id": null
},
{
"start": 233,
"end": 235,
"text": "6,",
"ref_id": null
},
{
"start": 236,
"end": 239,
"text": "10]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "2"
},
{
"text": "MSBC. Finally, we developed a custom classifier named MSBC (Multiple Sclerosis BERT CNN) to predict MS severity labels (EDSS or a functional subscore) using MS-BERT. MSBC is built using the AllenNLP (Gardner et al., 2017) framework. A breakdown of MSBC is as follows. MSBC first reads in a consult note, tokenizes the text using the BERT vocabulary and then splits the tokens into chunks of size 512. MS-BERT weights are applied to each token chunk and all chunks for a note are then passed into the CNN-based sequence-to-vector (Seq2Vec) encoder described above to pool the chunks and generate an encounter level embedding (i.e. a 1D vector of 768). This encounter level embedding is passed through 2 linear feed-forward layers, acting as a dimension reduction step, before finally being passed to a linear classification layer to predict a label for the note. Figure 1 shows an overview of MSBC's architecture.",
"cite_spans": [
{
"start": 199,
"end": 221,
"text": "(Gardner et al., 2017)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 862,
"end": 870,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Methods",
"sec_num": "2"
},
{
"text": "We trained and optimized MSBC for variables of interest, namely EDSS and functional subscores. Each note in the training set was passed through MSBC as described above. The resulting label was compared to the target label and a loss was computed. We used an AdamW optimizer to propagate errors back through the model, with a learning rate of 0.0005, weight decay of 0.01 and bias correction, with a binary cross-entropy loss function. We treat this as a classification problem instead of regression because EDSS is not uniform, i.e. the difference between 3 and 4 is not the same as between 4 and 5. We trained each model over 50 epochs using a batch size of 5 with 4 gradient accumulation steps. The model was saved at the end of each epoch if it had the best value for the validation metric. If during training the best validation metric was not beaten within 5 epochs, the trainer stopped early. A model for each prediction task was generated using MSBC and the train and validation sets described above. Once trained, we evaluated performance on the held-out test set. Code for our pipeline and experiments is available here (contributions [2,4,5]).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "2"
},
{
"text": "Semi-Supervised Labelling. Due to the costs of manually reviewing and labelling clinical texts, a significant majority of clinical texts in EHRs remain unlabelled (Garla et al., 2013) . To leverage the full potential of all clinical text available and generate pseudo-labels for unlabelled data, we explored semi-supervised labelling using the Snorkel framework (v 0.9.3) (Ratner et al., 2017) . Snorkel facilitates weak supervision of unlabelled data given weak heuristics and classifiers (i.e. labelling functions or LFs) (Ratner et al., 2016, 2017). Snorkel's Label Model, a generative model, combines the predictions and generates a single confidence-weighted label per data point. Snorkel does this by using the LFs' observed agreement and disagreement rates to estimate the unknown accuracies of the LFs. Snorkel then learns and models the accuracies of the LFs to combine the labels and generate the final label per data point (Ratner et al., 2019) . To identify the optimal combination of LFs to label the unlabelled notes, we evaluated the performance of task predictions on various Snorkel ensembles. The model that yielded the highest performance on our validation set was chosen to label the unlabelled notes.",
"cite_spans": [
{
"start": 163,
"end": 183,
"text": "(Garla et al., 2013)",
"ref_id": "BIBREF17"
},
{
"start": 372,
"end": 393,
"text": "(Ratner et al., 2017)",
"ref_id": "BIBREF37"
},
{
"start": 524,
"end": 544,
"text": "(Ratner et al., 2016",
"ref_id": "BIBREF38"
},
{
"start": 546,
"end": 551,
"text": "2017)",
"ref_id": "BIBREF37"
},
{
"start": 934,
"end": 955,
"text": "(Ratner et al., 2019)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "2"
},
{
"text": "We created two additional models using the MSBC architecture: MSBC+, trained on a combination of labelled and pseudo-labelled data and MSBC-silver, which is a model trained on only pseudo-labelled data. We pursued the development of MSBC-silver as an attempt to see if we could reconstruct our model without access to the original labelled data, similarly to Krishna et al. (2020) .",
"cite_spans": [
{
"start": 359,
"end": 380,
"text": "Krishna et al. (2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "2"
},
{
"text": "Multiple sclerosis (MS) is one of the most common non-traumatic disabling neurological conditions among young adults worldwide (Ploughman et al., 2014; Wade, 2014) . Onset of MS typically occurs between the ages of 20 and 40 years, with women more often affected than men (Ploughman et al., 2014) . MS is a disease that impacts the central nervous system (CNS) (Goldenberg, 2012), leading to the degradation of myelin sheathing and axons within the nervous system. This degradation is highly varied and unpredictable in both location and intensity within the body. Resulting symptoms include but are not limited to: visual impairment, loss of balance, numbness, bladder dysfunction and fatigue (Calabresi, 2004) . MS is typically monitored by the Expanded Disability Status Scale (EDSS) (Kurtzke, 1983) . EDSS is used to evaluate the degree of CNS impairment on a scale from 0 to 10. EDSS also includes eight functional subscores (Kurtzke, 1983 ) such as an ambulation score and a visual score. A full list of functional subscores is found within Table 2 and their respective descriptions can be found in the appendices.",
"cite_spans": [
{
"start": 127,
"end": 151,
"text": "(Ploughman et al., 2014;",
"ref_id": "BIBREF35"
},
{
"start": 152,
"end": 163,
"text": "Wade, 2014)",
"ref_id": "BIBREF42"
},
{
"start": 272,
"end": 296,
"text": "(Ploughman et al., 2014)",
"ref_id": "BIBREF35"
},
{
"start": 694,
"end": 711,
"text": "(Calabresi, 2004)",
"ref_id": "BIBREF5"
},
{
"start": 787,
"end": 802,
"text": "(Kurtzke, 1983)",
"ref_id": "BIBREF24"
},
{
"start": 930,
"end": 944,
"text": "(Kurtzke, 1983",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 1047,
"end": 1054,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "EDSS and functional subscores are discussed in a patient's consult note, dictated by a physician and manually transcribed. EDSS is determined by a combination of functional subscores and is typically stated within consult notes. However, functional subscores are not typically stated within a consult note and need to be derived from contextual information about the patient's health. Traditionally, both EDSS and functional subscores are manually derived by an expert within the field and logged into the patient's health record. Minute differences in patient descriptions can correspond to different EDSS and functional subscore values. Through consultation with MS healthcare professionals, we expect the qualitative descriptions of MS symptoms contained within the clinical notes to remain uniform across healthcare systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "The dataset, compiled by a leading MS research hospital, contains approximately 70,000 MS consult notes for about 5,000 patients, totaling over 35.7 million words. These notes were collected from patients who visited this hospital's MS clinic between 2015 and 2019. Of the 70,000 notes approximately 16,000 were manually labelled by a research assistant for EDSS and functional subscores. The gender split within the dataset was observed to be 72% female and 28% male as shown in Figure 2 , reflecting the natural discrepancy in MS (Harbo et al., 2013).",
"cite_spans": [],
"ref_spans": [
{
"start": 480,
"end": 488,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "3.1"
},
{
"text": "Once de-identified, the data was separated into labelled and unlabelled sets. The labelled set was further separated into test (\u223c30%), train (\u223c50%) and validation (\u223c20%) subsets. When designing the splits for our data, we wanted to ensure that we could accurately predict EDSS and functional subscores on new notes for both current and new patients, and to reduce any gender bias that may occur from population discrepancy. First, we stratified by gender. Then we either fully contained the notes of one patient within a subset or divided the patient's notes across subsets chronologically. This allowed earlier notes to be used for training, and later notes for validation and testing. Due to the de-identification of notes, the risk of information leakage between subsets is minimized.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3.1"
},
{
"text": "Previous Work. Previous approaches to extract information from MS consult notes have typically relied on keyword searches (Davis and Haines, 2015; Damotte et al., 2019) . We refer to the collection of these searches as the rule-based (RB) approach. Word2Vec embeddings used with a convolutional neural network (CNN), have been shown to be successful in clinical tasks such as creating explainable predictions of medical codes from clinical text (Mullenbach et al., 2018) . Previous work done at our affiliated MS hospital used Word2Vec embeddings and a CNN model to generate EDSS predictions. Best results were achieved by incorporating the RB approach with the Word2Vec CNN. This method first used the RB approach to extract keywords and phrases that infer EDSS scores. If the RB approach was unable to predict a score, then the prediction from the Word2Vec CNN model was used. More information on the development of the CNN model can be found in the appendices. In this work, we compared the performance on predicting EDSS and functional subscores between the: (1) Word2Vec CNN, (2) a sequential approach using RB plus Word2Vec CNN, (3) MSBC, and (4) a sequential approach using RB plus MSBC. Additional baselines were established with term frequency-inverse document frequency (tf-idf) features. These features have been successful in various clinical NLP tasks (Bhattarai et al., 2009; Narayan Shukla and Marlin, 2020; Boag et al., 2018) . A number of baseline models were developed on top of tf-idf features such as: support vector machines (SVM), logistic regression (LR) and linear discriminant analysis (LDA). Due to their poor performance on the easier task of predicting EDSS scores (see Table 1 ), they were not evaluated for the prediction of functional subscores.",
"cite_spans": [
{
"start": 122,
"end": 146,
"text": "(Davis and Haines, 2015;",
"ref_id": "BIBREF13"
},
{
"start": 147,
"end": 168,
"text": "Damotte et al., 2019)",
"ref_id": "BIBREF12"
},
{
"start": 445,
"end": 470,
"text": "(Mullenbach et al., 2018)",
"ref_id": "BIBREF31"
},
{
"start": 1365,
"end": 1389,
"text": "(Bhattarai et al., 2009;",
"ref_id": "BIBREF3"
},
{
"start": 1390,
"end": 1422,
"text": "Narayan Shukla and Marlin, 2020;",
"ref_id": "BIBREF32"
},
{
"start": 1423,
"end": 1441,
"text": "Boag et al., 2018)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 1698,
"end": 1705,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Experiment 1: EDSS and Functional Subscore Prediction",
"sec_num": "3.2"
},
{
"text": "Results. Our results for EDSS prediction are summarized in Table 1 and functional scores in Table 2 . MSBC achieves top performance on both tasks across all metrics. For EDSS prediction, Macro-F1 and Micro-F1 are improved by 0.11 and 0.043, respectively. For functional subscore prediction, we see a significant improvement of over 0.35 in Macro-F1 and almost 0.15 in Micro-F1.",
"cite_spans": [],
"ref_spans": [
{
"start": 59,
"end": 100,
"text": "Table 1 and functional scores in Table 2",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Experiment 1: EDSS and Functional Subscore Prediction",
"sec_num": "3.2"
},
{
"text": "Discussion. The significant improvement of MSBC, especially in Macro-F1, indicates that MS-BERT is better able to distinguish nuances within text that characterize different EDSS and functional subscores. Interestingly, the Word2Vec CNN outperformed BlueBERT, likely because Word2Vec was pre-trained on our corpus of text. In addition, our method of de-identifying data differs from that used for MIMIC-III (on which BlueBERT was pre-trained), which may have reduced BlueBERT's effectiveness. However, the contextually similar token replacement should limit this impact.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment 1: EDSS and Functional Subscore Prediction",
"sec_num": "3.2"
},
{
"text": "We see a strong improvement in functional subscore predictions over the baselines. While EDSS is stated directly in notes, functional subscores are typically referenced indirectly. This makes it more difficult for a rule-based approach and simple models to learn the contextual information required to assess scores. Furthermore, EDSS and functional subscores also suffer from a high level of disagreement among clinicians, particularly for the sensory and mental categories (Piri Cinar and Guven Yorgun, 2018). The level of disagreement is typically lower for EDSS scores greater than 5.5 and in general does not exceed 1. At two clinics examined, EDSS scores differed by 0.5 for up to 29% of patients and by 1 for up to 50% of patients. This level of subjectivity and variability within the true labels may make it difficult for the model to predict accurately. That said, due to the contextual awareness brought by MS-BERT, MSBC shows strong improvement over previous work when predicting functional subscores. Additionally, the labels for functional subscores were generated post-examination by trained clinicians based on the contents of notes. Therefore, missing information from notes led to missing labels for certain functional subscores, resulting in varying levels of support for different scores. MSBC under-performed on classes with low support. The bottom 25% of classes in terms of support averaged an F1 score of 0.78, which was 0.1 lower than the mean for all classes. However, classes with low support are typical of EDSS due to its bi-modal distribution (Meyer-Moock et al., 2014) . This is a result of the non-linear method of determining EDSS based on certain heuristics and conditions (i.e. the difference between an EDSS score of 3 and 4 is not the same as between 4 and 5).",
"cite_spans": [
{
"start": 1573,
"end": 1599,
"text": "(Meyer-Moock et al., 2014)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment 1: EDSS and Functional Subscore Prediction",
"sec_num": "3.2"
},
{
"text": "To help understand why and when rule-based approaches failed, we looked at the performance of the models only on notes for which rule-based approaches were not able to label EDSS scores (see appendices). These accounted for around 12% of the notes, and we see very poor performance for all other models, with F1 scores below 0.36 (and very high F1 scores for those the rule-based approaches were able to label), while MSBC is still able to achieve an F1 score above 0.6. This may indicate that a certain portion of notes contain poor-quality information and are \"trickier\" to label. These \"tricky\" notes could, for example, state \"no change\" or \"similar\" results relative to past notes without restating the scores. However, we posit that MSBC was still able to outperform other models through its ability to understand contextual information embedded in the text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment 1: EDSS and Functional Subscore Prediction",
"sec_num": "3.2"
},
{
"text": "Labelling of EDSS",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment 2: Semi-Supervised",
"sec_num": "3.3"
},
{
"text": "We evaluated the effectiveness of the Snorkel ensembles and compared the performance of: (1) MSBC (which has been observed in Experiment 1),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment 2: Semi-Supervised",
"sec_num": "3.3"
},
{
"text": "(2) MSBC+, and (3) MSBC-silver. We hereon refer to two types of labels: (1) gold labels (n\u223c16,000), which were manually obtained by a professional at our MS clinic and are considered truth in our experiments, and (2) silver labels (n\u223c54,000), which were generated from the model chosen for EDSS labelling.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment 2: Semi-Supervised",
"sec_num": "3.3"
},
{
"text": "Results. Various Snorkel ensembles were evaluated as presented in Table 3 . Only the LF combinations that included MSBC were evaluated as MSBC had the best EDSS prediction performance. From the F1 scores, we observe that MSBC alone outperforms all ensembles that contain MSBC by at least 0.02 on Macro-F1. The addition of weaker classifiers consistently decreased the ensemble's performance. Furthermore, we observe that the amount of conflict for MSBC (i.e. fraction of data MSBC disagrees with for at least one other LF) increases as weaker classifiers are added to the ensemble.",
"cite_spans": [],
"ref_spans": [
{
"start": 66,
"end": 73,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experiment 2: Semi-Supervised",
"sec_num": "3.3"
},
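The conflict statistic reported in Table 3 can be sketched as follows: for a primary labelling function (LF), count the fraction of examples on which at least one other non-abstaining LF votes differently. The LF outputs below are toy values, with `None` marking an abstention; the function names are illustrative, not the paper's code.

```python
def conflict_fraction(primary, others):
    """Fraction of examples on which the primary LF disagrees with
    at least one other labelling function (None = abstention, ignored)."""
    conflicts = 0
    for i, label in enumerate(primary):
        votes = [lf[i] for lf in others if lf[i] is not None]
        if any(v != label for v in votes):
            conflicts += 1
    return conflicts / len(primary)

# Toy EDSS votes from three labelling functions over four notes.
msbc  = [4.0, 6.5, 2.0, 0.0]
rule  = [4.0, None, 2.5, 0.0]   # rule-based LF abstains on the second note
tfidf = [4.0, 6.0, 2.0, 1.0]

small = conflict_fraction(msbc, [rule])          # 0.25
large = conflict_fraction(msbc, [rule, tfidf])   # 0.75
```

Consistent with the observation above, adding a weaker LF to the ensemble raises the conflict fraction seen by the strong LF.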
{
"text": "From the above analysis, we concluded that MSBC alone, out of all Snorkel ensembles, performs the best and therefore was chosen to generate silver-labels for the unlabelled neurology notes. Various models were trained using the MSBC architecture and are presented in Table 4 . The best version of MSBC was the model trained solely on gold label data (our original MSBC). Macro-F1 score and Micro-F1 score are observed to drop in MSBC+. MSBC-silver was the worst out of the 3 variations with a Macro-F1 of 0.83 and Micro-F1 of 0.91 but is still observed to outperform the previous best baseline (RB+Word2Vec CNN presented in Table 1 ) by an approximate Macro and Micro-F1 of 0.06 and 0.02 respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 267,
"end": 274,
"text": "Table 4",
"ref_id": "TABREF3"
},
{
"start": 624,
"end": 631,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Experiment 2: Semi-Supervised",
"sec_num": "3.3"
},
{
"text": "Discussion. MSBC alone performs better than all Snorkel ensembles. The performance of the ensembles consistently decreased as more weak classifiers and heuristics were added. We hypothesize that the drop in performance is due to the fact that the Snorkel's Label Model learns to predict the accuracy of the LFs based on observed agreements and disagreements. It also assumes conditional independence among the LFs (Ratner et al., 2019) . This result is not surprising given that the qualitative analysis of errors showed that MSBC was almost strictly an improvement over the Rule-Based approach. MSBC only struggled with notes that had EDSS indicated in the roman numeral 'iv' (which could be misconstrued to be the lower-case acronym for intravenous) and notes where patient complaints of their symptoms were contained in a different note chunk than the physician findings which contradicted those symptoms. In all other cases, the model made no significant (off by no more than 0.5-1 on the EDSS scale) errors compared to the weak heuristics. Therefore in the presence of a strong LF, such as MSBC, we suspect that the addition of weaker LFs introduce disagreements with MSBC and thus decreased predictive performance. Furthermore, all LFs were developed based on the same labelled training data (for example, tf-idf models were trained on the same training set). Hence, it is likely that the LFs were correlated, which violated the conditional independence assumption made by Snorkel and compromised prediction accuracy.",
"cite_spans": [
{
"start": 414,
"end": 435,
"text": "(Ratner et al., 2019)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment 2: Semi-Supervised",
"sec_num": "3.3"
},
{
"text": "Our model trained on silver labeled data, MSBCsilver, performs worse than MSBC by 0.03-0.06. This small decrease in performance indicates that our model is able to relearn its own distribution and helps validate its performance. MSBC-silver outperformed all previous baselines on the EDSS prediction task. The strong results of MSBC-silver helps show the effectiveness of using MSBC as a labelling function. This work shows potential to reduce tedious hours required by a professional to read through a patient's consult note and manually generate an EDSS score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment 2: Semi-Supervised",
"sec_num": "3.3"
},
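The gold-to-silver pipeline evaluated in this experiment can be sketched as a two-stage loop: train a labelling model on the gold labels, apply it to the unlabelled pool to produce silver labels, then retrain a fresh model on the silver labels alone. The `train` stand-in below is a hypothetical memorizing classifier, not the actual MSBC architecture, and the note snippets are synthetic.

```python
from collections import Counter

def train(texts, labels):
    """Hypothetical stand-in for MSBC training: memorize seen notes and
    fall back to the majority label for unseen ones."""
    fallback = Counter(labels).most_common(1)[0][0]
    lookup = dict(zip(texts, labels))
    return lambda t: lookup.get(t, fallback)

gold_texts  = ["edss 2.0 stable", "edss 6.5 progressive", "edss 2.0 today"]
gold_labels = [2.0, 6.5, 2.0]
unlabelled  = ["edss 2.0 stable", "no change since last visit"]

msbc = train(gold_texts, gold_labels)             # model fit on gold labels
silver_labels = [msbc(t) for t in unlabelled]     # label the unlabelled pool
msbc_silver = train(unlabelled, silver_labels)    # retrain on silver labels only
```

As in the paper, the silver-trained model inherits the labelling model's distribution, so a modest performance drop relative to the gold-trained model is expected.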
{
"text": "In this work we present methods to overcome the challenges that arise when applying a modern transformer model on a specific clinical NLP task, specifically MS severity prediction. We did this through: (1) de-identifying clinical texts in a (2) generating encounter level embeddings to eliminate loss of information resulting from the limited context length of transformer models;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concluding Thoughts",
"sec_num": "4"
},
{
"text": "(3) further pretraining a BERT model on MS consult notes to build a language model (MS-BERT) with better understanding of MS clinical notes; (4) developing a classifier (MSBC) that uses MS-BERT to achieve state of the art performance on predicting EDSS and functional subscores; and (5) using our classifier to generate labels for previously unlabelled data, showing its effectiveness as a labelling model. We believe that the MS-BERT language model and its improved ability to understand MS consult notes will aid clinicians in the diagnosis and treatment of MS. Furthermore, we believe that being trained on more clinical text, MS-BERT has the potential to improve other NLP tasks within the clinical domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concluding Thoughts",
"sec_num": "4"
},
{
"text": "First, we are in the process of implementing an interpretability module that would provide perword attentions instead of the per-sub-word-token attentions available out-of-the-box. Second, we want to evaluate MS-BERT's performance on other language tasks such as relation extraction, sentence similarity, inference tasks, and question answering within the clinical space. Third, we would like to experiment with other note-level embeddings and model architectures, such as the CNN presented by Kim 2014 (Kim, 2014 . While we are pleased with the performance of MSBC, we would like to demonstrate that our approach (the methods for de-identifying data, fine-tuning a language model, the generation of encounter level embeddings and our custom classifier) can be applied on other clinical datasets. Also, we would like to pre-train longer context transformer models such as the Reformer (Kitaev et al., 2020) which targets longer context windows and compare it to Clinical BERT which is tailored for the clinical domain (Alsentzer et al., 2019) . Finally, we would like to see if using token level embeddings as inputs to our CNN encoder, along with replacing some tokens with more clinically relevant ones in the base BERT vocabulary could improve encounter level embedding quality.",
"cite_spans": [
{
"start": 494,
"end": 502,
"text": "Kim 2014",
"ref_id": "BIBREF21"
},
{
"start": 503,
"end": 513,
"text": "(Kim, 2014",
"ref_id": "BIBREF21"
},
{
"start": 885,
"end": 906,
"text": "(Kitaev et al., 2020)",
"ref_id": "BIBREF22"
},
{
"start": 1018,
"end": 1042,
"text": "(Alsentzer et al., 2019)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Future Work",
"sec_num": "4.1"
},
{
"text": "A De-identification of Clinical Text We trained a number of baseline models on top of our tf-idf features, finding that our max feature space was optimal at 1500 tokens. After hyper-parameter tuning our tf-idf baseline models, we observed that the following performed best for predicting EDSS scores:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future Work",
"sec_num": "4.1"
},
{
"text": "\u2022 Support vector classification (SVC) with tuned regularization parameter 'C' equal to 1. Both linear and radial basis function (RBF) kernels were generated based on their strong performance in this classification task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future Work",
"sec_num": "4.1"
},
{
"text": "\u2022 Linear discriminate analysis (LDA) with a singular value decomposition solver.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future Work",
"sec_num": "4.1"
},
{
"text": "\u2022 Logistic regression (LR) using a limited-memory BFGS (lbfgs) solver with 'l2' regularization and inverse regularization strength, 'C', equal to 100. This model also considered class weights within the training set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future Work",
"sec_num": "4.1"
},
{
"text": "Word2Vec and Convolution Neural Networks Word2Vec models (Mikolov et al., 2013; Choi et al., 2016a ) take a corpus of text and learn vector representations, called embeddings, for each word (Che et al., 2017b) . Words with similar context have been observed to have close embeddings in the vector space.",
"cite_spans": [
{
"start": 57,
"end": 79,
"text": "(Mikolov et al., 2013;",
"ref_id": "BIBREF30"
},
{
"start": 80,
"end": 98,
"text": "Choi et al., 2016a",
"ref_id": "BIBREF8"
},
{
"start": 190,
"end": 209,
"text": "(Che et al., 2017b)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Future Work",
"sec_num": "4.1"
},
{
"text": "CNNs have been observed to work well in a variety of clinical tasks. For example, CNN architectures have proved successful in relation extraction (Sahu et al., 2016) , risk prediction (Che et al., 2017b) , the extraction of medical events from clinical notes (Li and Huang, 2016) , and clinical named entity recognition .",
"cite_spans": [
{
"start": 146,
"end": 165,
"text": "(Sahu et al., 2016)",
"ref_id": "BIBREF40"
},
{
"start": 184,
"end": 203,
"text": "(Che et al., 2017b)",
"ref_id": "BIBREF7"
},
{
"start": 259,
"end": 279,
"text": "(Li and Huang, 2016)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Future Work",
"sec_num": "4.1"
},
{
"text": "Previous work done at our collaborating hospital used a 200-dimensional Word2Vec embedding trained on all MS consult notes (n=75,009) with a window size of 10 and a minimum count of 2. Next, they converted all tokenized notes into their word vector representations. While doing so, they set a maximum note length of 1,000 tokens and zero padded notes as necessary. They then designed a 3-dimensional input sequence (batch size x 1000 x 200). This input sequence was fed into a Keras (Chollet et al., 2015) implementation of the CNN architecture described by Kim 2014 (Kim, 2014 . Finally, using convolutional layers (with max pooling), and fully connected layers (with softmax output), they trained their CNN model using the RMSProp optimizer with early stopping. E Performance of MSBC on 'Tricky' Notes ",
"cite_spans": [
{
"start": 483,
"end": 505,
"text": "(Chollet et al., 2015)",
"ref_id": null
},
{
"start": 558,
"end": 566,
"text": "Kim 2014",
"ref_id": "BIBREF21"
},
{
"start": 567,
"end": 577,
"text": "(Kim, 2014",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Future Work",
"sec_num": "4.1"
}
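The note-to-tensor step described above (1,000-token cap, 200-dimensional vectors, zero padding for short notes) can be sketched with NumPy. The toy embedding table below is a hypothetical stand-in for the trained Word2Vec model; out-of-vocabulary tokens map to the zero vector.

```python
import numpy as np

MAX_LEN, DIM = 1000, 200  # note length cap and embedding dimension from the paper

def notes_to_tensor(tokenized_notes, embeddings):
    """Map tokenized notes to a (batch, MAX_LEN, DIM) array,
    truncating long notes and zero-padding short ones."""
    batch = np.zeros((len(tokenized_notes), MAX_LEN, DIM), dtype=np.float32)
    for i, tokens in enumerate(tokenized_notes):
        for j, tok in enumerate(tokens[:MAX_LEN]):
            batch[i, j] = embeddings.get(tok, np.zeros(DIM))
    return batch

# Hypothetical toy embedding table standing in for the Word2Vec vectors.
emb = {"edss": np.ones(DIM), "stable": np.full(DIM, 0.5)}
x = notes_to_tensor([["edss", "stable"], ["edss"]], emb)
```

The resulting array matches the (batch size x 1000 x 200) input sequence fed to the Keras CNN.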
],
"back_matter": [
{
"text": "We would like to thank the researchers and staff at the Data Science and Advanced Analytics (DSAA) team at St. Michael's Hospital, for providing consistent support and guidance throughout this project. We would also like to thank Dr. Marzyeh Ghassemi, and Taylor Killan for providing us the opportunity to work on this exciting project. Lastly, we would like to thank Dr. Tony Antoniou and Dr. Jiwon Oh from the MS clinic at St. Michael's Hospital for their support on the neurological examination notes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": "5"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Publicly available clinical BERT embeddings",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Alsentzer",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Murphy",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Boag",
"suffix": ""
},
{
"first": "Wei-Hung",
"middle": [],
"last": "Weng",
"suffix": ""
},
{
"first": "Di",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Tristan",
"middle": [],
"last": "Naumann",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Mcdermott",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2nd Clinical Natural Language Processing Workshop",
"volume": "",
"issue": "",
"pages": "72--78",
"other_ids": {
"DOI": [
"10.18653/v1/W19-1909"
]
},
"num": null,
"urls": [],
"raw_text": "Emily Alsentzer, John Murphy, William Boag, Wei- Hung Weng, Di Jin, Tristan Naumann, and Matthew McDermott. 2019. Publicly available clinical BERT embeddings. In Proceedings of the 2nd Clinical Nat- ural Language Processing Workshop, pages 72-78, Minneapolis, Minnesota, USA. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Patient subtyping via time-aware lstm networks",
"authors": [
{
"first": "M",
"middle": [],
"last": "Inci",
"suffix": ""
},
{
"first": "Cao",
"middle": [],
"last": "Baytas",
"suffix": ""
},
{
"first": "Xi",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Anil",
"suffix": ""
},
{
"first": "Jiayu",
"middle": [],
"last": "Jain",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 23rd ACM SIGKDD international conference on knowledge discovery and data mining",
"volume": "",
"issue": "",
"pages": "65--74",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Inci M Baytas, Cao Xiao, Xi Zhang, Fei Wang, Anil K Jain, and Jiayu Zhou. 2017. Patient subtyping via time-aware lstm networks. In Proceedings of the 23rd ACM SIGKDD international conference on knowledge discovery and data mining, pages 65-74.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Scibert: Pretrained language model for scientific text",
"authors": [
{
"first": "Iz",
"middle": [],
"last": "Beltagy",
"suffix": ""
},
{
"first": "Kyle",
"middle": [],
"last": "Lo",
"suffix": ""
},
{
"first": "Arman",
"middle": [],
"last": "Cohan",
"suffix": ""
}
],
"year": 2019,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. Scib- ert: Pretrained language model for scientific text. In EMNLP.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Classification of Clinical Conditions : A Case Study on Prediction of Obesity and Its Comorbidities",
"authors": [
{
"first": "Archana",
"middle": [],
"last": "Bhattarai",
"suffix": ""
},
{
"first": "Dipankar",
"middle": [],
"last": "Vasile Rus",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dasgupta",
"suffix": ""
}
],
"year": 2009,
"venue": "Science",
"volume": "",
"issue": "",
"pages": "183--194",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Archana Bhattarai, Vasile Rus, and Dipankar Dasgupta. 2009. Classification of Clinical Conditions : A Case Study on Prediction of Obesity and Its Co- morbidities. Science, pages 183-194.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "What's in a Note? Unpacking Predictive Value in Clinical Note Representations",
"authors": [
{
"first": "Willie",
"middle": [],
"last": "Boag",
"suffix": ""
},
{
"first": "Dustin",
"middle": [],
"last": "Doss",
"suffix": ""
},
{
"first": "Tristan",
"middle": [],
"last": "Naumann",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Szolovits",
"suffix": ""
}
],
"year": 2017,
"venue": "AMIA Joint Summits on Translational Science proceedings. AMIA Joint Summits on Translational Science",
"volume": "",
"issue": "",
"pages": "26--34",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Willie Boag, Dustin Doss, Tristan Naumann, and Pe- ter Szolovits. 2018. What's in a Note? Unpacking Predictive Value in Clinical Note Representations. AMIA Joint Summits on Translational Science pro- ceedings. AMIA Joint Summits on Translational Sci- ence, 2017:26-34.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Diagnosis and management of multiple sclerosis",
"authors": [
{
"first": "A",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Calabresi",
"suffix": ""
}
],
"year": 2004,
"venue": "American Family Physician",
"volume": "70",
"issue": "10",
"pages": "1935--1944",
"other_ids": {
"DOI": [
"10.1212/wnl.58.8_suppl_4.s23"
]
},
"num": null,
"urls": [],
"raw_text": "Peter A. Calabresi. 2004. Diagnosis and management of multiple sclerosis. American Family Physician, 70(10):1935-1944.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "An rnn architecture with dynamic temporal matching for personalized predictions of parkinson's disease",
"authors": [
{
"first": "Cao",
"middle": [],
"last": "Chao Che",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Jiayu",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Zho",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 SIAM International Conference on Data Mining",
"volume": "",
"issue": "",
"pages": "198--206",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chao Che, Cao Xiao, Jian Liang, Bo Jin, Jiayu Zho, and Fei Wang. 2017a. An rnn architecture with dy- namic temporal matching for personalized predic- tions of parkinson's disease. In Proceedings of the 2017 SIAM International Conference on Data Min- ing, pages 198-206. SIAM.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Exploiting Convolutional Neural Network for Risk Prediction with Medical Feature Embedding",
"authors": [
{
"first": "Zhengping",
"middle": [],
"last": "Che",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Zhaonan",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Yan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhengping Che, Yu Cheng, Zhaonan Sun, and Yan Liu. 2017b. Exploiting Convolutional Neural Network for Risk Prediction with Medical Feature Embedding.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Multi-layer Representation Learning for Medical Concepts",
"authors": [
{
"first": "Edward",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [
"Taha"
],
"last": "Bahadori",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Searles",
"suffix": ""
},
{
"first": "Catherine",
"middle": [],
"last": "Coffey",
"suffix": ""
},
{
"first": "Jimeng",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining",
"volume": "",
"issue": "",
"pages": "1495--1504",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edward Choi, Mohammad Taha Bahadori, Elizabeth Searles, Catherine Coffey, and Jimeng Sun. 2016a. Multi-layer Representation Learning for Medical Concepts. Proceedings of the ACM SIGKDD In- ternational Conference on Knowledge Discovery and Data Mining, 13-17-August-2016:1495-1504.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Retain: An interpretable predictive model for healthcare using reverse time attention mechanism",
"authors": [
{
"first": "Edward",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [
"Taha"
],
"last": "Bahadori",
"suffix": ""
},
{
"first": "Jimeng",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Joshua",
"middle": [],
"last": "Kulas",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Schuetz",
"suffix": ""
},
{
"first": "Walter",
"middle": [],
"last": "Stewart",
"suffix": ""
}
],
"year": 2016,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "3504--3512",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edward Choi, Mohammad Taha Bahadori, Jimeng Sun, Joshua Kulas, Andy Schuetz, and Walter Stewart. 2016b. Retain: An interpretable predictive model for healthcare using reverse time attention mecha- nism. In Advances in Neural Information Processing Systems, pages 3504-3512.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Using recurrent neural network models for early detection of heart failure onset",
"authors": [
{
"first": "Edward",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Schuetz",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Walter",
"suffix": ""
},
{
"first": "Jimeng",
"middle": [],
"last": "Stewart",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2017,
"venue": "Journal of the American Medical Informatics Association",
"volume": "24",
"issue": "2",
"pages": "361--370",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edward Choi, Andy Schuetz, Walter F Stewart, and Jimeng Sun. 2017. Using recurrent neural network models for early detection of heart failure onset. Jour- nal of the American Medical Informatics Association, 24(2):361-370.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Harnessing electronic medical records to advance research on multiple sclerosis. Multiple sclerosis (Houndmills",
"authors": [
{
"first": "Vincent",
"middle": [],
"last": "Damotte",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Liz\u00e9e",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Tremblay",
"suffix": ""
},
{
"first": "Alisha",
"middle": [],
"last": "Agrawal",
"suffix": ""
},
{
"first": "Pouya",
"middle": [],
"last": "Khankhanian",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Santaniello",
"suffix": ""
},
{
"first": "Refujia",
"middle": [],
"last": "Gomez",
"suffix": ""
},
{
"first": "Robin",
"middle": [],
"last": "Lincoln",
"suffix": ""
},
{
"first": "Wendy",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Tiffany",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Nelson",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Pablo",
"middle": [],
"last": "Villoslada",
"suffix": ""
},
{
"first": "Jill",
"middle": [
"A"
],
"last": "Hollenbach",
"suffix": ""
},
{
"first": "Carolyn",
"middle": [
"D"
],
"last": "Bevan",
"suffix": ""
},
{
"first": "Jennifer",
"middle": [],
"last": "Graves",
"suffix": ""
},
{
"first": "Riley",
"middle": [],
"last": "Bove",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Douglas",
"suffix": ""
},
{
"first": "Ari",
"middle": [
"J"
],
"last": "Goodin",
"suffix": ""
},
{
"first": "Sergio",
"middle": [
"E"
],
"last": "Green",
"suffix": ""
},
{
"first": "Bruce",
"middle": [
"Ac"
],
"last": "Baranzini",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Cree",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Roland",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Henry",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Stephen",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [
"M"
],
"last": "Hauser",
"suffix": ""
},
{
"first": "Pierre-Antoine",
"middle": [],
"last": "Gelfand",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Gourraud",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "25",
"issue": "",
"pages": "408--418",
"other_ids": {
"DOI": [
"10.1177/1352458517747407"
]
},
"num": null,
"urls": [],
"raw_text": "Vincent Damotte, Antoine Liz\u00e9e, Matthew Tremblay, Alisha Agrawal, Pouya Khankhanian, Adam San- taniello, Refujia Gomez, Robin Lincoln, Wendy Tang, Tiffany Chen, Nelson Lee, Pablo Villoslada, Jill A Hollenbach, Carolyn D Bevan, Jennifer Graves, Riley Bove, Douglas S Goodin, Ari J Green, Ser- gio E Baranzini, Bruce Ac Cree, Roland G Henry, Stephen L Hauser, Jeffrey M Gelfand, and Pierre- Antoine Gourraud. 2019. Harnessing electronic med- ical records to advance research on multiple sclerosis. Multiple sclerosis (Houndmills, Basingstoke, Eng- land), 25(3):408-418.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "The intelligent use and clinical benefits of electronic medical records in multiple sclerosis",
"authors": [
{
"first": "Mary",
"middle": [
"F"
],
"last": "Davis",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [
"L"
],
"last": "Haines",
"suffix": ""
}
],
"year": 2015,
"venue": "Expert Review of Clinical Immunology",
"volume": "11",
"issue": "2",
"pages": "205--211",
"other_ids": {
"DOI": [
"10.1586/1744666X.2015.991314"
]
},
"num": null,
"urls": [],
"raw_text": "Mary F. Davis and Jonathan L. Haines. 2015. The intel- ligent use and clinical benefits of electronic medical records in multiple sclerosis. Expert Review of Clini- cal Immunology, 11(2):205-211.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of Deep Bidirectional Transformers for Language Un- derstanding.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A comparison of models for predicting early hospital readmissions",
"authors": [
{
"first": "Joseph",
"middle": [],
"last": "Futoma",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Morris",
"suffix": ""
},
{
"first": "Joseph",
"middle": [],
"last": "Lucas",
"suffix": ""
}
],
"year": 2015,
"venue": "Journal of Biomedical Informatics",
"volume": "56",
"issue": "",
"pages": "229--238",
"other_ids": {
"DOI": [
"10.1016/j.jbi.2015.05.016"
]
},
"num": null,
"urls": [],
"raw_text": "Joseph Futoma, Jonathan Morris, and Joseph Lucas. 2015. A comparison of models for predicting early hospital readmissions. Journal of Biomedical Infor- matics, 56:229-238.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Allennlp: A deep semantic natural language processing platform",
"authors": [
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Joel",
"middle": [],
"last": "Grus",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Oyvind",
"middle": [],
"last": "Tafjord",
"suffix": ""
},
{
"first": "Pradeep",
"middle": [],
"last": "Dasigi",
"suffix": ""
},
{
"first": "Nelson",
"middle": [
"F"
],
"last": "Liu",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Schmitz",
"suffix": ""
},
{
"first": "Luke",
"middle": [
"S"
],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Peters, Michael Schmitz, and Luke S. Zettlemoyer. 2017. Allennlp: A deep semantic natural language processing platform.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Semi-supervised clinical text classification with laplacian svms: An application to cancer case management",
"authors": [
{
"first": "Vijay",
"middle": [],
"last": "Garla",
"suffix": ""
},
{
"first": "Caroline",
"middle": [],
"last": "Taylor",
"suffix": ""
},
{
"first": "Cynthia",
"middle": [],
"last": "Brandt",
"suffix": ""
}
],
"year": 2013,
"venue": "Journal of Biomedical Informatics",
"volume": "46",
"issue": "5",
"pages": "869--875",
"other_ids": {
"DOI": [
"10.1016/j.jbi.2013.06.014"
]
},
"num": null,
"urls": [],
"raw_text": "Vijay Garla, Caroline Taylor, and Cynthia Brandt. 2013. Semi-supervised clinical text classification with laplacian svms: An application to cancer case management. Journal of Biomedical Informatics, 46(5):869-875.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Multiple sclerosis review",
"authors": [
{
"first": "M",
"middle": [],
"last": "Marvin",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Goldenberg",
"suffix": ""
}
],
"year": 2012,
"venue": "P and T",
"volume": "37",
"issue": "3",
"pages": "175--184",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marvin M. Goldenberg. 2012. Multiple sclerosis review. P and T, 37(3):175-184.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Sex and gender issues in multiple sclerosis",
"authors": [
{
"first": "F",
"middle": [],
"last": "Hanne",
"suffix": ""
},
{
"first": "Ralf",
"middle": [],
"last": "Harbo",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Gold",
"suffix": ""
}
],
"year": 2013,
"venue": "Therapeutic Advances in Neurological Disorders",
"volume": "6",
"issue": "4",
"pages": "237--248",
"other_ids": {
"DOI": [
"10.1177/1756285613488434"
]
},
"num": null,
"urls": [],
"raw_text": "Hanne F. Harbo, Ralf Gold, and Mar Tintora. 2013. Sex and gender issues in multiple sclerosis. Therapeutic Advances in Neurological Disorders, 6(4):237-248.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "MIMIC-III, a freely accessible critical care database",
"authors": [
{
"first": "E",
"middle": [
"W"
],
"last": "Alistair",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Tom",
"suffix": ""
},
{
"first": "Lu",
"middle": [],
"last": "Pollard",
"suffix": ""
},
{
"first": "H Lehman",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Mengling",
"middle": [],
"last": "Li-Wei",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Ghassemi",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Moody",
"suffix": ""
},
{
"first": "Leo",
"middle": [
"Anthony"
],
"last": "Szolovits",
"suffix": ""
},
{
"first": "Roger G",
"middle": [],
"last": "Celi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mark",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "3",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alistair EW Johnson, Tom J Pollard, Lu Shen, H Lehman Li-wei, Mengling Feng, Moham- mad Ghassemi, Benjamin Moody, Peter Szolovits, Leo Anthony Celi, and Roger G Mark. 2016. MIMIC-III, a freely accessible critical care database. Scientific data, 3:160035.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Convolutional neural networks for sentence classification",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2014,
"venue": "EMNLP 2014 -2014 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference",
"volume": "",
"issue": "",
"pages": "1746--1751",
"other_ids": {
"DOI": [
"10.3115/v1/d14-1181"
]
},
"num": null,
"urls": [],
"raw_text": "Yoon Kim. 2014. Convolutional neural networks for sentence classification. In EMNLP 2014 -2014 Con- ference on Empirical Methods in Natural Language Processing, Proceedings of the Conference, pages 1746-1751.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Reformer: The efficient transformer",
"authors": [
{
"first": "Nikita",
"middle": [],
"last": "Kitaev",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Anselm",
"middle": [],
"last": "Levskaya",
"suffix": ""
}
],
"year": 2020,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikita Kitaev, Lukasz Kaiser, and Anselm Levskaya. 2020. Reformer: The efficient transformer. In Inter- national Conference on Learning Representations.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Nicolas Papernot, and Mohit Iyyer. 2020. Thieves on sesame street! model extraction of bert-based apis",
"authors": [
{
"first": "Kalpesh",
"middle": [],
"last": "Krishna",
"suffix": ""
},
{
"first": "Gaurav",
"middle": [],
"last": "Singh Tomar",
"suffix": ""
},
{
"first": "Ankur",
"middle": [
"P"
],
"last": "Parikh",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Papernot",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
}
],
"year": null,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kalpesh Krishna, Gaurav Singh Tomar, Ankur P. Parikh, Nicolas Papernot, and Mohit Iyyer. 2020. Thieves on sesame street! model extraction of bert-based apis. In International Conference on Learning Representa- tions.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Rating neurologic impairment in multiple sclerosis: An expanded disability status scale (EDSS)",
"authors": [
{
"first": "John",
"middle": [
"F"
],
"last": "Kurtzke",
"suffix": ""
}
],
"year": 1983,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1212/wnl.33.11.1444"
]
},
"num": null,
"urls": [],
"raw_text": "John F. Kurtzke. 1983. Rating neurologic impairment in multiple sclerosis: An expanded disability status scale (EDSS). Technical Report 11.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Biobert: a pre-trained biomedical language representation model for biomedical text mining",
"authors": [
{
"first": "Jinhyuk",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Wonjin",
"middle": [],
"last": "Yoon",
"suffix": ""
},
{
"first": "Sungdong",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Donghyeon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Sunkyu",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Chan",
"middle": [],
"last": "Ho So",
"suffix": ""
},
{
"first": "Jaewoo",
"middle": [],
"last": "Kang",
"suffix": ""
}
],
"year": 2019,
"venue": "Bioinformatics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1093/bioinformatics/btz682"
]
},
"num": null,
"urls": [],
"raw_text": "Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2019. Biobert: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Clinical Information Extraction via Convolutional Neural Network",
"authors": [
{
"first": "Peng",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Heng",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peng Li and Heng Huang. 2016. Clinical Information Extraction via Convolutional Neural Network.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Natural Language Processing, Electronic Health Records, and Clinical Research. Clinical Research Informatics",
"authors": [
{
"first": "F",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Feifan",
"middle": [],
"last": "Weng",
"suffix": ""
},
{
"first": "Chunhua",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Hong",
"middle": [],
"last": "Weng",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "293--310",
"other_ids": {
"DOI": [
"10.1007/978-1-84882-448-5_16"
]
},
"num": null,
"urls": [],
"raw_text": "F Liu, H Yu, C Weng, Feifan Liu, Chunhua Weng, and Hong Yu. 2012. Natural Language Processing, Elec- tronic Health Records, and Clinical Research. Clini- cal Research Informatics, pages 293-310.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Systematic literature review and validity evaluation of the Expanded Disability Status Scale (EDSS) and the Multiple Sclerosis Functional Composite (MSFC) in patients with multiple sclerosis",
"authors": [
{
"first": "Sandra",
"middle": [],
"last": "Meyer-Moock",
"suffix": ""
},
{
"first": "You",
"middle": [],
"last": "Shan Feng",
"suffix": ""
},
{
"first": "Mathias",
"middle": [],
"last": "Maeurer",
"suffix": ""
},
{
"first": "Franz",
"middle": [
"Werner"
],
"last": "Dippel",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Kohlmann",
"suffix": ""
}
],
"year": 2014,
"venue": "BMC Neurology",
"volume": "14",
"issue": "1",
"pages": "1--10",
"other_ids": {
"DOI": [
"10.1186/1471-2377-14-58"
]
},
"num": null,
"urls": [],
"raw_text": "Sandra Meyer-Moock, You Shan Feng, Mathias Maeurer, Franz Werner Dippel, and Thomas Kohlmann. 2014. Systematic literature review and validity evaluation of the Expanded Disability Status Scale (EDSS) and the Multiple Sclerosis Functional Composite (MSFC) in patients with multiple sclero- sis. BMC Neurology, 14(1):1-10.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Text de-identification for privacy protection: A study of its impact on clinical text information content",
"authors": [
{
"first": "St\u00e9phane",
"middle": [
"M"
],
"last": "Meystre",
"suffix": ""
},
{
"first": "\u00d3scar",
"middle": [],
"last": "Ferr\u00e1ndez",
"suffix": ""
},
{
"first": "F",
"middle": [
"Jeffrey"
],
"last": "Friedlin",
"suffix": ""
},
{
"first": "Brett",
"middle": [
"R"
],
"last": "South",
"suffix": ""
},
{
"first": "Shuying",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Matthew",
"middle": [
"H"
],
"last": "Samore",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of Biomedical Informatics",
"volume": "50",
"issue": "",
"pages": "142--150",
"other_ids": {
"DOI": [
"10.1016/j.jbi.2014.01.011"
]
},
"num": null,
"urls": [],
"raw_text": "St\u00e9phane M. Meystre, \u00d3scar Ferr\u00e1ndez, F. Jeffrey Friedlin, Brett R. South, Shuying Shen, and Matthew H. Samore. 2014. Text de-identification for privacy protection: A study of its impact on clini- cal text information content. Journal of Biomedical Informatics, 50:142-150.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Distributed representations ofwords and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corraudo",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in Neural Information Processing Systems. Neural information processing systems foundation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Cor- raudo, and Jeffrey Dean. 2013. Distributed represen- tations ofwords and phrases and their composition- ality. In Advances in Neural Information Process- ing Systems. Neural information processing systems foundation.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Explainable Prediction of Medical Codes from Clinical Text",
"authors": [
{
"first": "James",
"middle": [],
"last": "Mullenbach",
"suffix": ""
},
{
"first": "Sarah",
"middle": [],
"last": "Wiegreffe",
"suffix": ""
},
{
"first": "Jon",
"middle": [],
"last": "Duke",
"suffix": ""
},
{
"first": "Jimeng",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "1101--1111",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Mullenbach, Sarah Wiegreffe, Jon Duke, Jimeng Sun, and Jacob Eisenstein. 2018. Explainable Pre- diction of Medical Codes from Clinical Text. pages 1101-1111.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Integrating Physiological Time Series and Clinical Notes with Deep Learning for Improved ICU Mortality Prediction",
"authors": [
{
"first": "Satya",
"middle": [
"Narayan"
],
"last": "Shukla",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [
"M"
],
"last": "Marlin",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Satya Narayan Shukla and Benjamin M. Marlin. 2020. Integrating Physiological Time Series and Clinical Notes with Deep Learning for Improved ICU Mortal- ity Prediction. Technical report.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Transfer Learning in Biomedical Natural Language Processing: An Evaluation of BERT and ELMo on Ten Benchmarking Datasets",
"authors": [
{
"first": "Yifan",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Shankai",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Zhiyong",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "58--65",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yifan Peng, Shankai Yan, and Zhiyong Lu. 2019. Trans- fer Learning in Biomedical Natural Language Pro- cessing: An Evaluation of BERT and ELMo on Ten Benchmarking Datasets. pages 58-65.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "What We Learned from The History of Multiple Sclerosis Measurement: Expanded Disease Status Scale. Archives of Neuropsychiatry",
"authors": [
{
"first": "Piri",
"middle": [],
"last": "Bilge",
"suffix": ""
},
{
"first": "Yuksel",
"middle": [],
"last": "Cinar",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Guven Yorgun",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "55",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.29399/npa.23343"
]
},
"num": null,
"urls": [],
"raw_text": "Bilge Piri Cinar and Yuksel Guven Yorgun. 2018. What We Learned from The History of Multiple Sclero- sis Measurement: Expanded Disease Status Scale. Archives of Neuropsychiatry, 55(Suppl 1):S69.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "The Canadian survey of health, lifestyle and ageing with multiple sclerosis: methodology and initial results",
"authors": [
{
"first": "M",
"middle": [],
"last": "Ploughman",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Beaulieu",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Harris",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Hogan",
"suffix": ""
},
{
"first": "O",
"middle": [
"J"
],
"last": "Manning",
"suffix": ""
},
{
"first": "P",
"middle": [
"W"
],
"last": "Alderdice",
"suffix": ""
},
{
"first": "J",
"middle": [
"D"
],
"last": "Fisk",
"suffix": ""
},
{
"first": "A",
"middle": [
"D"
],
"last": "Sadovnick",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "O'Connor",
"suffix": ""
},
{
"first": "S",
"middle": [
"A"
],
"last": "Morrow",
"suffix": ""
},
{
"first": "L",
"middle": [
"M"
],
"last": "Metz",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Smyth",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Mayo",
"suffix": ""
},
{
"first": "R",
"middle": [
"A"
],
"last": "Marrie",
"suffix": ""
},
{
"first": "K",
"middle": [
"B"
],
"last": "Knox",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Stefanelli",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Godwin",
"suffix": ""
}
],
"year": 2014,
"venue": "BMJ Open",
"volume": "5",
"issue": "3",
"pages": "",
"other_ids": {
"PMID": [
"25757943"
]
},
"num": null,
"urls": [],
"raw_text": "M Ploughman, S Beaulieu, C Harris, S Hogan, O J Manning, P W Alderdice, J D Fisk, A D Sadovnick, P O'Connor, S A Morrow, L M Metz, P Smyth, N Mayo, R A Marrie, K B Knox, M Stefanelli, and M Godwin. 2014. The Canadian survey of health, lifestyle and ageing with multiple sclerosis: method- ology and initial results.[Erratum appears in BMJ Open. 2015;5(3):e005718; PMID: 25757943]. BMJ Open, 4(7):e005718.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Medicine, Computers, and Linguistics",
"authors": [
{
"first": "A",
"middle": [
"W"
],
"last": "Pratt",
"suffix": ""
}
],
"year": 1973,
"venue": "Advances in Biomedical Engineering",
"volume": "",
"issue": "",
"pages": "97--140",
"other_ids": {
"DOI": [
"10.1016/b978-0-12-004903-5.50007-8"
]
},
"num": null,
"urls": [],
"raw_text": "A.W. PRATT. 1973. Medicine, Computers, and Linguis- tics. In Advances in Biomedical Engineering, pages 97-140. Elsevier.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Snorkel: Rapid training data creation with weak supervision",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Ratner",
"suffix": ""
},
{
"first": "Stephen",
"middle": [
"H"
],
"last": "Bach",
"suffix": ""
},
{
"first": "Henry",
"middle": [],
"last": "Ehrenberg",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Fries",
"suffix": ""
},
{
"first": "Sen",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "R\u00e9",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the VLDB Endowment",
"volume": "11",
"issue": "",
"pages": "269--282",
"other_ids": {
"DOI": [
"10.14778/3157794.3157797"
]
},
"num": null,
"urls": [],
"raw_text": "Alexander Ratner, Stephen H. Bach, Henry Ehrenberg, Jason Fries, Sen Wu, and Christopher Re. 2017. Snorkel: Rapid training data creation with weak su- pervision. Proceedings of the VLDB Endowment, 11(3):269-282.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Data Programming: Creating Large Training Sets, Quickly",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Ratner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"De"
],
"last": "Sa",
"suffix": ""
},
{
"first": "Sen",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Selsam",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "R\u00e9",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander Ratner, Christopher De Sa, Sen Wu, Daniel Selsam, and Christopher R\u00e9. 2016. Data Program- ming: Creating Large Training Sets, Quickly. Tech- nical report.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Training complex models with multi-task weak supervision",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Ratner",
"suffix": ""
},
{
"first": "Braden",
"middle": [],
"last": "Hancock",
"suffix": ""
},
{
"first": "Jared",
"middle": [],
"last": "Dunnmon",
"suffix": ""
},
{
"first": "Frederic",
"middle": [],
"last": "Sala",
"suffix": ""
},
{
"first": "Shreyash",
"middle": [],
"last": "Pandey",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "R\u00e9",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "33",
"issue": "",
"pages": "4763--4771",
"other_ids": {
"DOI": [
"10.1609/aaai.v33i01.33014763"
]
},
"num": null,
"urls": [],
"raw_text": "Alexander Ratner, Braden Hancock, Jared Dunnmon, Frederic Sala, Shreyash Pandey, and Christopher R\u00e9. 2019. Training complex models with multi-task weak supervision. Proceedings of the AAAI Con- ference on Artificial Intelligence, 33:4763-4771.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Relation extraction from clinical texts using domain invariant convolutional neural network",
"authors": [
{
"first": "Sunil",
"middle": [
"Kumar"
],
"last": "Sahu",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Anand",
"suffix": ""
},
{
"first": "Krishnadev",
"middle": [],
"last": "Oruganty",
"suffix": ""
},
{
"first": "Mahanandeeshwar",
"middle": [],
"last": "Gattu",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "206--215",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sunil Kumar Sahu, Ashish Anand, Krishnadev Oru- ganty, and Mahanandeeshwar Gattu. 2016. Relation extraction from clinical texts using domain invariant convolutional neural network. pages 206-215.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Deep ehr: a survey of recent advances in deep learning techniques for electronic health record (ehr) analysis",
"authors": [
{
"first": "Benjamin",
"middle": [],
"last": "Shickel",
"suffix": ""
},
{
"first": "Patrick",
"middle": [
"James"
],
"last": "Tighe",
"suffix": ""
},
{
"first": "Azra",
"middle": [],
"last": "Bihorac",
"suffix": ""
},
{
"first": "Parisa",
"middle": [],
"last": "Rashidi",
"suffix": ""
}
],
"year": 2017,
"venue": "IEEE journal of biomedical and health informatics",
"volume": "22",
"issue": "5",
"pages": "1589--1604",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benjamin Shickel, Patrick James Tighe, Azra Bihorac, and Parisa Rashidi. 2017. Deep ehr: a survey of recent advances in deep learning techniques for elec- tronic health record (ehr) analysis. IEEE journal of biomedical and health informatics, 22(5):1589-1604.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Spatial Analysis of Global Prevalence of Multiple Sclerosis Suggests Need for an Updated Prevalence Scale",
"authors": [
{
"first": "Brett",
"middle": [
"J"
],
"last": "Wade",
"suffix": ""
}
],
"year": 2014,
"venue": "Multiple Sclerosis International",
"volume": "",
"issue": "",
"pages": "1--7",
"other_ids": {
"DOI": [
"10.1155/2014/124578"
]
},
"num": null,
"urls": [],
"raw_text": "Brett J. Wade. 2014. Spatial Analysis of Global Preva- lence of Multiple Sclerosis Suggests Need for an Updated Prevalence Scale. Multiple Sclerosis Inter- national, 2014:1-7.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Clinical Named Entity Recognition Using Deep Learning Models. AMIA",
"authors": [
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Min",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Degui",
"middle": [],
"last": "Zhi",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2017,
"venue": "Annual Symposium proceedings. AMIA Symposium",
"volume": "",
"issue": "",
"pages": "1812--1819",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yonghui Wu, Min Jiang, Jun Xu, Degui Zhi, and Hua Xu. 2017. Clinical Named Entity Recognition Using Deep Learning Models. AMIA ... Annual Symposium proceedings. AMIA Symposium, 2017:1812-1819.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "A sensitivity analysis of (and practitioners' guide to) convolutional neural networks for sentence classification",
"authors": [
{
"first": "Ye",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Byron",
"middle": [
"C"
],
"last": "Wallace",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ye Zhang and Byron C. Wallace. 2015. A sensitivity analysis of (and practitioners' guide to) convolutional neural networks for sentence classification. CoRR, abs/1510.03820.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "The MSBC architecture. We used a CNN described byZhang and Wallace (2015) to generate encounter level embeddings."
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Distribution of EDSS scores varied by age and gender."
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Heat map showing the distribution of predictions from our model compared to true values. Tight grouping is noticed in high levels of support, and less grouping where there is less support."
},
"FIGREF3": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Distribution of age within the data set."
},
"FIGREF4": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Histogram showing the number of notes per patient."
},
"FIGREF5": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Plot of mean EDSS score vs age."
},
"FIGREF6": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Change of EDSS score in subsequent visits."
},
"FIGREF7": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Change of functional subscores with age."
},
"FIGREF8": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Distribution of functional subscores across gender."
},
"FIGREF9": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Correlation matrix between functional subscores and EDSS. Strong correlations between EDSS and ambulatory and pyramidal subscores as expected."
},
"TABREF0": {
"content": "<table><tr><td>Model</td><td colspan=\"2\">Macro-F1 Micro-F1</td></tr><tr><td colspan=\"2\">Multiple Sclerosis Bert Classifier (MSBC) 0.88296</td><td>0.94177</td></tr><tr><td>MSBC Truncated (only first 512 tokens)</td><td>0.74680</td><td>0.90086</td></tr><tr><td>Rule-Based (RB) + Word2Vec CNN</td><td>0.76817</td><td>0.89668</td></tr><tr><td>RB + MSBC</td><td>0.86625</td><td>0.92987</td></tr><tr><td>Word2Vec CNN</td><td>0.66475</td><td>0.88144</td></tr><tr><td>RB</td><td>0.76694</td><td>0.83761</td></tr><tr><td>BlueBERT CNN</td><td>0.51000</td><td>0.81000</td></tr><tr><td>Linear SVC</td><td>0.48503</td><td>0.74452</td></tr><tr><td>LDA</td><td>0.50122</td><td>0.74390</td></tr><tr><td>SVC RBF</td><td>0.45877</td><td>0.72428</td></tr><tr><td>Log Reg</td><td>0.45763</td><td>0.71175</td></tr></table>",
"html": null,
"type_str": "table",
"text": "EDSS prediction performance for all models. Higher values indicate stronger performance and highest values are bolded.",
"num": null
},
"TABREF1": {
"content": "<table><tr><td>Models</td><td colspan=\"2\">MSBC</td><td>RB</td><td/><td colspan=\"2\">RB + Word2Vec</td></tr><tr><td>Subscore</td><td colspan=\"6\">Macro-F1 Micro-F1 Macro-F1 Micro-F1 Macro-F1 Micro-F1</td></tr><tr><td>Ambulation</td><td>0.6980</td><td>0.88797</td><td>0.2710</td><td>0.5627</td><td>0.2674</td><td>0.5155</td></tr><tr><td>Bowel Bladder</td><td>0.6039</td><td>0.86619</td><td>0.2773</td><td>0.5525</td><td>0.2027</td><td>0.5209</td></tr><tr><td>Brain Stem</td><td>0.5842</td><td>0.90356</td><td>0.4174</td><td>0.5694</td><td>0.3712</td><td>0.6598</td></tr><tr><td>Cerebellar</td><td>0.6437</td><td>0.85707</td><td>0.4927</td><td>0.6120</td><td>0.4188</td><td>0.5908</td></tr><tr><td>Mental</td><td>0.5496</td><td>0.79470</td><td>0.3643</td><td>0.5586</td><td>0.3003</td><td>0.5499</td></tr><tr><td>Pyramidal</td><td>0.7192</td><td>0.87755</td><td>0.4173</td><td>0.5128</td><td>0.4028</td><td>0.5598</td></tr><tr><td>Sensory</td><td>0.5570</td><td>0.87518</td><td>0.4082</td><td>0.4173</td><td>0.3485</td><td>0.5603</td></tr><tr><td>Visual</td><td>0.7153</td><td>0.93855</td><td>0.5020</td><td>0.4082</td><td>0.4207</td><td>0.6986</td></tr><tr><td>Mean</td><td>0.6339</td><td>0.8751</td><td>0.3937</td><td>0.5737</td><td>0.3416</td><td>0.5820</td></tr></table>",
"html": null,
"type_str": "table",
"text": "Sub-score prediction performance differences between baseline and MSBC. Higher values indicate stronger performance. Highest values are bolded. It should be noted that low to no support for the highest levels of sub-scores impacted Macro-F1.",
"num": null
},
"TABREF2": {
"content": "<table><tr><td>Ensemble combinations</td><td colspan=\"3\">Macro-F1 Micro-F1 Conflicts</td></tr><tr><td>MSBC</td><td>0.88296</td><td>0.94177</td><td>N/A</td></tr><tr><td>MSBC + Rule Based LFs (RB LFs)</td><td>0.86617</td><td>0.93363</td><td>0.23471</td></tr><tr><td>MSBC + RB LFs + Word2Vec</td><td>0.78582</td><td>0.91901</td><td>0.33229</td></tr><tr><td>MSBC + RB LFs + Word2Vec + LDA</td><td>0.77004</td><td>0.88917</td><td>0.46796</td></tr><tr><td colspan=\"2\">MSBC + RB LFs + Word2Vec + TFIDFs 0.55728</td><td>0.82592</td><td>0.55145</td></tr></table>",
"html": null,
"type_str": "table",
"text": "EDSS predictions results for Snorkel ensembles containing MSBC. Conflicts reflect the fraction of data that MSBC disagrees with at least one other LF. Highest values are bolded.",
"num": null
},
"TABREF3": {
"content": "<table><tr><td>Gold labels (n=16,000) were manually</td></tr></table>",
"html": null,
"type_str": "table",
"text": "Performance of MSBC predicting EDSS using different label types.",
"num": null
},
"TABREF4": {
"content": "<table><tr><td>Value</td><td>Replacement</td></tr><tr><td>Last / Family Names</td><td>Salamanca</td></tr><tr><td>Female First Names</td><td>Lucie</td></tr><tr><td>Male First Names</td><td>Ezekiel</td></tr><tr><td>Phone/Fax</td><td>1718</td></tr><tr><td>MRN/PID</td><td>999</td></tr><tr><td>Dates / DOB</td><td>2010s</td></tr><tr><td>Time</td><td>1610</td></tr><tr><td>Addresses</td><td>Silesia</td></tr><tr><td colspan=\"2\">Location/Hospital/Clinics Troy</td></tr><tr><td>B Functional Subscores for EDSS</td><td/></tr></table>",
"html": null,
"type_str": "table",
"text": "Full breakdown of word and category replacements for note de-identification.",
"num": null
},
"TABREF5": {
"content": "<table><tr><td colspan=\"2\">Functional Scores Description</td></tr><tr><td>Visual Function</td><td>Ability to read of eye chart at 20 feet</td></tr><tr><td>Brainstems</td><td>Eye movement, balance, hearing, numbness, swallowing, speech</td></tr><tr><td>Pyramidal</td><td>Reflexes, limb strength, motor performance</td></tr><tr><td>Cerebellar</td><td>Muscle coordination and control (ataxia)</td></tr><tr><td>Sensory</td><td>Ability to detect light touch or vibration</td></tr><tr><td colspan=\"2\">Bowl and Bladder Control and correct function of bladder and bowl functions</td></tr><tr><td>Cerebral</td><td>Depression, mental alertness (mentation)</td></tr><tr><td>Ambulation</td><td>Ability to walk unimpaired</td></tr><tr><td>C Baseline Models</td><td/></tr><tr><td colspan=\"2\">Term Frequency-Inverse Document Frequency</td></tr></table>",
"html": null,
"type_str": "table",
"text": "Functional subscores for EDSS.",
"num": null
},
"TABREF6": {
"content": "<table><tr><td/><td colspan=\"2\">MSBC's Breakdown</td></tr><tr><td>EDSS</td><td colspan=\"2\">Precision Recall F1</td><td>Support</td></tr><tr><td>0</td><td>0.9764</td><td colspan=\"2\">0.9805 0.9784 717</td></tr><tr><td>1.0</td><td>0.9605</td><td colspan=\"2\">0.9679 0.9642 779</td></tr><tr><td>1.5</td><td>0.9751</td><td colspan=\"2\">0.9333 0.9534 420</td></tr><tr><td>2.0</td><td>0.9365</td><td colspan=\"2\">0.9708 0.9533 926</td></tr><tr><td>2.5</td><td>0.9410</td><td colspan=\"2\">0.9280 0.9344 361</td></tr><tr><td>3.0</td><td>0.9413</td><td colspan=\"2\">0.9436 0.9425 408</td></tr><tr><td>3.5</td><td>0.9362</td><td colspan=\"2\">0.8980 0.9167 196</td></tr><tr><td>4.0</td><td>0.9632</td><td colspan=\"2\">0.9562 0.9597 137</td></tr><tr><td>4.5</td><td>0.8605</td><td colspan=\"2\">0.7400 0.7957 50</td></tr><tr><td>5.0</td><td>0.9157</td><td colspan=\"2\">0.8837 0.8994 86</td></tr><tr><td>5.5</td><td>0.8889</td><td colspan=\"2\">0.8889 0.8889 81</td></tr><tr><td>6.0</td><td>0.8689</td><td colspan=\"2\">0.9339 0.9002 227</td></tr><tr><td>6.5</td><td>0.9247</td><td colspan=\"2\">0.8984 0.9113 246</td></tr><tr><td>7.0</td><td>0.7761</td><td colspan=\"2\">0.7647 0.7704 68</td></tr><tr><td>7.5</td><td>0.9286</td><td colspan=\"2\">0.6842 0.7879 38</td></tr><tr><td>8.0</td><td>0.8889</td><td colspan=\"2\">0.8000 0.8421 30</td></tr><tr><td>8.5</td><td>0.7500</td><td colspan=\"2\">0.9231 0.8276 13</td></tr><tr><td>9.0</td><td>0.7143</td><td colspan=\"2\">0.6250 0.6667 8</td></tr><tr><td>Mean</td><td>0.8970</td><td colspan=\"2\">0.8734 0.8830 4791</td></tr><tr><td colspan=\"2\">Weighted Mean 0.9420</td><td colspan=\"2\">0.9417 0.9414 4791</td></tr></table>",
"html": null,
"type_str": "table",
"text": "Performance of MSBC across all values for EDSS.",
"num": null
},
"TABREF7": {
"content": "<table><tr><td>Model</td><td colspan=\"3\">Macro-F1 Micro-F1 Weighted-F1</td></tr><tr><td>MSBC</td><td>0.49942</td><td>0.61268</td><td>0.60340</td></tr><tr><td colspan=\"2\">RB + Word2Vec (Bench Mark) 0.19297</td><td>0.33275</td><td>0.32934</td></tr><tr><td>Word2Vec CNN</td><td>0.19297</td><td>0.33275</td><td>0.32934</td></tr><tr><td>SVC RBF</td><td>0.26748</td><td>0.40493</td><td>0.36611</td></tr><tr><td>Log Reg Baseline</td><td>0.24783</td><td>0.35916</td><td>0.34876</td></tr><tr><td>LDA</td><td>0.23374</td><td>0.33627</td><td>0.32295</td></tr><tr><td>Linear SVC</td><td>0.18703</td><td>0.30634</td><td>0.29474</td></tr></table>",
"html": null,
"type_str": "table",
"text": "EDSS prediction across notes that were not found via a key word search. Bolded scores represent best model performance.EDSS Prediction on Samples that Rules were Unable to Label",
"num": null
},
"TABREF8": {
"content": "<table><tr><td>Model</td><td colspan=\"3\">Macro-F1 Micro-F1 Weighted-F1</td></tr><tr><td>MSBC</td><td>0.95363</td><td>0.98603</td><td>0.98599</td></tr><tr><td colspan=\"2\">RB + Word2Vec CNN (Bench Mark) 0.93298</td><td>0.97253</td><td>0.97259</td></tr><tr><td>Word2Vec CNN</td><td>0.79170</td><td>0.95525</td><td>0.95393</td></tr><tr><td>LDA</td><td>0.53302</td><td>0.79872</td><td>0.80062</td></tr><tr><td>Linear SVC</td><td>0.52528</td><td>0.80346</td><td>0.80861</td></tr><tr><td>SVC RBF</td><td>0.48367</td><td>0.76723</td><td>0.75366</td></tr><tr><td>Log Reg Baseline</td><td>0.48057</td><td>0.75918</td><td>0.75845</td></tr><tr><td>F Exploratory Data Analysis</td><td/><td/><td/></tr></table>",
"html": null,
"type_str": "table",
"text": "EDSS prediction across notes that were found via a key word search. Bolded scores represent best model performance.EDSS Predictions on Samples that Rules were Able to Label",
"num": null
}
}
}
}