ACL-OCL / Base_JSON /prefixE /json /emnlp /2020.emnlp-demos.15.json
{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:59:38.245343Z"
},
"title": "The Language Interpretability Tool: Extensible, Interactive Visualizations and Analysis for NLP Models",
"authors": [
{
"first": "Ian",
"middle": [],
"last": "Tenney",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "James",
"middle": [],
"last": "Wexler",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Jasmijn",
"middle": [],
"last": "Bastings",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Tolga",
"middle": [],
"last": "Bolukbasi",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Andy",
"middle": [],
"last": "Coenen",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Gehrmann",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Ellen",
"middle": [],
"last": "Jiang",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Mahima",
"middle": [],
"last": "Pushkarna",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Carey",
"middle": [],
"last": "Radebaugh",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Emily",
"middle": [],
"last": "Reif",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Ann",
"middle": [],
"last": "Yuan",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present the Language Interpretability Tool (LIT), an open-source platform for visualization and understanding of NLP models. We focus on core questions about model behavior: Why did my model make this prediction? When does it perform poorly? What happens under a controlled change in the input? LIT integrates local explanations, aggregate analysis, and counterfactual generation into a streamlined, browser-based interface to enable rapid exploration and error analysis. We include case studies for a diverse set of workflows, including exploring counterfactuals for sentiment analysis, measuring gender bias in coreference systems, and exploring local behavior in text generation. LIT supports a wide range of models, including classification, seq2seq, and structured prediction, and is highly extensible through a declarative, framework-agnostic API. LIT is under active development, with code and full documentation available at https://github.com/pair-code/lit. * Equal contribution. A video walkthrough is available at https://youtu.be/j0OfBWFUqIE.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "We present the Language Interpretability Tool (LIT), an open-source platform for visualization and understanding of NLP models. We focus on core questions about model behavior: Why did my model make this prediction? When does it perform poorly? What happens under a controlled change in the input? LIT integrates local explanations, aggregate analysis, and counterfactual generation into a streamlined, browser-based interface to enable rapid exploration and error analysis. We include case studies for a diverse set of workflows, including exploring counterfactuals for sentiment analysis, measuring gender bias in coreference systems, and exploring local behavior in text generation. LIT supports a wide range of models, including classification, seq2seq, and structured prediction, and is highly extensible through a declarative, framework-agnostic API. LIT is under active development, with code and full documentation available at https://github.com/pair-code/lit. * Equal contribution. A video walkthrough is available at https://youtu.be/j0OfBWFUqIE.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Advances in modeling have brought unprecedented performance on many NLP tasks (e.g. , but many questions remain about the behavior of these models under domain shift (Blitzer and Pereira, 2007) and adversarial settings (Jia and Liang, 2017) , and for their tendencies to behave according to social biases (Bolukbasi et al., 2016; Caliskan et al., 2017) or shallow heuristics (e.g. McCoy et al., 2019; Poliak et al., 2018) . For any new model, one might want to know: What kind of examples does my model perform poorly on? Why did my model make this prediction? And critically, does my model behave consistently if I change things like textual style, verb tense, or pronoun gender? Despite the recent explosion of work on model understanding and evaluation (e.g. Belinkov et al., 2020; Ribeiro et al., 2020) , there is no \"silver bullet\" for analysis. Practitioners must often experiment with many techniques, looking at local explanations, aggregate metrics, and counterfactual variations of the input to build a full understanding of model behavior.",
"cite_spans": [
{
"start": 166,
"end": 193,
"text": "(Blitzer and Pereira, 2007)",
"ref_id": "BIBREF5"
},
{
"start": 219,
"end": 240,
"text": "(Jia and Liang, 2017)",
"ref_id": "BIBREF17"
},
{
"start": 305,
"end": 329,
"text": "(Bolukbasi et al., 2016;",
"ref_id": "BIBREF6"
},
{
"start": 330,
"end": 352,
"text": "Caliskan et al., 2017)",
"ref_id": "BIBREF9"
},
{
"start": 381,
"end": 400,
"text": "McCoy et al., 2019;",
"ref_id": "BIBREF21"
},
{
"start": 401,
"end": 421,
"text": "Poliak et al., 2018)",
"ref_id": "BIBREF27"
},
{
"start": 762,
"end": 784,
"text": "Belinkov et al., 2020;",
"ref_id": "BIBREF4"
},
{
"start": 785,
"end": 806,
"text": "Ribeiro et al., 2020)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Existing tools can assist with this process, but many come with limitations: offline tools such as TFMA (Mewald, 2019) can provide only aggregate metrics, interactive frontends (e.g. Wallace et al., 2019) may focus on single-datapoint explanation, and more integrated tools (e.g. Wexler et al., 2020; Mothilal et al., 2020; Strobelt et al., 2018) often work with only a narrow range of models. Switching between tools or adapting a new method from research code can take days of work, distracting from the real task of error analysis. An ideal workflow would be seamless and interactive: users should see the data, what the model does with it, and why, so they can quickly test hypotheses and build understanding.",
"cite_spans": [
{
"start": 104,
"end": 118,
"text": "(Mewald, 2019)",
"ref_id": "BIBREF23"
},
{
"start": 183,
"end": 204,
"text": "Wallace et al., 2019)",
"ref_id": "BIBREF39"
},
{
"start": 280,
"end": 300,
"text": "Wexler et al., 2020;",
"ref_id": null
},
{
"start": 301,
"end": 323,
"text": "Mothilal et al., 2020;",
"ref_id": "BIBREF24"
},
{
"start": 324,
"end": 346,
"text": "Strobelt et al., 2018)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "With this in mind, we introduce the Language Interpretability Tool (LIT), a toolkit and browser-based user interface (UI) for NLP model understanding. LIT supports local explanations (including salience maps, attention, and rich visualizations of model predictions) as well as aggregate analysis (including metrics, embedding spaces, and flexible slicing), and allows users to seamlessly hop between them to test local hypotheses and validate them over a dataset. LIT provides first-class support for counterfactual generation: new datapoints can be added on the fly, and their effect on the model visualized immediately. Side-by-side comparison allows for two models, or two datapoints, to be visualized simultaneously.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We recognize that research workflows are constantly evolving, and designed LIT along the following principles:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Flexible: Support a wide range of NLP tasks, including classification, seq2seq, language modeling, and structured prediction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Extensible: Designed for experimentation, and can be reconfigured and extended for novel workflows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Modular: Components are self-contained, portable, and simple to implement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Framework agnostic: Works with any model that can run from Python, including TensorFlow (Abadi et al., 2015), PyTorch (Paszke et al., 2019), or remote models on a server.",
"cite_spans": [
{
"start": 90,
"end": 110,
"text": "(Abadi et al., 2015)",
"ref_id": "BIBREF0"
},
{
"start": 120,
"end": 141,
"text": "(Paszke et al., 2019)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Easy to use: Low barrier to entry, with only a small amount of code needed to add models and data (Section 4.3), and an easy path to access sophisticated functionality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "LIT has a browser-based UI comprised of modules ( Figure 1 ) which contain controls and visualizations for specific tasks (Table 1 ). At the most basic level, LIT works as a simple demo server: one can enter text, press a button, and see the model's predictions. But by loading an evaluation set, enabling dynamic datapoint generation, and adding an array of interactive visualizations, metrics, and modules that respond to user input, LIT supports a much richer set of user journeys:",
"cite_spans": [],
"ref_spans": [
{
"start": 50,
"end": 58,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 122,
"end": 130,
"text": "(Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "User Interface and Functionality",
"sec_num": "2"
},
{
"text": "J1 -Explore the dataset. Users can interactively explore datasets using different criteria across modules like the data table and the embeddings module (similar to Smilkov et al. (2016) ), in which a PCA or UMAP (McInnes et al., 2018) projection can be rotated, zoomed, and panned to explore clusters and global structures ( Figure 1 -top left).",
"cite_spans": [
{
"start": 164,
"end": 185,
"text": "Smilkov et al. (2016)",
"ref_id": "BIBREF33"
},
{
"start": 212,
"end": 234,
"text": "(McInnes et al., 2018)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [
{
"start": 325,
"end": 333,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "User Interface and Functionality",
"sec_num": "2"
},
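The projection behind the embeddings module can be sketched in a few lines. This is an illustrative implementation of the PCA path only (`pca_project` is a hypothetical name, not LIT's API; LIT also offers UMAP):

```python
import numpy as np

def pca_project(embeddings, n_components=3):
    """Projects high-dimensional embeddings down to a few dimensions
    for plotting, in the spirit of the embeddings module's PCA view."""
    X = np.asarray(embeddings, dtype=float)
    X = X - X.mean(axis=0)  # center each feature
    # Rows of Vt are the principal directions (right singular vectors).
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:n_components].T
```

The frontend then renders the returned 3-D coordinates as a rotatable, zoomable point cloud.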
{
"text": "J2 -Find interesting datapoints. Users can identify interesting datapoints for analysis, cycle through them, and save selections for future use. For example, users can select off-diagonal groups from a confusion matrix, examine outlying clusters in embedding space, or select a range based on scalar values (Figure 4 (a) ).",
"cite_spans": [],
"ref_spans": [
{
"start": 307,
"end": 320,
"text": "(Figure 4 (a)",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "User Interface and Functionality",
"sec_num": "2"
},
{
"text": "J3 -Explain local behavior. Users can deep-dive into model behavior on selected individual datapoints using a variety of modules depending on the model task and type. For instance, users can compare explanations from salience maps, including local gradients (Li et al., 2016) and LIME (Ribeiro et al., 2016) , or look for patterns in attention heads (Figure 1 -bottom).",
"cite_spans": [
{
"start": 258,
"end": 275,
"text": "(Li et al., 2016)",
"ref_id": "BIBREF19"
},
{
"start": 285,
"end": 307,
"text": "(Ribeiro et al., 2016)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [
{
"start": 350,
"end": 359,
"text": "(Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "User Interface and Functionality",
"sec_num": "2"
},
{
"text": "Displays an attention visualization for each layer and head.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Module Description Attention",
"sec_num": null
},
{
"text": "A customizable confusion matrix for single model or multi-model comparison.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Confusion Matrix",
"sec_num": null
},
{
"text": "Creates counterfactuals for selected datapoint(s) using a variety of techniques.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Counterfactual Generator",
"sec_num": null
},
{
"text": "A tabular view of the data, with sorting, searching, and filtering support.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Table",
"sec_num": null
},
{
"text": "Editable details of a selected datapoint.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datapoint Editor",
"sec_num": null
},
{
"text": "Visualizes dataset by layer-wise embeddings, projected down to 3 dimensions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embeddings",
"sec_num": null
},
{
"text": "Displays metrics such as accuracy or BLEU score, on the whole dataset and slices.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics Table",
"sec_num": null
},
{
"text": "Displays model predictions, including classification, text generation, language model probabilities, and a graph visualization for structured prediction tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predictions",
"sec_num": null
},
{
"text": "Shows heatmaps for token-based feature attribution for a selected datapoint using techniques like local gradients and LIME.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Salience Maps",
"sec_num": null
},
{
"text": "Displays a jitter plot organizing datapoints by model output scores, metrics, or other scalar values.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scalar Plot",
"sec_num": null
},
{
"text": "J5 -Compare side-by-side. Users can interactively compare two or more models on the same data, or a single model on two datapoints simultaneously. Visualizations automatically \"replicate\" for a side-by-side view.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scalar Plot",
"sec_num": null
},
{
"text": "J6 -Compute metrics. LIT calculates and displays metrics for the whole dataset, the current selection, as well as on manual or automatically generated slices (Figure 3 (c)) to easily find patterns in model performance.",
"cite_spans": [],
"ref_spans": [
{
"start": 158,
"end": 167,
"text": "(Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Scalar Plot",
"sec_num": null
},
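Disaggregated metrics of this kind reduce to grouping per-example scores by a slice key; a minimal sketch (hypothetical function name, not LIT's implementation):

```python
from collections import defaultdict

def sliced_accuracy(examples, predictions, slice_key):
    """Accuracy on the whole set plus per-slice breakdowns,
    grouping examples by the value of slice_key."""
    buckets = defaultdict(list)
    for ex, pred in zip(examples, predictions):
        correct = ex["label"] == pred
        buckets["all"].append(correct)
        buckets[ex[slice_key]].append(correct)
    return {name: sum(hits) / len(hits) for name, hits in buckets.items()}
```

The same grouping pattern extends to any per-example score, such as BLEU for seq2seq models.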
{
"text": "LIT's interface allows these user journeys to be explored interactively. Selecting a dataset and model(s) will automatically show compatible modules in a multi-pane layout (Figure 1) . A tabbed bottom panel groups modules by workflow and functionality, while the top panel shows persistent modules for dataset exploration.",
"cite_spans": [],
"ref_spans": [
{
"start": 172,
"end": 182,
"text": "(Figure 1)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Scalar Plot",
"sec_num": null
},
{
"text": "These modules respond dynamically to user interactions. If a selection is made in the embedding projector, for example, the metrics table will respond automatically and compute scores on the selected datapoints. Global controls make it easy to page through examples, enter a comparison mode, or save the selection as a named \"slice\". In this way, the user can quickly explore multiple workflows using different combinations of modules.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scalar Plot",
"sec_num": null
},
{
"text": "A brief video demonstration of the LIT UI is available at https://youtu.be/j0OfBWFUqIE.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scalar Plot",
"sec_num": null
},
{
"text": "Sentiment analysis. How well does a sentiment classifier handle negation? We load the development set of the Stanford Sentiment Treebank (SST; Socher et al., 2013) , and use the search function in LIT's data table (J1, J2) to find the 56 datapoints containing the word \"not\". Looking at the Metrics Table ( J6), we find that surprisingly, our BERT model (Devlin et al., 2019) gets 100% of these correct! But we might want to know if this is truly robust. With LIT, we can select individual datapoints and look for explanations (J3). For example, take the negative review, \"It's not the ultimate depression-era gangster movie.\". As shown in Figure 2 , salience maps suggest that \"not\" and \"ultimate\" are important to the prediction.",
"cite_spans": [
{
"start": 143,
"end": 163,
"text": "Socher et al., 2013)",
"ref_id": "BIBREF34"
},
{
"start": 354,
"end": 375,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 299,
"end": 306,
"text": "Table (",
"ref_id": null
},
{
"start": 640,
"end": 648,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Case Studies",
"sec_num": "3"
},
{
"text": "We can verify this by creating modified inputs, using LIT's datapoint editor (J4). Removing \"not\" yields a strongly positive prediction for \"It's the ultimate depression-era gangster movie.\", while replacing \"ultimate\" with \"worst\" to get \"It's not the worst depression-era gangster movie.\" elicits a mildly positive score from our model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Case Studies",
"sec_num": "3"
},
{
"text": "Gender bias in coreference. Does a system encode gendered associations, which might lead to incorrect predictions? We load a coreference model trained on OntoNotes (Hovy et al., 2006), and load the Winogender dataset into LIT for evaluation. Each Winogender example has a pronoun and two candidate referents, one an occupation term (like \"technician\") and one an \"other participant\" (like \"customer\"). Our model predicts coreference probabilities for each candidate. We can explore the model's sensitivity to pronouns by comparing two examples side-by-side (see Figure 3 (a)). We can see how commonly the model makes similar errors by paging through the dataset (J1), or by selecting specific slices of interest. For example, we can use the scalar plot module (J2) (Figure 3 (b)) to select datapoints where the occupation term is associated with a high proportion of male or female workers, according to the U.S. Bureau of Labor Statistics (BLS; Caliskan et al., 2017).",
"cite_spans": [
{
"start": 164,
"end": 183,
"text": "(Hovy et al., 2006)",
"ref_id": "BIBREF16"
},
{
"start": 946,
"end": 968,
"text": "Caliskan et al., 2017)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 562,
"end": 570,
"text": "Figure 3",
"ref_id": "FIGREF1"
},
{
"start": 765,
"end": 774,
"text": "(Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Case Studies",
"sec_num": "3"
},
{
"text": "In the Metrics Table (J6), we can slice this selection by pronoun type and by the true referent. On the set of male-dominated occupations (< 25% female by BLS), we see the model performs well when the ground-truth agrees with the stereotype, e.g. when the answer is the occupation term, male pronouns are correctly resolved 83% of the time, compared to female pronouns only 37.5% of the time (Figure 3 (c)).",
"cite_spans": [],
"ref_spans": [
{
"start": 15,
"end": 25,
"text": "Table (J6)",
"ref_id": null
},
{
"start": 392,
"end": 405,
"text": "(Figure 3 (c)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Case Studies",
"sec_num": "3"
},
{
"text": "We analyze a T5 (Raffel et al., 2019 ) model on the CNN-DM summarization task (Hermann et al., 2015) , and loosely follow the steps of Strobelt et al. (2018) . LIT's scalar plot module (J2) allows us to look at per-example ROUGE scores, and quickly select an example with middling performance (Figure 4 (a) ). We find the generated text (Figure 4 (b) ) contains an erroneous constituent: \"alastair cook was replaced as captain by former captain ...\". We can dig deeper, using LIT's language modeling module (Figure 4 (c) ) to see that the token \"by\" is predicted with high probability (28.7%).",
"cite_spans": [
{
"start": 16,
"end": 36,
"text": "(Raffel et al., 2019",
"ref_id": "BIBREF29"
},
{
"start": 78,
"end": 100,
"text": "(Hermann et al., 2015)",
"ref_id": "BIBREF14"
},
{
"start": 135,
"end": 157,
"text": "Strobelt et al. (2018)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [
{
"start": 293,
"end": 306,
"text": "(Figure 4 (a)",
"ref_id": "FIGREF2"
},
{
"start": 337,
"end": 351,
"text": "(Figure 4 (b)",
"ref_id": "FIGREF2"
},
{
"start": 508,
"end": 521,
"text": "(Figure 4 (c)",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Debugging text generation. Does the training data explain a particular error in text generation?",
"sec_num": null
},
{
"text": "To find out how T5 arrived at this prediction, we utilize the \"similarity searcher\" component through the counterfactual generator tab (Figure 4 (d) ). This performs a fast approximate nearest-neighbor lookup (Andoni and Indyk, 2006 ) from a pre-built index over the training corpus, using embeddings from the T5 decoder. With one click, we can retrieve 25 nearest neighbors and add them to the LIT UI for inspection (as in Figure A.1) . We see that the words \"captain\" and \"former\" appear 34 and 16 times in these examples-along with 3 occurrences of \"replaced by\" (Figure 4 (e) )-suggesting a strong prior toward our erroneous phrase.",
"cite_spans": [
{
"start": 209,
"end": 232,
"text": "(Andoni and Indyk, 2006",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 135,
"end": 148,
"text": "(Figure 4 (d)",
"ref_id": "FIGREF2"
},
{
"start": 424,
"end": 435,
"text": "Figure A.1)",
"ref_id": "FIGREF0"
},
{
"start": 566,
"end": 579,
"text": "(Figure 4 (e)",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Debugging text generation. Does the training data explain a particular error in text generation?",
"sec_num": null
},
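The similarity search reduces to nearest-neighbor lookup in embedding space. LIT's component uses fast approximate search over a pre-built index; an exact brute-force cosine version is sketched below for clarity, with illustrative names:

```python
import numpy as np

def nearest_neighbors(query_vec, index_vecs, k=25):
    """Exact cosine nearest-neighbor lookup: returns the indices and
    similarities of the k closest vectors in the index."""
    q = query_vec / np.linalg.norm(query_vec)
    M = index_vecs / np.linalg.norm(index_vecs, axis=1, keepdims=True)
    sims = M @ q  # cosine similarity to every indexed vector
    top = np.argsort(-sims)[:k]
    return top, sims[top]
```

Approximate methods trade a small amount of recall for sublinear lookup time, which is what makes one-click retrieval over a large training corpus feasible.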
{
"text": "The LIT UI is written in TypeScript, and communicates with a Python backend that hosts models, datasets, counterfactual generators, and other interpretation components. LIT is agnostic to modeling frameworks; data is exchanged using NumPy arrays and JSON, and components are integrated through a declarative \"spec\" system (Section 4.4) that minimizes cross-dependencies and encourages modularity. A more detailed design schematic is given in the Appendix, Figure A. 2.",
"cite_spans": [],
"ref_spans": [
{
"start": 456,
"end": 465,
"text": "Figure A.",
"ref_id": null
}
],
"eq_spans": [],
"section": "System design and components",
"sec_num": "4"
},
{
"text": "The browser-based UI is a single-page web app, built with lit-element and MobX. A shared framework of \"service\" objects tracks interaction state, such as the active model, dataset, and selection, and coordinates a set of otherwise-independent modules which provide controls and visualizations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Frontend",
"sec_num": "4.1"
},
{
"text": "The Python backend serves models, data, and interpretation components. The server is stateless, but includes a caching layer for model predictions, which frees components from needing to store intermediate results and allows interactive use of large models like BERT (Devlin et al., 2019) and GPT-2 (Radford et al., 2019) . Component types include:",
"cite_spans": [
{
"start": 267,
"end": 288,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF11"
},
{
"start": 293,
"end": 321,
"text": "GPT-2 (Radford et al., 2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Backend",
"sec_num": "4.2"
},
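The caching layer can be pictured as a memo table keyed by model and example, so only cache misses reach the model. This sketch uses hypothetical names rather than LIT's actual classes:

```python
import hashlib
import json

class PredictionCache:
    """Memoizes predictions so repeated requests skip re-running the model."""

    def __init__(self):
        self._cache = {}

    def _key(self, model_name, example):
        # Key on the model name plus a stable hash of the example contents.
        blob = json.dumps(example, sort_keys=True).encode("utf-8")
        return (model_name, hashlib.sha256(blob).hexdigest())

    def predict_with_cache(self, model_name, predict_fn, examples):
        """Runs predict_fn only on cache misses, preserving input order."""
        results = [None] * len(examples)
        misses = []
        for i, ex in enumerate(examples):
            cached = self._cache.get(self._key(model_name, ex))
            if cached is not None:
                results[i] = cached
            else:
                misses.append(i)
        if misses:
            fresh = predict_fn([examples[i] for i in misses])
            for i, pred in zip(misses, fresh):
                self._cache[self._key(model_name, examples[i])] = pred
                results[i] = pred
        return results
```

Because the cache sits in front of the model, interpretation components can re-request predictions freely without paying for repeated inference on large models.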
{
"text": "\u2022 Models which implement a predict() function, input spec(), and output spec().",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Backend",
"sec_num": "4.2"
},
{
"text": "\u2022 Datasets which load data from any source and expose an .examples field and a spec().",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Backend",
"sec_num": "4.2"
},
{
"text": "\u2022 Interpreters are called on a model and a set of datapoints, and return output-such as a salience map-that may also depend on the model's predictions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Backend",
"sec_num": "4.2"
},
{
"text": "\u2022 Generators are interpreters that return new input datapoints from source datapoints.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Backend",
"sec_num": "4.2"
},
{
"text": "\u2022 Metrics are interpreters which return aggregate scores for a list of inputs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Backend",
"sec_num": "4.2"
},
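The component APIs above can be illustrated with toy stand-ins. These classes are sketches, not the real lit_nlp interfaces (which use spec type objects rather than plain strings):

```python
class TinyDataset:
    """A Dataset loads examples and exposes .examples plus a spec()."""
    def __init__(self, examples):
        self.examples = examples
    def spec(self):
        return {"text": "TextSegment", "label": "MulticlassLabel"}

class KeywordModel:
    """A Model implements predict() along with input and output specs."""
    def input_spec(self):
        return {"text": "TextSegment"}
    def output_spec(self):
        return {"probas": "MulticlassPreds"}
    def predict(self, examples):
        return [{"probas": [0.1, 0.9] if "good" in ex["text"] else [0.9, 0.1]}
                for ex in examples]

class AccuracyMetric:
    """A Metric aggregates a score over examples and model predictions."""
    def run(self, examples, preds, vocab=("0", "1")):
        correct = 0
        for ex, pred in zip(examples, preds):
            probas = pred["probas"]
            predicted = vocab[probas.index(max(probas))]
            correct += predicted == ex["label"]
        return {"accuracy": correct / len(examples)}
```

Note how the metric needs nothing from the model beyond its prediction dicts, which is what keeps components decoupled.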
{
"text": "These components are designed to be self-contained and interact through minimalist APIs, with most exposing only one or two methods plus spec information. They communicate through standard Python and NumPy types, making LIT compatible with most common modeling frameworks, including TensorFlow (Abadi et al., 2015) and PyTorch (Paszke et al., 2019). Components are also portable, and can easily be used in a notebook or standalone script. For example, a few lines of code will run the LIME (Ribeiro et al., 2016) component and return a list of tokens and their importance to the model prediction.",
"cite_spans": [
{
"start": 294,
"end": 314,
"text": "(Abadi et al., 2015)",
"ref_id": "BIBREF0"
},
{
"start": 327,
"end": 348,
"text": "(Paszke et al., 2019)",
"ref_id": "BIBREF26"
},
{
"start": 490,
"end": 512,
"text": "(Ribeiro et al., 2016)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Backend",
"sec_num": "4.2"
},
{
"text": "dataset =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Backend",
"sec_num": "4.2"
},
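To convey what a standalone salience call looks like, the sketch below scores tokens by leave-one-out ablation. It is a simplified stand-in for LIT's LIME component (which perturbs many token subsets and fits a local linear model), and all names here are hypothetical:

```python
def leave_one_out_salience(predict_fn, tokens, target_index):
    """Scores each token by how much deleting it changes the target
    class probability: a perturbation-based salience in the same
    family as LIME, though far cruder."""
    base = predict_fn(tokens)[target_index]
    scores = []
    for i in range(len(tokens)):
        ablated = tokens[:i] + tokens[i + 1:]  # drop token i
        scores.append(base - predict_fn(ablated)[target_index])
    return list(zip(tokens, scores))
```

Like the real component, it needs only a black-box predict function, so it runs equally well inside the LIT server or in a notebook.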
{
"text": "LIT is built as a Python library, and its typical use is to create a short demo.py script that loads models and data and passes them to the lit.Server class: models = {'foo': FooModel(...), 'bar': BarModel(...)} datasets = {'baz': BazDataset(...)} server = lit.Server(models, datasets) server.serve() A full example script is included in the Appendix (Figure A.3) . The same server can host several models and datasets for side-by-side comparison, and can also interact with remotely-hosted models.",
"cite_spans": [],
"ref_spans": [
{
"start": 351,
"end": 363,
"text": "(Figure A.3)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Running with your own model",
"sec_num": "4.3"
},
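The demo.py pattern above, expanded into a self-contained sketch: the stub classes merely stand in for real lit_nlp models and datasets, and Server only mimics the shape of lit.Server:

```python
# Stub stand-ins so the demo.py shape is runnable without lit_nlp installed.
class FooModel:
    def predict(self, examples):
        return [{"score": 0.0} for _ in examples]

class BarModel(FooModel):
    pass

class BazDataset:
    def __init__(self):
        self.examples = [{"text": "hello"}]

class Server:
    """Mimics the shape of lit.Server: holds named models and datasets."""
    def __init__(self, models, datasets):
        self.models = models
        self.datasets = datasets
        self.running = False
    def serve(self):
        # The real server starts an HTTP app; here we just flip a flag.
        self.running = True

models = {"foo": FooModel(), "bar": BarModel()}
datasets = {"baz": BazDataset()}
server = Server(models, datasets)
server.serve()
```

Registering several models in the same dict is what enables side-by-side comparison in the UI.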
{
"text": "NLP models come in many shapes, with inputs that may involve multiple text segments, additional categorical features, scalars, and more, and output modalities that include classification, regression, text generation, and span labeling. Models may have multiple heads of different types, and may also return additional values like gradients, embeddings, or attention maps. Rather than enumerate all variations, LIT describes each model and dataset with an extensible system of semantic types.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extensibility: the spec() system",
"sec_num": "4.4"
},
{
"text": "For example, a dataset class for textual entailment (Dagan et al., 2006; Bowman et al., 2015) might have spec(), describing available fields:",
"cite_spans": [
{
"start": 52,
"end": 72,
"text": "(Dagan et al., 2006;",
"ref_id": "BIBREF10"
},
{
"start": 73,
"end": 93,
"text": "Bowman et al., 2015)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Extensibility: the spec() system",
"sec_num": "4.4"
},
{
"text": "\u2022 premise: TextSegment()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extensibility: the spec() system",
"sec_num": "4.4"
},
{
"text": "\u2022 hypothesis: TextSegment() \u2022 label: MulticlassLabel(vocab=...)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extensibility: the spec() system",
"sec_num": "4.4"
},
{
"text": "A model for the same task would have an input spec() to describe required inputs:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extensibility: the spec() system",
"sec_num": "4.4"
},
{
"text": "\u2022 premise: TextSegment() \u2022 hypothesis: TextSegment() As well as an output spec() to describe its predictions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extensibility: the spec() system",
"sec_num": "4.4"
},
{
"text": "\u2022 probas: MulticlassPreds( vocab=..., parent=\"label\")",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extensibility: the spec() system",
"sec_num": "4.4"
},
{
"text": "Other LIT components can read this spec and infer how to operate on the data. For example, the MulticlassMetrics component searches for MulticlassPreds fields (which contain probabilities), uses the vocab annotation to decode to string labels, and evaluates these against the input field described by parent. Frontend modules can also detect these fields and display automatically: for example, the embedding projector will appear if Embeddings are available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extensibility: the spec() system",
"sec_num": "4.4"
},
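Spec-driven dispatch can be sketched as matching on field types; the class and function names here are illustrative, not LIT's actual type system:

```python
class TextSegment:
    pass

class MulticlassPreds:
    def __init__(self, vocab, parent=None):
        self.vocab = vocab      # string labels for each class index
        self.parent = parent    # name of the input field with gold labels

def find_fields(spec, field_type):
    """Returns the names of spec fields matching a semantic type, the way
    a metrics component locates MulticlassPreds outputs."""
    return [name for name, t in spec.items() if isinstance(t, field_type)]

output_spec = {"probas": MulticlassPreds(vocab=["negative", "positive"],
                                         parent="label")}
fields = find_fields(output_spec, MulticlassPreds)
```

Because components discover fields by type rather than by name, a new model works with existing metrics and visualizations as soon as its spec uses the shared types.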
{
"text": "New types can be easily defined: a SpanLabels class might represent the output of a named entity recognition model, and custom components can be added to interpret it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extensibility: the spec() system",
"sec_num": "4.4"
},
{
"text": "A number of tools exist for interactive analysis of trained ML models. Many are general-purpose, such as the What-If Tool (Wexler et al., 2020) , Captum (Kokhlikyan et al., 2019) , Manifold (Zhang et al., 2018) , or InterpretML (Nori et al., 2019) , while others focus on specific applications like fairness, including FairVis (Cabrera et al., 2019) and FairSight (Ahn and Lin, 2019) . And some provide rich support for counterfactual analysis, either within-dataset (What-If Tool) or dynamically generated as in DiCE (Mothilal et al., 2020) .",
"cite_spans": [
{
"start": 122,
"end": 143,
"text": "(Wexler et al., 2020)",
"ref_id": null
},
{
"start": 153,
"end": 178,
"text": "(Kokhlikyan et al., 2019)",
"ref_id": null
},
{
"start": 190,
"end": 210,
"text": "(Zhang et al., 2018)",
"ref_id": "BIBREF44"
},
{
"start": 228,
"end": 247,
"text": "(Nori et al., 2019)",
"ref_id": "BIBREF25"
},
{
"start": 327,
"end": 349,
"text": "(Cabrera et al., 2019)",
"ref_id": "BIBREF8"
},
{
"start": 364,
"end": 383,
"text": "(Ahn and Lin, 2019)",
"ref_id": "BIBREF1"
},
{
"start": 513,
"end": 541,
"text": "DiCE (Mothilal et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "For NLP, a number of tools exist for specific model classes, such as RNNs (Strobelt et al., 2017) , Transformers (Hoover et al., 2020; Vig and Belinkov, 2019) , or text generation (Strobelt et al., 2018) . More generally, AllenNLP Interpret (Wallace et al., 2019) introduces a modular framework for interpretability components, focused on single-datapoint explanations and integrated tightly with the AllenNLP (Gardner et al., 2017) framework. While many components exist in other tools, LIT aims to integrate local explanations, aggregate analysis, and counterfactual generation into a single tool. In this, it is most similar to Errudite, which provides an integrated UI for NLP error analysis, including a custom DSL for text transformations and the ability to evaluate over a corpus. However, LIT is explicitly designed for flexibility: we support a broad range of workflows and provide a modular design for extension with new tasks, visualizations, and generation techniques.",
"cite_spans": [
{
"start": 74,
"end": 97,
"text": "(Strobelt et al., 2017)",
"ref_id": "BIBREF36"
},
{
"start": 113,
"end": 134,
"text": "(Hoover et al., 2020;",
"ref_id": null
},
{
"start": 135,
"end": 158,
"text": "Vig and Belinkov, 2019)",
"ref_id": "BIBREF38"
},
{
"start": 180,
"end": 203,
"text": "(Strobelt et al., 2018)",
"ref_id": "BIBREF35"
},
{
"start": 410,
"end": 432,
"text": "(Gardner et al., 2017)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Limitations LIT is an evaluation tool, and as such is not directly useful for training-time monitoring. As LIT is built to be interactive, it does not scale to large datasets as well as offline tools such as TFMA (Mewald, 2019) . (Currently, the LIT UI can handle about 10,000 examples at once.) Because LIT is framework-agnostic, it does not have the deep model integration of tools such as AllenNLP Interpret (Wallace et al., 2019) or Captum (Kokhlikyan et al., 2019) . This makes many things simpler and more portable, but also requires more code for techniques like integrated gradients (Sundararajan et al., 2017 ) that need to directly manipulate parts of the model.",
"cite_spans": [
{
"start": 213,
"end": 227,
"text": "(Mewald, 2019)",
"ref_id": "BIBREF23"
},
{
"start": 411,
"end": 433,
"text": "(Wallace et al., 2019)",
"ref_id": "BIBREF39"
},
{
"start": 444,
"end": 469,
"text": "(Kokhlikyan et al., 2019)",
"ref_id": null
},
{
"start": 591,
"end": 617,
"text": "(Sundararajan et al., 2017",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
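The last point above can be made concrete: integrated gradients (Sundararajan et al., 2017) averages gradients along a straight-line path from a baseline to the input, so the tool must be able to feed interpolated embeddings back into the model. Below is a minimal NumPy sketch, assuming a hypothetical grad_fn callable exposed by the wrapped model; this is an illustration of the technique, not LIT's actual API.

```python
import numpy as np

def integrated_gradients(embeddings, grad_fn, baseline=None, steps=32):
    """Approximate integrated gradients for one input.

    embeddings: [num_tokens, dim] input embeddings.
    grad_fn: hypothetical callable mapping embeddings to gradients of the
             target class score w.r.t. those embeddings (same shape).
    baseline: reference point on the path; zeros if None.
    """
    if baseline is None:
        baseline = np.zeros_like(embeddings)
    # Midpoint Riemann approximation of the path integral from baseline to input.
    total = np.zeros_like(embeddings)
    for alpha in (np.arange(steps) + 0.5) / steps:
        point = baseline + alpha * (embeddings - baseline)
        total += grad_fn(point)
    avg_grads = total / steps
    # Attribution per embedding dimension; sum over dims for per-token scores.
    attributions = (embeddings - baseline) * avg_grads
    return attributions.sum(axis=-1)
```

For a linear model this recovers the exact attribution w * x; for deep models the number of steps trades accuracy for compute.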
{
"text": "LIT provides an integrated UI and a suite of components for visualizing and exploring the behavior of NLP models. It enables interactive analysis both at the single-datapoint level and over a whole dataset, with first-class support for counterfactual generation and evaluation. LIT supports a diverse range of workflows, from explaining individual predictions to disaggregated analysis to probing for bias through counterfactuals. LIT also supports a range of model types and techniques out of the box, and is designed for extensibility through simple, framework-agnostic APIs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Roadmap",
"sec_num": "6"
},
{
"text": "LIT is under active development by a small team. Planned upcoming additions include new counterfactual generation plug-ins, additional metrics and visualizations for sequence and structured output types, and a greater ability to customize the UI for different applications.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Roadmap",
"sec_num": "6"
},
{
"text": "LIT is open-source under an Apache 2.0 license, and we welcome contributions from the community at https://github.com/pair-code/lit. Figure A.5: Confusion matrix (a) and side-by-side comparison of predictions and salience maps (b) on two sentiment classifiers. In model comparison mode, the confusion matrix can compare two models, and clicking an off-diagonal cell will select examples where the two models make different predictions. In (b) we see one such example, where the model in the second row (\"sst 1\") predicts incorrectly, even though gradient-based salience shows both models focusing on the same tokens.",
"cite_spans": [],
"ref_spans": [
{
"start": 133,
"end": 141,
"text": "Figure A",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusion and Roadmap",
"sec_num": "6"
},
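The comparison workflow in the caption above amounts to cross-tabulating two models' predictions and collecting the off-diagonal examples. A minimal sketch in plain Python follows; the function name and label values are illustrative, not part of LIT.

```python
from collections import Counter

def compare_predictions(preds_a, preds_b):
    """Cross-tabulate two models' predictions.

    Returns (confusion, disagreements): confusion maps each pair
    (label_a, label_b) to a count, and disagreements lists indices of
    examples where the two models differ -- the off-diagonal cells in a
    model-comparison confusion matrix.
    """
    confusion = Counter(zip(preds_a, preds_b))
    disagreements = [i for i, (a, b) in enumerate(zip(preds_a, preds_b)) if a != b]
    return confusion, disagreements
```

Selecting the disagreement indices is exactly the "click an off-diagonal cell" interaction: it narrows the dataset to examples worth inspecting with per-example salience.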
{
"text": "https://lit-element.polymer-project.org/. Naming is coincidental; the Language Interpretability Tool is not related to the lit-html or lit-element projects. 3 https://mobx.js.org/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank Slav Petrov, Martin Wattenberg, Fernanda Viegas, Kellie Webster, Emily Pitler, Dipanjan Das, Leslie Lai, Kristen Olson, and other members of PAIR and the Language team at Google Research for many productive discussions during development. We also thank our anonymous reviewers for their helpful feedback, and Pere Lluis, Luke Gessler, and Kevin Robinson for their contributions to the open-source code.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "Figure A.1: The counterfactual generator module, showing a set of generated datapoints in the staging area. The labels can be manually edited before adding these to the dataset. In this example, the counterfactuals were created using the word replacer, replacing the word \"great\" with \"terrible\" in each passage. Figure A.2: Overview of LIT system architecture. The backend manages models, datasets, metrics, generators, and interpretation components, as well as a caching layer to speed up interactive use. The frontend is a TypeScript single-page app consisting of independent modules (webcomponents built with lit-element) which interact with shared \"services\" that manage interaction state. The backend can be extended by passing components to the lit.Server class in the demo script (Section 4.3 and Figure A.3), while the frontend can be extended by importing new components in a single file, layout.ts, which both lists available modules and specifies their position in the UI (Figure 1).",
"cite_spans": [],
"ref_spans": [
{
"start": 312,
"end": 320,
"text": "Figure A",
"ref_id": null
},
{
"start": 805,
"end": 814,
"text": "Figure A.",
"ref_id": null
},
{
"start": 985,
"end": 994,
"text": "(Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Appendices",
"sec_num": null
},
{
"text": "NLI_LABELS = ['entailment', 'neutral', 'contradiction']\n\nclass MultiNLIData(lit.Dataset):\n  \"\"\"Loader for MultiNLI dataset.\"\"\"\n  def __init__(self, path):\n    # Read the eval set from a .tsv file\n    df = pandas.read_csv(path, sep='\\t')\n    # Store as a list of dicts, conforming to self.spec()\n    ...\n\nclass MyNLIModel(lit.Model):\n  def input_spec(self):\n    \"\"\"Describe the inputs to the model.\"\"\"\n    return {\n      'premise': lit_types.TextSegment(),\n      'hypothesis': lit_types.TextSegment(),\n    }\n\n  def output_spec(self):\n    \"\"\"Describe the model outputs.\"\"\"\n    return {\n      # The 'parent' keyword tells LIT where to look for gold labels when computing metrics.\n      'probas': lit_types.MulticlassPreds(vocab=NLI_LABELS, parent='label'),\n      # This model returns two different embeddings, but you can easily add more.\n      'output_embs': lit_types.Embeddings(),\n      'mean_word_embs': lit_types.Embeddings(),\n      # In LIT, we treat tokens as another model output. There can be more than one,\n      # and the 'align' field describes which input segment they correspond to.\n      'premise_tokens': lit_types.Tokens(align='premise'),\n      'hypothesis_tokens': lit_types.Tokens(align='hypothesis'),\n      # Gradients are also returned by the model; 'align' here references a Tokens field.\n      'premise_grad': lit_types.TokenGradients(align='premise_tokens'),\n      'hypothesis_grad': lit_types.TokenGradients(align='hypothesis_tokens'),\n      # Similarly, attention references a token field, but here we want the model's full \"internal\"\n      # tokenization, which might be something like: [START] ...\n    }\n\ndef main():\n  datasets = {\n    'mnli_matched': MultiNLIData('/path/to/dev_matched.tsv'),\n    'mnli_mismatched': MultiNLIData('/path/to/dev_mismatched.tsv'),\n  }\n  models = {\n    'model_foo': MyNLIModel('/path/to/model/foo/files'),\n    'model_bar': MyNLIModel('/path/to/model/bar/files'),\n  }\n  lit_demo = lit.Server(models, datasets, port=4321)\n  lit_demo.serve()\n\nif __name__ == '__main__':\n  main()\n\nFigure A.3: Example demo script to run LIT with two NLI models and the MultiNLI (Williams et al., 2018) development sets. 
The actual model can be implemented in TensorFlow, PyTorch, C++, a REST API, or anything that can be wrapped in a Python class: to work with LIT, users need only define the spec fields and implement a predict() function which returns a dict of NumPy arrays for each input datapoint. The dataset loader is even simpler; a complete implementation is given above to read from a TSV file, but libraries like TensorFlow Datasets can also be used.",
"cite_spans": [
{
"start": 13,
"end": 55,
"text": "['entailment', 'neutral', 'contradiction']",
"ref_id": null
},
{
"start": 1411,
"end": 1418,
"text": "[START]",
"ref_id": null
},
{
"start": 1858,
"end": 1881,
"text": "(Williams et al., 2018)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [
{
"start": 1777,
"end": 1786,
"text": "Figure A.",
"ref_id": null
}
],
"eq_spans": [],
"section": "117",
"sec_num": null
}
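The contract described above can be sketched without any ML framework: declare spec fields and implement predict() yielding a dict of NumPy arrays per example. ToyNLIModel below, including its simplified spec encoding and token-overlap scoring rule, is purely illustrative and does not use the real lit_nlp API.

```python
import numpy as np

NLI_LABELS = ['entailment', 'neutral', 'contradiction']

class ToyNLIModel:
    """Illustrative stand-in for a LIT model wrapper: declares input and
    output specs and returns a dict of NumPy arrays per example."""

    def input_spec(self):
        # Simplified stand-in for lit_types field declarations.
        return {'premise': 'TextSegment', 'hypothesis': 'TextSegment'}

    def output_spec(self):
        return {'probas': ('MulticlassPreds', NLI_LABELS)}

    def predict(self, inputs):
        # A real wrapper would call TensorFlow, PyTorch, or a REST API here;
        # this toy scores token overlap between premise and hypothesis.
        for ex in inputs:
            overlap = len(set(ex['premise'].split()) & set(ex['hypothesis'].split()))
            logits = np.array([overlap, 1.0, -overlap], dtype=float)
            probas = np.exp(logits) / np.exp(logits).sum()
            yield {'probas': probas}
```

A real wrapper would replace the toy scoring with a call into the underlying model, keeping the same spec/predict surface so LIT's components can remain framework-agnostic.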
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "TensorFlow: Large-scale machine learning on heterogeneous systems",
"authors": [
{
"first": "Mart\u00edn",
"middle": [],
"last": "Abadi",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Agarwal",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Barham",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Brevdo",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Craig",
"middle": [],
"last": "Citro",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Davis",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
},
{
"first": "Matthieu",
"middle": [],
"last": "Devin",
"suffix": ""
},
{
"first": "Sanjay",
"middle": [],
"last": "Ghemawat",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "Goodfellow",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Harp",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Irving",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Isard",
"suffix": ""
},
{
"first": "Yangqing",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Rafal",
"middle": [],
"last": "Jozefowicz",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Manjunath",
"middle": [],
"last": "Kudlur ; Martin Wicke",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Xiaoqiang",
"middle": [],
"last": "Zheng",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mart\u00edn Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dandelion Man\u00e9, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Vi\u00e9gas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. 2015. TensorFlow: Large-scale machine learning on heterogeneous systems. Software available from tensorflow.org.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Fairsight: Visual analytics for fairness in decision making",
"authors": [
{
"first": "Yongsu",
"middle": [],
"last": "Ahn",
"suffix": ""
},
{
"first": "Yu-Ru",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2019,
"venue": "IEEE Transactions on Visualization and Computer Graphics",
"volume": "",
"issue": "",
"pages": "1--1",
"other_ids": {
"DOI": [
"10.1109/tvcg.2019.2934262"
]
},
"num": null,
"urls": [],
"raw_text": "Yongsu Ahn and Yu-Ru Lin. 2019. Fairsight: Visual analytics for fairness in decision making. IEEE Transactions on Visualization and Computer Graphics, page 1-1.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Near-optimal hashing algorithms for approximate nearest neighbor in high dimensions",
"authors": [
{
"first": "Alexandr",
"middle": [],
"last": "Andoni",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Indyk",
"suffix": ""
}
],
"year": 2006,
"venue": "47th annual IEEE symposium on foundations of computer science (FOCS'06)",
"volume": "",
"issue": "",
"pages": "459--468",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexandr Andoni and Piotr Indyk. 2006. Near-optimal hashing algorithms for approximate nearest neighbor in high dimensions. In 2006 47th annual IEEE symposium on foundations of computer science (FOCS'06), pages 459-468. IEEE.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Paraphrasing with bilingual parallel corpora",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Bannard",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05)",
"volume": "",
"issue": "",
"pages": "597--604",
"other_ids": {
"DOI": [
"10.3115/1219840.1219914"
]
},
"num": null,
"urls": [],
"raw_text": "Colin Bannard and Chris Callison-Burch. 2005. Paraphrasing with bilingual parallel corpora. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05), pages 597-604, Ann Arbor, Michigan. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Interpretability and analysis in neural NLP",
"authors": [
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Gehrmann",
"suffix": ""
},
{
"first": "Ellie",
"middle": [],
"last": "Pavlick",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: Tutorial Abstracts",
"volume": "",
"issue": "",
"pages": "1--5",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yonatan Belinkov, Sebastian Gehrmann, and Ellie Pavlick. 2020. Interpretability and analysis in neural NLP. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: Tutorial Abstracts, pages 1-5, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Domain adaptation of natural language processing systems",
"authors": [
{
"first": "John",
"middle": [],
"last": "Blitzer",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "1--106",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Blitzer and Fernando Pereira. 2007. Domain adaptation of natural language processing systems. University of Pennsylvania, pages 1-106.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Man is to computer programmer as woman is to homemaker? debiasing word embeddings",
"authors": [
{
"first": "Tolga",
"middle": [],
"last": "Bolukbasi",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "James",
"suffix": ""
},
{
"first": "Venkatesh",
"middle": [],
"last": "Zou",
"suffix": ""
},
{
"first": "Adam",
"middle": [
"T"
],
"last": "Saligrama",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kalai",
"suffix": ""
}
],
"year": 2016,
"venue": "Advances in Neural Information Processing Systems",
"volume": "29",
"issue": "",
"pages": "4349--4357",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In Advances in Neural Information Processing Systems 29, pages 4349-4357.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A large annotated corpus for learning natural language inference",
"authors": [
{
"first": "R",
"middle": [],
"last": "Samuel",
"suffix": ""
},
{
"first": "Gabor",
"middle": [],
"last": "Bowman",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Angeli",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Potts",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "632--642",
"other_ids": {
"DOI": [
"10.18653/v1/D15-1075"
]
},
"num": null,
"urls": [],
"raw_text": "Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632-642, Lisbon, Portugal. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Fairvis: Visual analytics for discovering intersectional bias in machine learning",
"authors": [
{
"first": "Angel",
"middle": [
"Alexander"
],
"last": "Cabrera",
"suffix": ""
},
{
"first": "Will",
"middle": [],
"last": "Epperson",
"suffix": ""
},
{
"first": "Fred",
"middle": [],
"last": "Hohman",
"suffix": ""
},
{
"first": "Minsuk",
"middle": [],
"last": "Kahng",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Morgenstern",
"suffix": ""
},
{
"first": "Duen Horng",
"middle": [],
"last": "Chau",
"suffix": ""
}
],
"year": 2019,
"venue": "2019 IEEE Conference on Visual Analytics Science and Technology (VAST)",
"volume": "",
"issue": "",
"pages": "46--56",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Angel Alexander Cabrera, Will Epperson, Fred Hohman, Minsuk Kahng, Jamie Morgenstern, and Duen Horng Chau. 2019. Fairvis: Visual analytics for discovering intersectional bias in machine learning. In 2019 IEEE Conference on Visual Analytics Science and Technology (VAST), pages 46-56. IEEE.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Semantics derived automatically from language corpora contain human-like biases",
"authors": [
{
"first": "Aylin",
"middle": [],
"last": "Caliskan",
"suffix": ""
},
{
"first": "Joanna",
"middle": [
"J"
],
"last": "Bryson",
"suffix": ""
},
{
"first": "Arvind",
"middle": [],
"last": "Narayanan",
"suffix": ""
}
],
"year": 2017,
"venue": "Science",
"volume": "356",
"issue": "6334",
"pages": "183--186",
"other_ids": {
"DOI": [
"10.1126/science.aal4230"
]
},
"num": null,
"urls": [],
"raw_text": "Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183-186.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The pascal recognising textual entailment challenge",
"authors": [
{
"first": "Oren",
"middle": [],
"last": "Ido Dagan",
"suffix": ""
},
{
"first": "Bernardo",
"middle": [],
"last": "Glickman",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Magnini",
"suffix": ""
}
],
"year": 2006,
"venue": "Machine Learning Challenges. Evaluating Predictive Uncertainty, Visual Object Classification, and Recognising Tectual Entailment",
"volume": "",
"issue": "",
"pages": "177--190",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The pascal recognising textual entailment challenge. In Machine Learning Challenges. Evaluating Predictive Uncertainty, Visual Object Classification, and Recognising Tectual Entailment, pages 177-190, Berlin, Heidelberg. Springer Berlin Heidelberg.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "HotFlip: White-box adversarial examples for text classification",
"authors": [
{
"first": "Javid",
"middle": [],
"last": "Ebrahimi",
"suffix": ""
},
{
"first": "Anyi",
"middle": [],
"last": "Rao",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Lowd",
"suffix": ""
},
{
"first": "Dejing",
"middle": [],
"last": "Dou",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "31--36",
"other_ids": {
"DOI": [
"10.18653/v1/P18-2006"
]
},
"num": null,
"urls": [],
"raw_text": "Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. 2018. HotFlip: White-box adversarial examples for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 31-36, Melbourne, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "AllenNLP: A deep semantic natural language processing platform",
"authors": [
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Joel",
"middle": [],
"last": "Grus",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Oyvind",
"middle": [],
"last": "Tafjord",
"suffix": ""
},
{
"first": "Pradeep",
"middle": [],
"last": "Dasigi",
"suffix": ""
},
{
"first": "Nelson",
"middle": [
"F"
],
"last": "Liu",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Schmitz",
"suffix": ""
},
{
"first": "Luke",
"middle": [
"S"
],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Peters, Michael Schmitz, and Luke S. Zettlemoyer. 2017. AllenNLP: A deep semantic natural language processing platform.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Teaching machines to read and comprehend",
"authors": [
{
"first": "Karl",
"middle": [],
"last": "Moritz Hermann",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Kocisky",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "Lasse",
"middle": [],
"last": "Espeholt",
"suffix": ""
},
{
"first": "Will",
"middle": [],
"last": "Kay",
"suffix": ""
},
{
"first": "Mustafa",
"middle": [],
"last": "Suleyman",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in Neural Information Processing Systems",
"volume": "28",
"issue": "",
"pages": "1693--1701",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems 28, pages 1693-1701.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "2020. exBERT: A visual analysis tool to explore learned representations in Transformer models",
"authors": [
{
"first": "Benjamin",
"middle": [],
"last": "Hoover",
"suffix": ""
},
{
"first": "Hendrik",
"middle": [],
"last": "Strobelt",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Gehrmann",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations",
"volume": "",
"issue": "",
"pages": "187--196",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benjamin Hoover, Hendrik Strobelt, and Sebastian Gehrmann. 2020. exBERT: A visual analysis tool to explore learned representations in Transformer models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 187-196, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Ontonotes: The 90% solution",
"authors": [
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Mitchell",
"middle": [],
"last": "Marcus",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Lance",
"middle": [],
"last": "Ramshaw",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Weischedel",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers, NAACL-Short '06",
"volume": "",
"issue": "",
"pages": "57--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eduard Hovy, Mitchell Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel. 2006. Ontonotes: The 90% solution. In Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers, NAACL-Short '06, pages 57-60, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Adversarial examples for evaluating reading comprehension systems",
"authors": [
{
"first": "Robin",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2021--2031",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1215"
]
},
"num": null,
"urls": [],
"raw_text": "Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2021-2031, Copenhagen, Denmark. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Visualizing and understanding neural models in NLP",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xinlei",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "681--691",
"other_ids": {
"DOI": [
"10.18653/v1/N16-1082"
]
},
"num": null,
"urls": [],
"raw_text": "Jiwei Li, Xinlei Chen, Eduard Hovy, and Dan Jurafsky. 2016. Visualizing and understanding neural models in NLP. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 681-691, San Diego, California. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP",
"authors": [
{
"first": "Tal",
"middle": [],
"last": "Linzen",
"suffix": ""
},
{
"first": "Grzegorz",
"middle": [],
"last": "Chrupa\u0142a",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "Dieuwke",
"middle": [],
"last": "Hupkes",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tal Linzen, Grzegorz Chrupa\u0142a, Yonatan Belinkov, and Dieuwke Hupkes, editors. 2019. Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP. Association for Computational Linguistics, Florence, Italy.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Mccoy",
"suffix": ""
},
{
"first": "Ellie",
"middle": [],
"last": "Pavlick",
"suffix": ""
},
{
"first": "Tal",
"middle": [],
"last": "Linzen",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3428--3448",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1334"
]
},
"num": null,
"urls": [],
"raw_text": "Tom McCoy, Ellie Pavlick, and Tal Linzen. 2019. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3428-3448, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Umap: Uniform manifold approximation and projection for dimension reduction",
"authors": [
{
"first": "Leland",
"middle": [],
"last": "Mcinnes",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Healy",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Melville",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1802.03426"
]
},
"num": null,
"urls": [],
"raw_text": "Leland McInnes, John Healy, and James Melville. 2018. Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Introducing tensorflow model analysis: Scaleable, sliced, and full-pass metrics",
"authors": [
{
"first": "Clemens",
"middle": [],
"last": "Mewald",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Clemens Mewald. 2019. Introducing tensorflow model analysis: Scaleable, sliced, and full-pass metrics. https://blog.tensorflow.org/2018/03/introducing-tensorflow-model-analysis.html.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Explaining machine learning classifiers through diverse counterfactual explanations",
"authors": [
{
"first": "Amit",
"middle": [],
"last": "Ramaravind K Mothilal",
"suffix": ""
},
{
"first": "Chenhao",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Tan",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency",
"volume": "",
"issue": "",
"pages": "607--617",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ramaravind K Mothilal, Amit Sharma, and Chenhao Tan. 2020. Explaining machine learning classifiers through diverse counterfactual explanations. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pages 607-617.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "InterpretML: A unified framework for machine learning interpretability",
"authors": [
{
"first": "Harsha",
"middle": [],
"last": "Nori",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Jenkins",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Koch",
"suffix": ""
},
{
"first": "Rich",
"middle": [],
"last": "Caruana",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.09223"
]
},
"num": null,
"urls": [],
"raw_text": "Harsha Nori, Samuel Jenkins, Paul Koch, and Rich Caruana. 2019. InterpretML: A unified framework for machine learning interpretability. arXiv preprint arXiv:1909.09223.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Pytorch: An imperative style, high-performance deep learning library",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Paszke",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Massa",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lerer",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Chanan",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Killeen",
"suffix": ""
},
{
"first": "Zeming",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Gimelshein",
"suffix": ""
},
{
"first": "Luca",
"middle": [],
"last": "Antiga",
"suffix": ""
},
{
"first": "Alban",
"middle": [],
"last": "Desmaison",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Kopf",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zachary",
"middle": [],
"last": "DeVito",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Raison",
"suffix": ""
},
{
"first": "Alykhan",
"middle": [],
"last": "Tejani",
"suffix": ""
},
{
"first": "Sasank",
"middle": [],
"last": "Chilamkurthy",
"suffix": ""
},
{
"first": "Benoit",
"middle": [],
"last": "Steiner",
"suffix": ""
},
{
"first": "Lu",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Junjie",
"middle": [],
"last": "Bai",
"suffix": ""
},
{
"first": "Soumith",
"middle": [],
"last": "Chintala",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "32",
"issue": "",
"pages": "8024--8035",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Te- jani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch: An imperative style, high-performance deep learn- ing library. In Advances in Neural Information Pro- cessing Systems 32, pages 8024-8035.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Hypothesis only baselines in natural language inference",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Poliak",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Naradowsky",
"suffix": ""
},
{
"first": "Aparajita",
"middle": [],
"last": "Haldar",
"suffix": ""
},
{
"first": "Rachel",
"middle": [],
"last": "Rudinger",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics",
"volume": "",
"issue": "",
"pages": "180--191",
"other_ids": {
"DOI": [
"10.18653/v1/S18-2023"
]
},
"num": null,
"urls": [],
"raw_text": "Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, and Benjamin Van Durme. 2018. Hypothesis only baselines in natural language in- ference. In Proceedings of the Seventh Joint Con- ference on Lexical and Computational Semantics, pages 180-191, New Orleans, Louisiana. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Language models are unsupervised multitask learners",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multi- task learners.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Exploring the limits of transfer learning with a unified text-to",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Raffel",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Roberts",
"suffix": ""
},
{
"first": "Katherine",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Sharan",
"middle": [],
"last": "Narang",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Matena",
"suffix": ""
},
{
"first": "Yanqi",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"J"
],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text trans- former. arXiv e-prints.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "why should I trust you?\": Explaining the predictions of any classifier",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Ribeiro",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Guestrin",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations",
"volume": "",
"issue": "",
"pages": "97--101",
"other_ids": {
"DOI": [
"10.18653/v1/N16-3020"
]
},
"num": null,
"urls": [],
"raw_text": "Marco Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. \"why should I trust you?\": Explaining the pre- dictions of any classifier. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Demon- strations, pages 97-101, San Diego, California. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Beyond accuracy: Behavioral testing of NLP models with CheckList",
"authors": [
{
"first": "Marco",
"middle": [
"Tulio"
],
"last": "Ribeiro",
"suffix": ""
},
{
"first": "Tongshuang",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Guestrin",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4902--4912",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin, and Sameer Singh. 2020. Beyond accuracy: Be- havioral testing of NLP models with CheckList. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 4902- 4912, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Gender bias in coreference resolution",
"authors": [
{
"first": "Rachel",
"middle": [],
"last": "Rudinger",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Naradowsky",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Leonard",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "8--14",
"other_ids": {
"DOI": [
"10.18653/v1/N18-2002"
]
},
"num": null,
"urls": [],
"raw_text": "Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. 2018. Gender bias in coreference resolution. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 8-14, New Orleans, Louisiana. Association for Computational Linguistics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Embedding projector: Interactive visualization and interpretation of embeddings",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Smilkov",
"suffix": ""
},
{
"first": "Nikhil",
"middle": [],
"last": "Thorat",
"suffix": ""
},
{
"first": "Charles",
"middle": [],
"last": "Nicholson",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Reif",
"suffix": ""
},
{
"first": "Fernanda",
"middle": [
"B"
],
"last": "Vi\u00e9gas",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Wattenberg",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Smilkov, Nikhil Thorat, Charles Nicholson, Emily Reif, Fernanda B Vi\u00e9gas, and Martin Watten- berg. 2016. Embedding projector: Interactive visu- alization and interpretation of embeddings. In NIPS 115",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Recursive deep models for semantic compositionality over a sentiment treebank",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Perelygin",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Chuang",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Potts",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1631--1642",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment tree- bank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631-1642, Seattle, Washington, USA. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Seq2seq-vis: A visual debugging tool for sequence-to-sequence models",
"authors": [
{
"first": "Hendrik",
"middle": [],
"last": "Strobelt",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Gehrmann",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Behrisch",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Perer",
"suffix": ""
},
{
"first": "Hanspeter",
"middle": [],
"last": "Pfister",
"suffix": ""
},
{
"first": "Alexander M",
"middle": [],
"last": "Rush",
"suffix": ""
}
],
"year": 2018,
"venue": "IEEE transactions on visualization and computer graphics",
"volume": "25",
"issue": "1",
"pages": "353--363",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hendrik Strobelt, Sebastian Gehrmann, Michael Behrisch, Adam Perer, Hanspeter Pfister, and Alexander M Rush. 2018. Seq2seq-vis: A visual debugging tool for sequence-to-sequence models. IEEE transactions on visualization and computer graphics, 25(1):353-363.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "LSTMvis: A tool for visual analysis of hidden state dynamics in recurrent neural networks",
"authors": [
{
"first": "Hendrik",
"middle": [],
"last": "Strobelt",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Gehrmann",
"suffix": ""
},
{
"first": "Hanspeter",
"middle": [],
"last": "Pfister",
"suffix": ""
},
{
"first": "Alexander M",
"middle": [],
"last": "Rush",
"suffix": ""
}
],
"year": 2017,
"venue": "IEEE transactions on visualization and computer graphics",
"volume": "24",
"issue": "1",
"pages": "667--676",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hendrik Strobelt, Sebastian Gehrmann, Hanspeter Pfis- ter, and Alexander M Rush. 2017. LSTMvis: A tool for visual analysis of hidden state dynamics in recur- rent neural networks. IEEE transactions on visual- ization and computer graphics, 24(1):667-676.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Axiomatic attribution for deep networks",
"authors": [
{
"first": "Mukund",
"middle": [],
"last": "Sundararajan",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Taly",
"suffix": ""
},
{
"first": "Qiqi",
"middle": [],
"last": "Yan",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 34th International Conference on Machine Learning",
"volume": "70",
"issue": "",
"pages": "3319--3328",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mukund Sundararajan, Ankur Taly, and Qiqi Yan. 2017. Axiomatic attribution for deep networks. In Pro- ceedings of the 34th International Conference on Machine Learning, volume 70, pages 3319-3328. PMLR.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Analyzing the structure of attention in a transformer language model",
"authors": [
{
"first": "Jesse",
"middle": [],
"last": "Vig",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "63--76",
"other_ids": {
"DOI": [
"10.18653/v1/W19-4808"
]
},
"num": null,
"urls": [],
"raw_text": "Jesse Vig and Yonatan Belinkov. 2019. Analyzing the structure of attention in a transformer language model. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 63-76, Florence, Italy. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "AllenNLP interpret: A framework for explaining predictions of NLP models",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Wallace",
"suffix": ""
},
{
"first": "Jens",
"middle": [],
"last": "Tuyls",
"suffix": ""
},
{
"first": "Junlin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Sanjay",
"middle": [],
"last": "Subramanian",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): System Demonstrations",
"volume": "",
"issue": "",
"pages": "7--12",
"other_ids": {
"DOI": [
"10.18653/v1/D19-3002"
]
},
"num": null,
"urls": [],
"raw_text": "Eric Wallace, Jens Tuyls, Junlin Wang, Sanjay Sub- ramanian, Matt Gardner, and Sameer Singh. 2019. AllenNLP interpret: A framework for explaining predictions of NLP models. In Proceedings of the 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): System Demonstrations, pages 7-12, Hong Kong, China. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
}
],
"year": 2019,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. GLUE: A multi-task benchmark and analysis plat- form for natural language understanding. In Inter- national Conference on Learning Representations.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "2020. The what-if tool: Interactive probing of machine learning models",
"authors": [
{
"first": "J",
"middle": [],
"last": "Wexler",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Pushkarna",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Bolukbasi",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Wattenberg",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Vi\u00e9gas",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Wilson",
"suffix": ""
}
],
"year": null,
"venue": "IEEE Transactions on Visualization and Computer Graphics",
"volume": "26",
"issue": "1",
"pages": "56--65",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Wexler, M. Pushkarna, T. Bolukbasi, M. Wattenberg, F. Vi\u00e9gas, and J. Wilson. 2020. The what-if tool: In- teractive probing of machine learning models. IEEE Transactions on Visualization and Computer Graph- ics, 26(1):56-65.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "A broad-coverage challenge corpus for sentence understanding through inference",
"authors": [
{
"first": "Adina",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Nikita",
"middle": [],
"last": "Nangia",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1112--1122",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sen- tence understanding through inference. In Proceed- ings of the 2018 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122. Association for Computational Linguistics.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Errudite: Scalable, reproducible, and testable error analysis",
"authors": [
{
"first": "Tongshuang",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Marco",
"middle": [
"Tulio"
],
"last": "Ribeiro",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Heer",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Weld",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "747--763",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1073"
]
},
"num": null,
"urls": [],
"raw_text": "Tongshuang Wu, Marco Tulio Ribeiro, Jeffrey Heer, and Daniel Weld. 2019. Errudite: Scalable, repro- ducible, and testable error analysis. In Proceed- ings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 747-763, Flo- rence, Italy. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Manifold: A model-agnostic framework for interpretation and diagnosis of machine learning models",
"authors": [
{
"first": "Jiawei",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Piero",
"middle": [],
"last": "Molino",
"suffix": ""
},
{
"first": "Lezhi",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Ebert",
"suffix": ""
}
],
"year": 2018,
"venue": "IEEE Transactions on Visualization and Computer Graphics",
"volume": "",
"issue": "",
"pages": "1--1",
"other_ids": {
"DOI": [
"10.1109/TVCG.2018.2864499"
]
},
"num": null,
"urls": [],
"raw_text": "Jiawei Zhang, Yang Wang, Piero Molino, Lezhi Li, and David Ebert. 2018. Manifold: A model-agnostic framework for interpretation and diagnosis of ma- chine learning models. IEEE Transactions on Visu- alization and Computer Graphics, PP:1-1.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "The LIT UI, showing a fine-tuned BERT(Devlin et al., 2019) model on the Stanford Sentiment Treebank(Socher et al., 2013) development set. The top half shows a selection toolbar, and, left-to-right: the embedding projector, the data table, and the datapoint editor. Tabs present different modules in the bottom half; the view above shows classifier predictions, an attention visualization, and a confusion matrix.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF1": {
"text": "Exploring a coreference model on the Winogender dataset.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF2": {
"text": "Investigating a local generation error, from selection of an interesting example to finding relevant training datapoints that led to an error.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF3": {
"text": "A.4: Full UI screenshot, showing a BERT(Devlin et al., 2019) model on a sample from the \"matched\" split of the MultiNLI(Williams et al., 2018) development set. The embedding projector (top left) shows three clusters, corresponding to the output layer of the model, and colored by the true label. On the bottom, the metrics table shows accuracy scores faceted by genre, and a confusion matrix shows the model predictions against the gold labels.",
"type_str": "figure",
"uris": null,
"num": null
},
"TABREF0": {
"html": null,
"num": null,
"text": "Built-in modules in the Language Interpretability Tool.",
"type_str": "table",
"content": "<table/>"
}
}
}
}