{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T02:13:34.016482Z"
},
"title": "A Survey of the State of Explainable AI for Natural Language Processing",
"authors": [
{
"first": "Marina",
"middle": [],
"last": "Danilevsky",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IBM Research -Almadem",
"location": {}
},
"email": ""
},
{
"first": "Kun",
"middle": [],
"last": "Qian",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IBM Research -Almadem",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Ranit",
"middle": [],
"last": "Aharonov",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IBM Research -Almadem",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Yannis",
"middle": [],
"last": "Katsis",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IBM Research -Almadem",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Prithviraj",
"middle": [],
"last": "Sen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IBM Research -Almadem",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Recent years have seen important advances in the quality of state-of-the-art models, but this has come at the expense of models becoming less interpretable. This survey presents an overview of the current state of Explainable AI (XAI), considered within the domain of Natural Language Processing (NLP). We discuss the main categorization of explanations, as well as the various ways explanations can be arrived at and visualized. We detail the operations and explainability techniques currently available for generating explanations for NLP model predictions, to serve as a resource for model developers in the community. Finally, we point out the current gaps and encourage directions for future work in this important research area.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Recent years have seen important advances in the quality of state-of-the-art models, but this has come at the expense of models becoming less interpretable. This survey presents an overview of the current state of Explainable AI (XAI), considered within the domain of Natural Language Processing (NLP). We discuss the main categorization of explanations, as well as the various ways explanations can be arrived at and visualized. We detail the operations and explainability techniques currently available for generating explanations for NLP model predictions, to serve as a resource for model developers in the community. Finally, we point out the current gaps and encourage directions for future work in this important research area.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Traditionally, Natural Language Processing (NLP) systems have been mostly based on techniques that are inherently explainable. Examples of such approaches, often referred to as white box techniques, include rules, decision trees, hidden Markov models, logistic regressions, and others. Recent years, though, have brought the advent and popularity of black box techniques, such as deep learning models and the use of language embeddings as features. While these methods in many cases substantially advance model quality, they come at the expense of models becoming less interpretable. This obfuscation of the process by which a model arrives at its results can be problematic, as it may erode trust in the many AI systems humans interact with daily (e.g., chatbots, recommendation systems, information retrieval algorithms, and many others). In the broader AI community, this growing understanding of the importance of explainability has created an emerging field called Explainable AI (XAI). However, just as tasks in different fields are more amenable to particular approaches, explainability must also be considered within the context of each discipline. We therefore focus this survey on XAI works in the domain of NLP, as represented in the main NLP conferences in the last seven years. This is, to the best of our knowledge, the first XAI survey focusing on the NLP domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As will become clear in this survey, explainability is in itself a term that requires an explanation. While explainability may generally serve many purposes (see, e.g., Lertvittayakumjorn and Toni, 2019) , our focus is on explainability from the perspective of an end user whose goal is to understand how a model arrives at its result, also referred to as the outcome explanation problem (Guidotti et al., 2018) . In this regard, explanations can help users of NLP-based AI systems build trust in these systems' predictions. Additionally, understanding the model's operation may also allow users to provide useful feedback, which in turn can help developers improve model quality (Adadi and Berrada, 2018) .",
"cite_spans": [
{
"start": 169,
"end": 203,
"text": "Lertvittayakumjorn and Toni, 2019)",
"ref_id": "BIBREF36"
},
{
"start": 388,
"end": 411,
"text": "(Guidotti et al., 2018)",
"ref_id": "BIBREF23"
},
{
"start": 680,
"end": 705,
"text": "(Adadi and Berrada, 2018)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Explanations of model predictions have previously been categorized in a fairly simple way that differentiates between (1) whether the explanation is for each prediction individually or the model's prediction process as a whole, and (2) determining whether generating the explanation requires post-processing or not (see Section 3). However, although rarely studied, there are many additional characterizations of explanations, the most important being the techniques used to either generate or visualize explanations. In this survey, we analyze the NLP literature with respect to both these dimensions and identify the most commonly used explainability and visualization techniques, in addition to operations used to generate explanations (Sections 4.1-Section 4.3). We briefly describe each technique and point to representative papers adopting it. Finally, we discuss the common evaluation techniques used to measure the quality of explanations (Section 5), and conclude with a discussion of gaps and challenges in developing success-ful explainability approaches in the NLP domain (Section 6). Related Surveys: Earlier surveys on XAI include Adadi and Berrada (2018) and Guidotti et al. (2018) . While Adadi and Berrada provide a comprehensive review of basic terminology and fundamental concepts relevant to XAI in general, our goal is to survey more recent works in NLP in an effort to understand how these achieve XAI and how well they achieve it. Guidotti et al. adopt a four dimensional classification scheme to rate various approaches. Crucially, they differentiate between the \"explanator\" and the black-box model it explains. This makes most sense when a surrogate model is used to explain a black-box model. As we shall subsequently see, such a distinction applies less well to the majority of NLP works published in the past few years where the same neural network (NN) can be used not only to make predictions but also to derive explanations. In a series of tutorials, Lecue et al. (2020) discuss fairness and trust in machine learning (ML) that are clearly related to XAI but not the focus of this survey. Finally, we adapt some nomenclature from Arya et al. (2019) which presents a software toolkit that can help users lend explainability to their models and ML pipelines.",
"cite_spans": [
{
"start": 1145,
"end": 1169,
"text": "Adadi and Berrada (2018)",
"ref_id": "BIBREF1"
},
{
"start": 1174,
"end": 1196,
"text": "Guidotti et al. (2018)",
"ref_id": "BIBREF23"
},
{
"start": 1983,
"end": 2002,
"text": "Lecue et al. (2020)",
"ref_id": "BIBREF35"
},
{
"start": 2162,
"end": 2180,
"text": "Arya et al. (2019)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our goal for this survey is to: (1) provide the reader with a better understanding of the state of XAI in NLP, (2) point developers interested in building explainable NLP models to currently available techniques, and (3) bring to the attention of the research community the gaps that exist; mainly a lack of formal definitions and evaluation for explainability. We have also built an interactive website providing interested readers with all relevant aspects for every paper covered in this survey. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We identified relevant papers (see Appendix A) and classified them based on the aspects defined in Sections 3 and 4. To ensure a consistent classification, each paper was individually analyzed by at least two reviewers, consulting additional reviewers in the case of disagreement. For simplicity of presentation, we label each paper with its main applicable category for each aspect, though some papers may span multiple categories (usually with varying degrees of emphasis.) All relevant aspects for every paper covered in this survey can be found at the aforementioned website; to enable readers of this survey to discover interesting explainability techniques and ideas, even if they have not been fully developed in the respective publications.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "2"
},
{
"text": "Explanations are often categorized along two main aspects (Guidotti et al., 2018; Adadi and Berrada, 2018) . The first distinguishes whether the explanation is for an individual prediction (local) or the model's prediction process as a whole (global). The second differentiates between the explanation emerging directly from the prediction process (selfexplaining) versus requiring post-processing (posthoc). We next describe both of these aspects in detail, and provide a summary of the four categories they induce in Table 1 .",
"cite_spans": [
{
"start": 58,
"end": 81,
"text": "(Guidotti et al., 2018;",
"ref_id": "BIBREF23"
},
{
"start": 82,
"end": 106,
"text": "Adadi and Berrada, 2018)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 519,
"end": 526,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Categorization of Explanations",
"sec_num": "3"
},
{
"text": "A local explanation provides information or justification for the model's prediction on a specific input; 46 of the 50 papers fall into this category.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Local vs Global",
"sec_num": "3.1"
},
{
"text": "A global explanation provides similar justification by revealing how the model's predictive process works, independently of any particular input. This category holds the remaining 4 papers covered by this survey. This low number is not surprising given the focus of this survey being on explanations that justify predictions, as opposed to explanations that help understand a model's behavior in general (which lie outside the scope of this survey).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Local vs Global",
"sec_num": "3.1"
},
{
"text": "Regardless of whether the explanation is local or global, explanations differ on whether they arise as part of the prediction process, or whether their generation requires post-processing following the model making a prediction. A self-explaining approach, which may also be referred to as directly interpretable (Arya et al., 2019) , generates the explanation at the same time as the prediction, using information emitted by the model as a result of the process of making that prediction. Decision trees and rule-based models are examples of global self-explaining models, while feature saliency approaches such as attention are examples of local self-explaining models.",
"cite_spans": [
{
"start": 313,
"end": 332,
"text": "(Arya et al., 2019)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Self-Explaining vs Post-Hoc",
"sec_num": "3.2"
},
{
"text": "In contrast, a post-hoc approach requires that an additional operation is performed after the predictions are made. LIME (Ribeiro et al., 2016) is an example of producing a local explanation using a surrogate model applied following the predictor's operation. A paper might also be considered to span both categories -for example, (Sydorova et al., 2019) actually presents both self-explaining and post-hoc explanation techniques. ",
"cite_spans": [
{
"start": 116,
"end": 143,
"text": "LIME (Ribeiro et al., 2016)",
"ref_id": null
},
{
"start": 331,
"end": 354,
"text": "(Sydorova et al., 2019)",
"ref_id": "BIBREF67"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Self-Explaining vs Post-Hoc",
"sec_num": "3.2"
},
{
"text": "While the previous categorization serves as a convenient high-level classification of explanations, it does not cover other important characteristics. We now introduce two additional aspects of explanations: (1) techniques for deriving the explanation and (2) presentation to the end user. We discuss the most commonly used explainability techniques, along with basic operations that enable explainability, as well as the visualization techniques commonly used to present the output of associated explainability techniques. We identify the most common combinations of explainability techniques, operations, and visualization techniques for each of the four high-level categories of explanations presented above, and summarize them, together with representative papers, in Table 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 772,
"end": 779,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Aspects of Explanations",
"sec_num": "4"
},
{
"text": "Although explainability techniques and visualizations are often intermixed, there are fundamental differences between them that motivated us to treat them separately. Concretely, explanation derivation -typically done by AI scientists and engineers -focuses on mathematically motivated justifications of models' output, leveraging various explainability techniques to produce \"raw explanations\" (such as attention scores). On the other hand, explanation presentation -ideally done by UX engineersfocuses on how these \"raw explanations\" are best presented to the end users using suitable visualization techniques (such as saliency heatmaps).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Aspects of Explanations",
"sec_num": "4"
},
{
"text": "In the papers surveyed, we identified five major explainability techniques that differ in the mechanisms they adopt to generate the raw mathematical justifications that lead to the final explanation presented to the end users.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explainability Techniques",
"sec_num": "4.1"
},
{
"text": "Feature importance. The main idea is to derive explanation by investigating the importance scores of different features used to output the final prediction. Such approaches can be built on different types of features, such as manual features obtained from feature engineering (e.g., Voskarides et al., 2015) , lexical features including word/tokens and n-gram (e.g., Godin et al., 2018; Mullenbach et al., 2018) , or latent features learned by NNs (e.g., Xie et al., 2017) . Attention mechanism (Bahdanau et al., 2015) and first-derivative saliency (Li et al., 2015) are two widely used operations to enable feature importance-based explanations. Text-based features are inherently more interpretable by humans than general features, which may explain the widespread use of attention-based approaches in the NLP domain.",
"cite_spans": [
{
"start": 283,
"end": 307,
"text": "Voskarides et al., 2015)",
"ref_id": "BIBREF71"
},
{
"start": 367,
"end": 386,
"text": "Godin et al., 2018;",
"ref_id": "BIBREF22"
},
{
"start": 387,
"end": 411,
"text": "Mullenbach et al., 2018)",
"ref_id": "BIBREF46"
},
{
"start": 455,
"end": 472,
"text": "Xie et al., 2017)",
"ref_id": "BIBREF74"
},
{
"start": 495,
"end": 518,
"text": "(Bahdanau et al., 2015)",
"ref_id": "BIBREF8"
},
{
"start": 549,
"end": 566,
"text": "(Li et al., 2015)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Explainability Techniques",
"sec_num": "4.1"
},
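As an illustration of feature importance over lexical features, the following is a minimal sketch (not taken from any surveyed paper) that trains a bag-of-words classifier and reads per-token importance scores off the learned coefficients; the toy texts and labels are invented for the example.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy sentiment data, invented purely for illustration.
texts = ["great acting and a great plot", "dull plot and bad acting",
         "great fun", "bad and dull"]
labels = [1, 0, 1, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)
clf = LogisticRegression().fit(X, labels)

# Each coefficient is an importance score for one lexical feature.
importance = sorted(zip(vectorizer.get_feature_names_out(), clf.coef_[0]),
                    key=lambda p: abs(p[1]), reverse=True)
for token, score in importance:
    print(f"{token:>8s} {score:+.3f}")
```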
{
"text": "Surrogate model. Model predictions are explained by learning a second, usually more explainable model, as a proxy. One well-known example is LIME (Ribeiro et al., 2016), which learns surrogate models using an operation called input perturbation. Surrogate model-based approaches are model-agnostic and can be used to achieve either local (e.g., Alvarez-Melis and Jaakkola, 2017) or global (e.g., Liu et al., 2018) explanations. However, the learned surrogate models and the original models may have completely different mechanisms to make predictions, leading to concerns about the fidelity of surrogate model-based approaches.",
"cite_spans": [
{
"start": 396,
"end": 413,
"text": "Liu et al., 2018)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Explainability Techniques",
"sec_num": "4.1"
},
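The following is a minimal, hypothetical sketch of a global surrogate: a shallow decision tree is trained to mimic the labels assigned by an arbitrary black-box predictor, and the agreement between the two (fidelity) is reported. The `black_box_predict` function is a stand-in assumption, not any specific model from the surveyed papers.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                 # stand-in feature matrix

def black_box_predict(X):
    # Placeholder for an opaque model (e.g., a neural network).
    return (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0).astype(int)

y_black_box = black_box_predict(X)

# Train an interpretable surrogate to imitate the black box.
surrogate = DecisionTreeClassifier(max_depth=3).fit(X, y_black_box)

# Fidelity: how often the surrogate agrees with the black box.
fidelity = (surrogate.predict(X) == y_black_box).mean()
print(f"surrogate fidelity: {fidelity:.2%}")
print(export_text(surrogate, feature_names=["f0", "f1", "f2", "f3"]))
```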
{
"text": "Example-driven. Such approaches explain the prediction of an input instance by identifying and presenting other instances, usually from available labeled data, that are semantically similar to the input instance. They are similar in spirit to nearest neighbor-based approaches (Dudani, 1976) , and have been applied to different NLP tasks such as text classification (Croce et al., 2019) and question answering (Abujabal et al., 2017) .",
"cite_spans": [
{
"start": 277,
"end": 291,
"text": "(Dudani, 1976)",
"ref_id": "BIBREF17"
},
{
"start": 367,
"end": 387,
"text": "(Croce et al., 2019)",
"ref_id": "BIBREF15"
},
{
"start": 411,
"end": 434,
"text": "(Abujabal et al., 2017)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Explainability Techniques",
"sec_num": "4.1"
},
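A minimal sketch of an example-driven explanation, assuming sentence embeddings are already available (random vectors stand in for them here): the prediction for an input is accompanied by the most similar labeled training instances under cosine similarity.

```python
import numpy as np

rng = np.random.default_rng(0)
train_embeddings = rng.normal(size=(100, 64))       # placeholder sentence vectors
train_texts = [f"training sentence {i}" for i in range(100)]
query_embedding = rng.normal(size=64)

def cosine_similarity(M, v):
    return (M @ v) / (np.linalg.norm(M, axis=1) * np.linalg.norm(v))

# The k most similar labeled examples serve as the explanation.
k = 3
scores = cosine_similarity(train_embeddings, query_embedding)
for idx in np.argsort(scores)[::-1][:k]:
    print(f"{scores[idx]:+.3f}  {train_texts[idx]}")
```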
{
"text": "Provenance-based. Explanations are provided by illustrating some or all of the prediction derivation process, which is an intuitive and effective explainability technique when the final prediction is the result of a series of reasoning steps. We observe several question answering papers adopt such ap- Table 2 : Overview of common combinations of explanation aspects: columns 2, 3, and 4 capture explainability techniques, operations, and visualization techniques, respectively (see Sections 4.1, 4.2, and 4.3 for details). These are grouped by the high-level categories detailed in Section 3, as shown in the first column. The last two columns show the number of papers in this survey that fall within each subgroup, and a list of representative references.",
"cite_spans": [],
"ref_spans": [
{
"start": 303,
"end": 310,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Explainability Techniques",
"sec_num": "4.1"
},
{
"text": "proaches (Abujabal et al., 2017; Zhou et al., 2018; Amini et al., 2019) .",
"cite_spans": [
{
"start": 9,
"end": 32,
"text": "(Abujabal et al., 2017;",
"ref_id": "BIBREF0"
},
{
"start": 33,
"end": 51,
"text": "Zhou et al., 2018;",
"ref_id": "BIBREF76"
},
{
"start": 52,
"end": 71,
"text": "Amini et al., 2019)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Explainability Techniques",
"sec_num": "4.1"
},
{
"text": "Declarative induction. Human-readable representations, such as rules (Pr\u00f6llochs et al., 2019) , trees (Voskarides et al., 2015) , and programs (Ling et al., 2017) are induced as explanations.",
"cite_spans": [
{
"start": 69,
"end": 93,
"text": "(Pr\u00f6llochs et al., 2019)",
"ref_id": "BIBREF53"
},
{
"start": 102,
"end": 127,
"text": "(Voskarides et al., 2015)",
"ref_id": "BIBREF71"
},
{
"start": 143,
"end": 162,
"text": "(Ling et al., 2017)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Explainability Techniques",
"sec_num": "4.1"
},
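As a toy illustration of a program-style declarative explanation (in the spirit of operation-sequence explanations for math word problems, but entirely made up here), an induced program is represented as a list of named operations and executed step by step, so a user can follow each step of the derivation.

```python
# Hypothetical induced program for: "John has 3 apples and buys 4 more,
# then splits them equally between 7 friends. How many does each get?"
program = [("add", 3, 4), ("divide", "prev", 7)]

def execute(program):
    prev = None
    for op, a, b in program:
        a = prev if a == "prev" else a
        b = prev if b == "prev" else b
        prev = a + b if op == "add" else a / b
        print(f"{op}({a}, {b}) = {prev}")      # each step doubles as an explanation
    return prev

print("answer:", execute(program))
```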
{
"text": "As shown in Table 2 , feature importance-based and surrogate model-based approaches have been in frequent use (accounting for 29 and 8, respectively, of the 50 papers reviewed). This should not come as a surprise, as features serve as building blocks for machine learning models (explaining the proliferation of feature importance-based approaches) and most recent NLP papers employ NNbased models, which are generally black-box models (explaining the popularity of surrogate modelbased approaches). Finally note that a complex NLP approach consisting of different components may employ more than one of these explainability techniques. A representative example is the QA system QUINT (Abujabal et al., 2017) , which displays the query template that best matches the user input query (example-driven) as well as the instantiated knowledge-base entities (provenance).",
"cite_spans": [
{
"start": 685,
"end": 708,
"text": "(Abujabal et al., 2017)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 12,
"end": 19,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Explainability Techniques",
"sec_num": "4.1"
},
{
"text": "We now present the most common set of operations encountered in our literature review that are used to enable explainability, in conjunction with relevant work employing each one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Operations to Enable Explainability",
"sec_num": "4.2"
},
{
"text": "First-derivative saliency. Gradient-based explanations estimate the contribution of input i towards output o by computing the partial derivative of o with respect to i. This is closely related to older concepts such as sensitivity (Saltelli et al., 2008) . First-derivative saliency is particularly con-venient for NN-based models because these can be computed for any layer using a single call to auto-differentiation, which most deep learning engines provide out-of-the-box. Recent work has also proposed improvements to first-derivative saliency (Sundararajan et al., 2017) . As suggested by its name and definition, first-derivative saliency can be used to enable feature importance explainability, especially on word/token-level features (Aubakirova and Bansal, 2016; Karlekar et al., 2018) .",
"cite_spans": [
{
"start": 231,
"end": 254,
"text": "(Saltelli et al., 2008)",
"ref_id": "BIBREF61"
},
{
"start": 549,
"end": 576,
"text": "(Sundararajan et al., 2017)",
"ref_id": "BIBREF66"
},
{
"start": 743,
"end": 772,
"text": "(Aubakirova and Bansal, 2016;",
"ref_id": "BIBREF6"
},
{
"start": 773,
"end": 795,
"text": "Karlekar et al., 2018)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Operations to Enable Explainability",
"sec_num": "4.2"
},
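A minimal sketch of first-derivative saliency using PyTorch autograd on a toy embedding classifier (the model and token ids are invented for the example): the gradient of the predicted class score with respect to each token's embedding is taken as that token's importance.

```python
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):                 # toy model, illustration only
    def __init__(self, vocab=1000, dim=32, classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.out = nn.Linear(dim, classes)

    def forward(self, token_ids):
        e = self.emb(token_ids)                  # (seq_len, dim)
        return self.out(e.mean(dim=0)), e

model = TinyClassifier()
token_ids = torch.tensor([4, 17, 256, 3])

logits, embeddings = model(token_ids)
embeddings.retain_grad()                         # keep grads of a non-leaf tensor
logits[logits.argmax()].backward()               # d(predicted class score)/d(embeddings)

# First-derivative saliency: gradient magnitude per input token.
saliency = embeddings.grad.norm(dim=1)
for tok, score in zip(token_ids.tolist(), saliency.tolist()):
    print(f"token id {tok:4d}  saliency {score:.4f}")
```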
{
"text": "Layer-wise relevance propagation. This is another way to attribute relevance to features computed in any intermediate layer of an NN. Definitions are available for most common NN layers including fully connected layers, convolution layers and recurrent layers. Layer-wise relevance propagation has been used to, for example, enable feature importance explainability (Poerner et al., 2018) and example-driven explainability (Croce et al., 2018) .",
"cite_spans": [
{
"start": 366,
"end": 388,
"text": "(Poerner et al., 2018)",
"ref_id": "BIBREF52"
},
{
"start": 423,
"end": 443,
"text": "(Croce et al., 2018)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Operations to Enable Explainability",
"sec_num": "4.2"
},
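The following is a small numpy sketch of the epsilon variant of layer-wise relevance propagation for a single dense layer (one propagation step, with made-up toy values); full implementations chain such rules backwards through every layer of the network.

```python
import numpy as np

def lrp_dense_epsilon(a, W, b, R_out, eps=1e-6):
    """Epsilon-LRP rule for a single dense layer z = a @ W + b.

    a:     (d_in,)  activations entering the layer
    W:     (d_in, d_out) weights
    R_out: (d_out,) relevance assigned to the layer's outputs
    Returns the relevance redistributed onto the layer's inputs, shape (d_in,).
    """
    z = a @ W + b                              # pre-activations
    z = z + eps * np.sign(z)                   # stabilizer against near-zero division
    s = R_out / z                              # (d_out,)
    return a * (W @ s)                         # per-input contribution

# Toy usage: relevance of 4 input features for a 3-unit output layer.
rng = np.random.default_rng(0)
a = rng.random(4)
W = rng.normal(size=(4, 3))
b = np.zeros(3)
R_out = np.array([0.0, 1.0, 0.0])              # explain the second output unit
print(lrp_dense_epsilon(a, W, b, R_out))
```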
{
"text": "Input perturbations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Operations to Enable Explainability",
"sec_num": "4.2"
},
{
"text": "Pioneered by LIME (Ribeiro et al., 2016), input perturbations can explain the output for input x by generating random perturbations of x and training an explainable model (usually a linear model). They are mainly used to enable surrogate models (e.g., Ribeiro et al., 2016; Alvarez-Melis and Jaakkola, 2017).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Operations to Enable Explainability",
"sec_num": "4.2"
},
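A minimal LIME-style sketch (not the official LIME package): tokens of the input are randomly masked, the black-box `predict_proba` (assumed here to take a text string and return the positive-class probability) is queried on each perturbation, and a weighted linear model fit to the results yields local token-importance scores.

```python
import numpy as np
from sklearn.linear_model import Ridge

def lime_style_explanation(text_tokens, predict_proba, n_samples=500, seed=0):
    """Tiny LIME-style local surrogate, for illustration only."""
    rng = np.random.default_rng(seed)
    n = len(text_tokens)
    masks = rng.integers(0, 2, size=(n_samples, n))   # 1 = keep the token
    masks[0] = 1                                      # include the original text
    texts = [" ".join(t for t, m in zip(text_tokens, row) if m) for row in masks]
    y = np.array([predict_proba(t) for t in texts])   # black-box scores
    # Weight perturbed samples by similarity to the original (fraction kept).
    weights = masks.mean(axis=1)
    surrogate = Ridge(alpha=1.0).fit(masks, y, sample_weight=weights)
    # Surrogate coefficients act as local token-importance scores.
    return dict(zip(text_tokens, surrogate.coef_))
```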
{
"text": "Attention (Bahdanau et al., 2015; Vaswani et al., 2017) . Less an operation and more of a strategy to enable the NN to explain predictions, attention layers can be added to most NN architectures and, because they appeal to human intuition, can help indicate where the NN model is \"focusing\". While previous work has widely used attention layers (Luo et al., 2018; Xie et al., 2017; Mullenbach et al., 2018) to enable feature importance explainability, the jury is still out as to how much explainability attention provides (Jain and Wallace, 2019; Serrano and Smith, 2019; Wiegreffe and Pinter, 2019) .",
"cite_spans": [
{
"start": 10,
"end": 33,
"text": "(Bahdanau et al., 2015;",
"ref_id": "BIBREF8"
},
{
"start": 34,
"end": 55,
"text": "Vaswani et al., 2017)",
"ref_id": "BIBREF70"
},
{
"start": 345,
"end": 363,
"text": "(Luo et al., 2018;",
"ref_id": "BIBREF44"
},
{
"start": 364,
"end": 381,
"text": "Xie et al., 2017;",
"ref_id": "BIBREF74"
},
{
"start": 382,
"end": 406,
"text": "Mullenbach et al., 2018)",
"ref_id": "BIBREF46"
},
{
"start": 523,
"end": 547,
"text": "(Jain and Wallace, 2019;",
"ref_id": "BIBREF29"
},
{
"start": 548,
"end": 572,
"text": "Serrano and Smith, 2019;",
"ref_id": "BIBREF64"
},
{
"start": 573,
"end": 600,
"text": "Wiegreffe and Pinter, 2019)",
"ref_id": "BIBREF73"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Operations to Enable Explainability",
"sec_num": "4.2"
},
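A small numpy sketch of an attention layer used as a self-explaining importance score (toy vectors, dot-product scoring against a learned query; all values are made up): the softmax weights that pool the token representations are read off directly as the "explanation".

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
tokens = ["the", "film", "was", "wonderful"]
H = rng.normal(size=(len(tokens), 16))      # token representations (e.g., LSTM states)
q = rng.normal(size=16)                     # learned attention query vector

weights = softmax(H @ q)                    # one importance weight per token
context = weights @ H                       # pooled vector fed to the classifier

for tok, w in zip(tokens, weights):
    print(f"{tok:>10s}  attention {w:.3f}")
```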
{
"text": "LSTM gating signals. Given the sequential nature of language, recurrent layers, in particular LSTMs (Hochreiter and Schmidhuber, 1997) , are commonplace. While it is common to mine the outputs of LSTM cells to explain outputs, there may also be information present in the outputs of the gates produced within the cells. It is possible to utilize (and even combine) other operations presented here to interpret gating signals to aid feature importance explainability (Ghaeini et al., 2018) .",
"cite_spans": [
{
"start": 100,
"end": 134,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF27"
},
{
"start": 466,
"end": 488,
"text": "(Ghaeini et al., 2018)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Operations to Enable Explainability",
"sec_num": "4.2"
},
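A sketch (assuming a toy PyTorch LSTMCell with invented inputs) of how gating signals can be surfaced for inspection: the cell is stepped manually and its input/forget/output gate activations are recomputed from the cell's own weights, which follow PyTorch's (input, forget, cell, output) gate ordering.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
cell = nn.LSTMCell(input_size=8, hidden_size=4)
h = torch.zeros(1, 4)
c = torch.zeros(1, 4)
sequence = torch.randn(5, 1, 8)                      # 5 toy time steps

for t, x in enumerate(sequence):
    # Recompute the pre-activations exactly as the cell does internally.
    pre = x @ cell.weight_ih.T + cell.bias_ih + h @ cell.weight_hh.T + cell.bias_hh
    i, f, g, o = pre.chunk(4, dim=1)                 # PyTorch gate order: i, f, g, o
    print(f"t={t} input gate {torch.sigmoid(i).mean():.3f} "
          f"forget gate {torch.sigmoid(f).mean():.3f} "
          f"output gate {torch.sigmoid(o).mean():.3f}")
    h, c = cell(x, (h, c))                           # advance the actual cell
```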
{
"text": "Explainability-aware architecture design. One way to exploit the flexibility of deep learning is to devise an NN architecture that mimics the process humans employ to arrive at a solution. This makes the learned model (partially) interpretable since the architecture contains human-recognizable components. Implementing such a model architecture can be used to enable the induction of human-readable programs for solving math problems (Amini et al., 2019; Ling et al., 2017) or sentence simplification problems (Dong et al., 2019) . This design may also be applied to surrogate models that generate explanations for predictions (Rajani et al., 2019a; Liu et al., 2019) .",
"cite_spans": [
{
"start": 435,
"end": 455,
"text": "(Amini et al., 2019;",
"ref_id": "BIBREF4"
},
{
"start": 456,
"end": 474,
"text": "Ling et al., 2017)",
"ref_id": "BIBREF40"
},
{
"start": 511,
"end": 530,
"text": "(Dong et al., 2019)",
"ref_id": "BIBREF16"
},
{
"start": 628,
"end": 650,
"text": "(Rajani et al., 2019a;",
"ref_id": "BIBREF56"
},
{
"start": 651,
"end": 668,
"text": "Liu et al., 2019)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Operations to Enable Explainability",
"sec_num": "4.2"
},
{
"text": "Previous works have also attempted to compare these operations in terms of efficacy with respect to specific NLP tasks (Poerner et al., 2018) . Operations outside of this list exist and are popular for particular categories of explanations. Table 2 mentions some of these. For instance, Pr\u00f6llochs et al. (2019) use reinforcement learning to learn simple negation rules, Liu et al. (2018) learns a taxonomy post-hoc to better interpret network embeddings, and Pryzant et al. (2018b) uses gradient reversal (Ganin et al., 2016) to deconfound lexicons.",
"cite_spans": [
{
"start": 119,
"end": 141,
"text": "(Poerner et al., 2018)",
"ref_id": "BIBREF52"
},
{
"start": 287,
"end": 310,
"text": "Pr\u00f6llochs et al. (2019)",
"ref_id": "BIBREF53"
},
{
"start": 370,
"end": 387,
"text": "Liu et al. (2018)",
"ref_id": "BIBREF42"
},
{
"start": 459,
"end": 481,
"text": "Pryzant et al. (2018b)",
"ref_id": "BIBREF55"
},
{
"start": 505,
"end": 525,
"text": "(Ganin et al., 2016)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 241,
"end": 248,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Operations to Enable Explainability",
"sec_num": "4.2"
},
{
"text": "An explanation may be presented in different ways to the end user, and making the appropriate choice is crucial for the overall success of an XAI approach. For example, the widely used attention mechanism, which learns the importance scores of a set of features, can be visualized as raw attention scores or as a saliency heatmap (see Figure 1a) . Although the former is acceptable, the latter is more user-friendly and has become the standard way to visualize attention-based approaches. We now present the major visualization techniques identified in our literature review.",
"cite_spans": [],
"ref_spans": [
{
"start": 335,
"end": 345,
"text": "Figure 1a)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Visualization Techniques",
"sec_num": "4.3"
},
{
"text": "Saliency. This has been primarily used to visualize the importance scores of different types of elements in XAI learning systems, such as showing input-output word alignment (Bahdanau et al., 2015) (Figure 1a ), highlighting words in input text (Mullenbach et al., 2018) (Figure 1b) or displaying extracted relations (Xie et al., 2017) . We observe a strong correspondence between feature importancebased explainability and saliency-based visualizations; namely, all papers using feature importance to generate explanations also chose saliency-based visualization techniques. Saliency-based visualizations are popular because they present visually perceptive explanations and can be easily understood by different types of end users. They are there- fore frequently seen across different AI domains (e.g., computer vision (Simonyan et al., 2013) and speech (Aldeneh and Provost, 2017) ). As shown in Table 2 , saliency is the most dominant visualization technique among the papers covered by this survey. Raw declarative representations. As suggested by its name, this visualization technique directly presents the learned declarative representations, such as logic rules, trees, and programs (Figure 1c and 1d) . Such techniques assume that end users can understand specific representations, such as firstorder logic rules (Pezeshkpour et al., 2019a) and reasoning trees (Liang et al., 2016) , and therefore may implicitly target more advanced users.",
"cite_spans": [
{
"start": 174,
"end": 197,
"text": "(Bahdanau et al., 2015)",
"ref_id": "BIBREF8"
},
{
"start": 245,
"end": 270,
"text": "(Mullenbach et al., 2018)",
"ref_id": "BIBREF46"
},
{
"start": 317,
"end": 335,
"text": "(Xie et al., 2017)",
"ref_id": "BIBREF74"
},
{
"start": 822,
"end": 845,
"text": "(Simonyan et al., 2013)",
"ref_id": "BIBREF65"
},
{
"start": 857,
"end": 884,
"text": "(Aldeneh and Provost, 2017)",
"ref_id": "BIBREF2"
},
{
"start": 1325,
"end": 1352,
"text": "(Pezeshkpour et al., 2019a)",
"ref_id": "BIBREF50"
},
{
"start": 1373,
"end": 1393,
"text": "(Liang et al., 2016)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [
{
"start": 198,
"end": 208,
"text": "(Figure 1a",
"ref_id": null
},
{
"start": 271,
"end": 282,
"text": "(Figure 1b)",
"ref_id": null
},
{
"start": 900,
"end": 907,
"text": "Table 2",
"ref_id": null
},
{
"start": 1193,
"end": 1212,
"text": "(Figure 1c and 1d)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Visualization Techniques",
"sec_num": "4.3"
},
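A small sketch of a saliency-heatmap rendering (invented scores, no plotting library): each token is wrapped in an HTML span whose background opacity is proportional to its importance score, which is one common way attention heatmaps over text are rendered.

```python
def saliency_html(tokens, scores):
    """Render tokens with red backgrounds whose opacity tracks the scores."""
    peak = max(scores) or 1.0
    spans = [
        f'<span style="background: rgba(255,0,0,{s / peak:.2f})">{t}</span>'
        for t, s in zip(tokens, scores)
    ]
    return " ".join(spans)

# Toy example with made-up importance scores.
html = saliency_html(["the", "film", "was", "wonderful"], [0.05, 0.2, 0.1, 0.9])
print(html)   # paste into any HTML page to see the heatmap
```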
{
"text": "Natural language explanation. The explanation is verbalized in human-comprehensible natural language ( Figure 2) . The natural language can be generated using sophisticated deep learning models, e.g., by training a language model with human natural language explanations and coupling with a deep generative model (Rajani et al., 2019a) . It can also be generated by using simple templatebased approaches (Abujabal et al., 2017) . In fact, many declarative induction-based techniques can use template-based natural language generation (Reiter and Dale, 1997) to turn rules and programs into human-comprehensible language, and this minor extension can potentially make the explanation more accessible to lay users. Table 2 references some additional visualization techniques, such as using raw examples to Figure 2 : Template-based natural language explanation for a QA system (Abujabal et al., 2017) . present example-driven approaches Croce et al., 2019 ) (e.g., Figure 1e) , and dependency parse trees to represent input questions (Abujabal et al., 2017) .",
"cite_spans": [
{
"start": 313,
"end": 335,
"text": "(Rajani et al., 2019a)",
"ref_id": "BIBREF56"
},
{
"start": 404,
"end": 427,
"text": "(Abujabal et al., 2017)",
"ref_id": "BIBREF0"
},
{
"start": 534,
"end": 557,
"text": "(Reiter and Dale, 1997)",
"ref_id": "BIBREF58"
},
{
"start": 875,
"end": 898,
"text": "(Abujabal et al., 2017)",
"ref_id": "BIBREF0"
},
{
"start": 935,
"end": 953,
"text": "Croce et al., 2019",
"ref_id": "BIBREF15"
},
{
"start": 1032,
"end": 1055,
"text": "(Abujabal et al., 2017)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 103,
"end": 112,
"text": "Figure 2)",
"ref_id": null
},
{
"start": 713,
"end": 720,
"text": "Table 2",
"ref_id": null
},
{
"start": 804,
"end": 812,
"text": "Figure 2",
"ref_id": null
},
{
"start": 963,
"end": 973,
"text": "Figure 1e)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Visualization Techniques",
"sec_num": "4.3"
},
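A minimal sketch of a template-based natural language explanation, loosely in the spirit of QUINT-style QA explanations but with an entirely invented template and slot values.

```python
TEMPLATE = ("I matched the question to the template '{template}', "
            "instantiated '{slot}' with the entity '{entity}', "
            "and looked up the answer '{answer}' in the knowledge base.")

def explain(template, slot, entity, answer):
    # Fill the fixed template with the values used during prediction.
    return TEMPLATE.format(template=template, slot=slot, entity=entity, answer=answer)

print(explain("who directed <film>", "<film>", "Inception", "Christopher Nolan"))
```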
{
"text": "Following the goals of XAI, a model's quality should be evaluated not only by its accuracy and performance, but also by how well it provides explanations for its predictions. In this section we discuss the state of the field in terms of defining and measuring explanation quality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explanation Quality",
"sec_num": "5"
},
{
"text": "Given the young age of the field, unsurprisingly there is little agreement on how explanations should be evaluated. The majority of the works reviewed (32 out of 50) either lack a standardized evaluation or include only an informal evaluation, while a smaller number of papers looked at more formal evaluation approaches, including leveraging ground truth data and human evaluation. We next present the major categories of evaluation tech-niques we encountered (summarized in Table 3 : Common evaluation techniques and number of papers adopting them, out of the 50 papers surveyed (note that some papers adopt more than one technique)",
"cite_spans": [],
"ref_spans": [
{
"start": 476,
"end": 483,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5.1"
},
{
"text": "Informal examination of explanations. This typically takes the form of high-level discussions of how examples of generated explanations align with human intuition. This includes cases where the output of a single explainability approach is examined in isolation (Xie et al., 2017) as well as when explanations are compared to those of other reference approaches (Ross et al., 2017) (such as LIME, which is a frequently used baseline).",
"cite_spans": [
{
"start": 262,
"end": 280,
"text": "(Xie et al., 2017)",
"ref_id": "BIBREF74"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5.1"
},
{
"text": "Comparison to ground truth. Several works compare generated explanations to ground truth data in order to quantify the performance of explainability techniques. Employed metrics vary based on task and explainability technique, but commonly encountered metrics include P/R/F1 (Carton et al., 2018), perplexity, and BLEU (Ling et al., 2017; Rajani et al., 2019b) . While having a quantitative way to measure explainability is a promising direction, care should be taken during ground truth acquisition to ensure its quality and account for cases where there may be alternative valid explanations. Approaches employed to address this issue involve having multiple annotators and reporting inter-annotator agreement or mean human performance, as well as evaluating the explanations at different granularities (e.g., token-wise vs phrasewise) to account for disagreements on the precise value of the ground truth (Carton et al., 2018) .",
"cite_spans": [
{
"start": 319,
"end": 338,
"text": "(Ling et al., 2017;",
"ref_id": "BIBREF40"
},
{
"start": 339,
"end": 360,
"text": "Rajani et al., 2019b)",
"ref_id": "BIBREF57"
},
{
"start": 908,
"end": 929,
"text": "(Carton et al., 2018)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5.1"
},
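A small sketch of a ground-truth comparison at the token level (toy rationale spans, invented for the example): the tokens highlighted by the model are scored against human-annotated rationale tokens with precision, recall, and F1.

```python
def token_prf(predicted_tokens, gold_tokens):
    """Precision/recall/F1 between predicted and gold rationale token sets."""
    predicted, gold = set(predicted_tokens), set(gold_tokens)
    tp = len(predicted & gold)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Toy rationale: token positions highlighted by the model vs. by an annotator.
print(token_prf(predicted_tokens=[2, 3, 7], gold_tokens=[3, 4, 7, 8]))
```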
{
"text": "Human evaluation. A more direct way to assess the explanation quality is to ask humans to evaluate the effectiveness of the generated explanations. This has the advantage of avoiding the assumption that there is only one good explanation that could serve as ground truth, as well as sidestepping the need to measure similarity of explanations. Here as well, it is important to have multiple annotators, report inter-annotator agreement, and correctly deal with subjectivity and variance in the responses. The approaches found in this survey vary in several dimensions, including the number of humans involved (ranging from 1 (Mullenbach et al., 2018) to 25 (Sydorova et al., 2019) humans), as well as the high-level task that they were asked to perform (including rating the explanations of a single approach (Dong et al., 2019) and comparing explanations of multiple techniques (Sydorova et al., 2019) ).",
"cite_spans": [
{
"start": 625,
"end": 650,
"text": "(Mullenbach et al., 2018)",
"ref_id": "BIBREF46"
},
{
"start": 657,
"end": 680,
"text": "(Sydorova et al., 2019)",
"ref_id": "BIBREF67"
},
{
"start": 809,
"end": 828,
"text": "(Dong et al., 2019)",
"ref_id": "BIBREF16"
},
{
"start": 879,
"end": 902,
"text": "(Sydorova et al., 2019)",
"ref_id": "BIBREF67"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5.1"
},
{
"text": "Other operation-specific techniques. Given the prevalence of attention layers (Bahdanau et al., 2015; Vaswani et al., 2017) in NLP, recent work (Jain and Wallace, 2019; Serrano and Smith, 2019; Wiegreffe and Pinter, 2019) has developed specific techniques to evaluate such explanations based on counterfactuals or erasure-based tests . Serrano and Smith repeatedly set to zero the maximal entry produced by the attention layer. If attention weights indeed \"explain\" the output prediction, then turning off the dominant weights should result in an altered prediction. Similar experiments have been devised by others (Jain and Wallace, 2019) . In particular, Wiegreffe and Pinter caution against assuming that there exists only one true explanation to suggest accounting for the natural variance of attention layers. On a broader note, causality has thoroughly explored such counterfactualbased notions of explanation (Halpern, 2016) .",
"cite_spans": [
{
"start": 78,
"end": 101,
"text": "(Bahdanau et al., 2015;",
"ref_id": "BIBREF8"
},
{
"start": 102,
"end": 123,
"text": "Vaswani et al., 2017)",
"ref_id": "BIBREF70"
},
{
"start": 144,
"end": 168,
"text": "(Jain and Wallace, 2019;",
"ref_id": "BIBREF29"
},
{
"start": 169,
"end": 193,
"text": "Serrano and Smith, 2019;",
"ref_id": "BIBREF64"
},
{
"start": 194,
"end": 221,
"text": "Wiegreffe and Pinter, 2019)",
"ref_id": "BIBREF73"
},
{
"start": 615,
"end": 639,
"text": "(Jain and Wallace, 2019)",
"ref_id": "BIBREF29"
},
{
"start": 916,
"end": 931,
"text": "(Halpern, 2016)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5.1"
},
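A small numpy sketch of an erasure-style test in the spirit of the attention-evaluation work above (toy weights and classifier, all values invented): the largest attention weight is zeroed and the weights renormalized, and the prediction before and after are compared to see whether the "explanation" actually mattered.

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=(6, 16))            # token representations
attn = rng.dirichlet(np.ones(6))        # attention weights (sum to 1)
W = rng.normal(size=(16, 2))            # toy linear classifier

def predict(weights):
    return int(np.argmax(weights @ H @ W))

erased = attn.copy()
erased[np.argmax(attn)] = 0.0           # erase the most-attended token
erased /= erased.sum()                  # renormalize the remaining weights

before, after = predict(attn), predict(erased)
print(f"prediction before: {before}, after erasing top weight: {after}, "
      f"changed: {before != after}")
```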
{
"text": "While the above overview summarizes how explainability approaches are commonly evaluated, another important aspect is what is being evaluated. Explanations are multi-faceted objects that can be evaluated on multiple aspects, such as fidelity (how much they reflect the actual workings of the underlying model), comprehensibility (how easy they are to understand by humans), and others. Therefore, understanding the target of the evaluation is important for interpreting the evaluation results. We refer interested readers to (Carvalho et al., 2019) for a comprehensive presentation of aspects of evaluating approaches.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5.1"
},
{
"text": "Many works do not explicitly state what is being evaluated. As a notable exception, (Lertvittayakumjorn and Toni, 2019) outlines three goals of explanations (reveal model behavior, justify model predictions, and assist humans in investigating uncertain predictions) and proposes human evaluation experiments targeting each of them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5.1"
},
{
"text": "An important and often overlooked aspect of explanation quality is the part of the prediction process (starting with the input and ending with the model output) covered by an explanation. We have observed that many explainability approaches explain only part of this process, leaving it up to the end user to fill in the gaps.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predictive Process Coverage",
"sec_num": "5.2"
},
{
"text": "As an example, consider the MathQA task of solving math word problems. As readers may be familiar from past education experience, in math exams, one is often asked to provide a step-by-step explanation of how the answer was derived. Usually, full credit is not given if any of the critical steps used in the derivation are missing. Recent works have studied the explainability of MathQA models, which seek to reproduce this process (Amini et al., 2019; Ling et al., 2017) , and have employed different approaches in the type of explanations produced. While (Amini et al., 2019) explains the predicted answer by showing the sequence of mathematical operations leading to it, this provides only partial coverage, as it does not explain how these operations were derived from the input text. On the other hand, the explanations produced by (Ling et al., 2017) augment the mathematical formulas with text describing the thought process behind the derived solution, thus covering a bigger part of the prediction process.",
"cite_spans": [
{
"start": 432,
"end": 452,
"text": "(Amini et al., 2019;",
"ref_id": "BIBREF4"
},
{
"start": 453,
"end": 471,
"text": "Ling et al., 2017)",
"ref_id": "BIBREF40"
},
{
"start": 557,
"end": 577,
"text": "(Amini et al., 2019)",
"ref_id": "BIBREF4"
},
{
"start": 837,
"end": 856,
"text": "(Ling et al., 2017)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Predictive Process Coverage",
"sec_num": "5.2"
},
{
"text": "The level of coverage may be an artifact of explainability techniques used: provenance-based approaches tend to provide more coverage, while example-driven approaches, may provide little to no coverage. Moreover, while our math teacher would argue that providing higher coverage is always beneficial to the student, in reality this may depend on the end use of the explanation. For instance, the coverage of explanations of (Amini et al., 2019) may be potentially sufficient for advanced technical users. Thus, higher coverage, while in general a positive aspect, should always be considered in combination with the target use and audience of the produced explanations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predictive Process Coverage",
"sec_num": "5.2"
},
{
"text": "This survey showcases recent advances of XAI research in NLP, as evidenced by publications in major NLP conferences in the last 7 years. We have discussed the main categorization of explanations (Local vs Global, Self-Explaining vs Post-Hoc) as well as the various ways explanations can be arrived at and visualized, together with the common techniques used. We have also detailed operations and explainability techniques currently available for generating explanations of model predictions, in the hopes of serving as a resource for developers interested in building explainable NLP models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Insights and Future Directions",
"sec_num": "6"
},
{
"text": "We hope this survey encourages the research community to work in bridging the current gaps in the field of XAI in NLP. The first research direction is a need for clearer terminology and understanding of what constitutes explainability and how it connects to the target audience. For example, is a model that displays an induced program that, when executed, yields a prediction, and yet conceals the process of inducing the program, explainable in general? Or is it explainable for some target users but not for others? The second is an expansion of the evaluation processes and metrics, especially for human evaluation. The field of XAI is aimed at adding explainability as a desired feature of models, in addition to the model's predictive quality, and other features such as runtime performance, complexity or memory usage. In general, trade-offs exist between desired characteristics of models, such as more complex models achieving better predictive power at the expense of slower runtime. In XAI, some works have claimed that explainability may come at the price of losing predictive quality (Bertsimas et al., 2019) , while other have claimed the opposite (Garneau et al., 2018; Liang et al., 2016) . Studying such possible trade-offs is an important research area for XAI, but one that cannot advance until standardized metrics are developed for evaluating the quality of explanations. The third research direction is a call to more critically address the issue of fidelity (or causality), and to ask hard questions about whether a claimed explanation is faithfully explaining the model's prediction. Finally, it is interesting to note that we found only four papers that fall into the global explanations category. This might seem surprising given that white box models, which have been fundamental in NLP, are explainable in the global sense. We believe this stems from the fact that because white box models are clearly explainable, the focus of the explicit XAI field is in explaining black box models, which comprise mostly local explanations. White box models, like rule based models and decision trees, while still in use, are less frequently framed as explainable or interpretable, and are hence not the main thrust of where the field is going. We think that this may be an oversight of the field since white box models can be a great test bed for studying techniques for evaluating explanations. Yunyao Li, Lucian Popa, Christine T Wolf, and Anbang Xu for their efforts at the early stage of this work.",
"cite_spans": [
{
"start": 1097,
"end": 1121,
"text": "(Bertsimas et al., 2019)",
"ref_id": "BIBREF10"
},
{
"start": 1162,
"end": 1184,
"text": "(Garneau et al., 2018;",
"ref_id": "BIBREF20"
},
{
"start": 1185,
"end": 1204,
"text": "Liang et al., 2016)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Insights and Future Directions",
"sec_num": "6"
},
{
"text": "https://xainlp2020.github.io/xainlp/ (we plan to maintain this website as a contribution to the community.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "papers, and the top three conferences of the set. To ensure a consistent classification, each paper was individually reviewed by at least two reviewers, consulting additional reviewers in the case of disagreement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank the anonymous reviewers for their valuable feedback. We also thank Shipi Dhanorkar,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "This survey aims to demonstrate the recent advances of XAI research in NLP, rather than to provide an exhaustive list of XAI papers in the NLP community. To this end, we identified relevant papers published in major NLP conferences (ACL, NAACL, EMNLP, and COLING) between 2013 and 2019. We filtered for titles containing (lemmatized) terms related to XAI, such as \"explainability\", \"interpretability\", \"transparent\", etc. While this may ignore some related papers, we argue that representative papers are more likely to include such terms in their titles. In particular, we assume that if authors consider explainability to be a major component of their work, they are more likely to use related keywords in the title of their work. Our search criteria yielded a set of 107 papers. During the paper review process we first verified whether each paper truly fell within the scope of the survey; namely, papers with a focus on explainability as a vehicle for understanding how a model arrives at its result. This process excluded 57 papers, leaving us with a total of 50 papers. Table 4 lists the top three broad NLP topics (taken verbatim from the ACL call for papers) covered by these",
"cite_spans": [],
"ref_spans": [
{
"start": 1077,
"end": 1084,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Appendix A -Methodology",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Quint: Interpretable question answering over knowledge bases",
"authors": [
{
"first": "Abdalghani",
"middle": [],
"last": "Abujabal",
"suffix": ""
},
{
"first": "Rishiraj",
"middle": [],
"last": "Saha Roy",
"suffix": ""
},
{
"first": "Mohamed",
"middle": [],
"last": "Yahya",
"suffix": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "Weikum",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "61--66",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abdalghani Abujabal, Rishiraj Saha Roy, Mohamed Yahya, and Gerhard Weikum. 2017. Quint: Inter- pretable question answering over knowledge bases. In Proceedings of the 2017 Conference on Empiri- cal Methods in Natural Language Processing: Sys- tem Demonstrations, pages 61-66.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Peeking inside the black-box: A survey on explainable artificial intelligence (xai)",
"authors": [
{
"first": "A",
"middle": [],
"last": "Adadi",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Berrada",
"suffix": ""
}
],
"year": 2018,
"venue": "IEEE Access",
"volume": "6",
"issue": "",
"pages": "52138--52160",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Adadi and M. Berrada. 2018. Peeking inside the black-box: A survey on explainable artificial intelli- gence (xai). IEEE Access, 6:52138-52160.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Using regional saliency for speech emotion recognition",
"authors": [
{
"first": "Zakaria",
"middle": [],
"last": "Aldeneh",
"suffix": ""
},
{
"first": "Emily",
"middle": [
"Mower"
],
"last": "Provost",
"suffix": ""
}
],
"year": 2017,
"venue": "2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "2741--2745",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zakaria Aldeneh and Emily Mower Provost. 2017. Us- ing regional saliency for speech emotion recognition. In 2017 IEEE International Conference on Acous- tics, Speech and Signal Processing (ICASSP), pages 2741-2745. IEEE.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A causal framework for explaining the predictions of black-box sequence-to-sequence models",
"authors": [
{
"first": "David",
"middle": [],
"last": "Alvarez-Melis",
"suffix": ""
},
{
"first": "Tommi",
"middle": [],
"last": "Jaakkola",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "412--421",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1042"
]
},
"num": null,
"urls": [],
"raw_text": "David Alvarez-Melis and Tommi Jaakkola. 2017. A causal framework for explaining the predictions of black-box sequence-to-sequence models. In Pro- ceedings of the 2017 Conference on Empirical Meth- ods in Natural Language Processing, pages 412- 421, Copenhagen, Denmark. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "MathQA: Towards interpretable math word problem solving with operation-based formalisms",
"authors": [
{
"first": "Aida",
"middle": [],
"last": "Amini",
"suffix": ""
},
{
"first": "Saadia",
"middle": [],
"last": "Gabriel",
"suffix": ""
},
{
"first": "Shanchuan",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Rik",
"middle": [],
"last": "Koncel-Kedziorski",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Hannaneh",
"middle": [],
"last": "Hajishirzi",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "2357--2367",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1245"
]
},
"num": null,
"urls": [],
"raw_text": "Aida Amini, Saadia Gabriel, Shanchuan Lin, Rik Koncel-Kedziorski, Yejin Choi, and Hannaneh Ha- jishirzi. 2019. MathQA: Towards interpretable math word problem solving with operation-based formalisms. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2357-2367, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "One explanation does not fit all: A toolkit and taxonomy of ai explainability techniques",
"authors": [
{
"first": "Vijay",
"middle": [],
"last": "Arya",
"suffix": ""
},
{
"first": "K",
"middle": [
"E"
],
"last": "Rachel",
"suffix": ""
},
{
"first": "Pin-Yu",
"middle": [],
"last": "Bellamy",
"suffix": ""
},
{
"first": "Amit",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Dhurandhar",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hind",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Samuel",
"suffix": ""
},
{
"first": "Stephanie",
"middle": [],
"last": "Hoffman",
"suffix": ""
},
{
"first": "Q",
"middle": [
"Vera"
],
"last": "Houde",
"suffix": ""
},
{
"first": "Ronny",
"middle": [],
"last": "Liao",
"suffix": ""
},
{
"first": "Aleksandra",
"middle": [],
"last": "Luss",
"suffix": ""
},
{
"first": "Sami",
"middle": [],
"last": "Mojsilovic",
"suffix": ""
},
{
"first": "Pablo",
"middle": [],
"last": "Mourad",
"suffix": ""
},
{
"first": "Ramya",
"middle": [],
"last": "Pedemonte",
"suffix": ""
},
{
"first": "John",
"middle": [
"T"
],
"last": "Raghavendra",
"suffix": ""
},
{
"first": "Prasanna",
"middle": [],
"last": "Richards",
"suffix": ""
},
{
"first": "Karthikeyan",
"middle": [],
"last": "Sattigeri",
"suffix": ""
},
{
"first": "Moninder",
"middle": [],
"last": "Shanmugam",
"suffix": ""
},
{
"first": "Kush",
"middle": [
"R"
],
"last": "Singh",
"suffix": ""
},
{
"first": "Dennis",
"middle": [],
"last": "Varshney",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Q. Vera Liao, Ronny Luss, Alek- sandra Mojsilovic, Sami Mourad, Pablo Pedemonte, Ramya Raghavendra, John T. Richards, Prasanna Sattigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis Wei, and Yi Zhang. 2019. One explanation does not fit all: A toolkit and taxonomy of ai explainability techniques. ArXiv, abs/1909.03012.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Interpreting neural networks to improve politeness comprehension",
"authors": [
{
"first": "M",
"middle": [],
"last": "Aubakirova",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Bansal",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2035--2041",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Aubakirova and M. Bansal. 2016. Interpreting neu- ral networks to improve politeness comprehension. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (Austin, Texas, 2016), page 2035-2041.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Multimodal language analysis in the wild: CMU-MOSEI dataset and interpretable dynamic fusion graph",
"authors": [
{
"first": "Amirali Bagher",
"middle": [],
"last": "Zadeh",
"suffix": ""
},
{
"first": "Paul",
"middle": [
"Pu"
],
"last": "Liang",
"suffix": ""
},
{
"first": "Soujanya",
"middle": [],
"last": "Poria",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Cambria",
"suffix": ""
},
{
"first": "Louis-Philippe",
"middle": [],
"last": "Morency",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2236--2246",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1208"
]
},
"num": null,
"urls": [],
"raw_text": "AmirAli Bagher Zadeh, Paul Pu Liang, Soujanya Poria, Erik Cambria, and Louis-Philippe Morency. 2018. Multimodal language analysis in the wild: CMU- MOSEI dataset and interpretable dynamic fusion graph. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 2236-2246, Melbourne, Australia. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Interpretable emoji prediction via label-wise attention LSTMs",
"authors": [
{
"first": "Francesco",
"middle": [],
"last": "Barbieri",
"suffix": ""
},
{
"first": "Luis",
"middle": [],
"last": "Espinosa-Anke",
"suffix": ""
},
{
"first": "Jose",
"middle": [],
"last": "Camacho-Collados",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Schockaert",
"suffix": ""
},
{
"first": "Horacio",
"middle": [],
"last": "Saggion",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "4766--4771",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1508"
]
},
"num": null,
"urls": [],
"raw_text": "Francesco Barbieri, Luis Espinosa-Anke, Jose Camacho-Collados, Steven Schockaert, and Hora- cio Saggion. 2018. Interpretable emoji prediction via label-wise attention LSTMs. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4766-4771, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The price of interpretability",
"authors": [
{
"first": "Dimitris",
"middle": [],
"last": "Bertsimas",
"suffix": ""
},
{
"first": "Arthur",
"middle": [],
"last": "Delarue",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Jaillet",
"suffix": ""
},
{
"first": "S\u00e9bastien",
"middle": [],
"last": "Martin",
"suffix": ""
}
],
"year": 2019,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dimitris Bertsimas, Arthur Delarue, Patrick Jaillet, and S\u00e9bastien Martin. 2019. The price of interpretability. ArXiv, abs/1907.03419.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Exploiting structure in representation of named entities using active learning",
"authors": [
{
"first": "Nikita",
"middle": [],
"last": "Bhutani",
"suffix": ""
},
{
"first": "Kun",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "Yunyao",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "H",
"middle": [
"V"
],
"last": "Jagadish",
"suffix": ""
},
{
"first": "Mauricio",
"middle": [],
"last": "Hernandez",
"suffix": ""
},
{
"first": "Mitesh",
"middle": [],
"last": "Vasa",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "687--699",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikita Bhutani, Kun Qian, Yunyao Li, H. V. Jagadish, Mauricio Hernandez, and Mitesh Vasa. 2018. Ex- ploiting structure in representation of named entities using active learning. In Proceedings of the 27th In- ternational Conference on Computational Linguis- tics, pages 687-699, Santa Fe, New Mexico, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Extractive adversarial networks: High-recall explanations for identifying personal attacks in social media posts",
"authors": [
{
"first": "Samuel",
"middle": [],
"last": "Carton",
"suffix": ""
},
{
"first": "Qiaozhu",
"middle": [],
"last": "Mei",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Resnick",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "3497--3507",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1386"
]
},
"num": null,
"urls": [],
"raw_text": "Samuel Carton, Qiaozhu Mei, and Paul Resnick. 2018. Extractive adversarial networks: High-recall expla- nations for identifying personal attacks in social me- dia posts. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Process- ing, pages 3497-3507, Brussels, Belgium. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Machine Learning Interpretability: A Survey on Methods and Metrics. Electronics",
"authors": [
{
"first": "Diogo",
"middle": [
"V"
],
"last": "Carvalho",
"suffix": ""
},
{
"first": "Eduardo",
"middle": [
"M"
],
"last": "Pereira",
"suffix": ""
},
{
"first": "Jaime",
"middle": [
"S"
],
"last": "Cardoso",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "8",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.3390/electronics8080832"
]
},
"num": null,
"urls": [],
"raw_text": "Diogo V. Carvalho, Eduardo M. Pereira, and Jaime S. Cardoso. 2019. Machine Learning Interpretability: A Survey on Methods and Metrics. Electronics, 8(8):832. Number: 8 Publisher: Multidisciplinary Digital Publishing Institute.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Explaining non-linear classifier decisions within kernel-based deep architectures",
"authors": [
{
"first": "Danilo",
"middle": [],
"last": "Croce",
"suffix": ""
},
{
"first": "Daniele",
"middle": [],
"last": "Rossini",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Basili",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "16--24",
"other_ids": {
"DOI": [
"10.18653/v1/W18-5403"
]
},
"num": null,
"urls": [],
"raw_text": "Danilo Croce, Daniele Rossini, and Roberto Basili. 2018. Explaining non-linear classifier decisions within kernel-based deep architectures. In Proceed- ings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 16-24, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Auditing deep learning processes through kernel-based explanatory models",
"authors": [
{
"first": "Danilo",
"middle": [],
"last": "Croce",
"suffix": ""
},
{
"first": "Daniele",
"middle": [],
"last": "Rossini",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Basili",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "4037--4046",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1415"
]
},
"num": null,
"urls": [],
"raw_text": "Danilo Croce, Daniele Rossini, and Roberto Basili. 2019. Auditing deep learning processes through kernel-based explanatory models. In Proceedings of the 2019 Conference on Empirical Methods in Nat- ural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4037-4046, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "EditNTS: An neural programmer-interpreter model for sentence simplification through explicit editing",
"authors": [
{
"first": "Yue",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Zichao",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Mehdi",
"middle": [],
"last": "Rezagholizadeh",
"suffix": ""
},
{
"first": "Jackie Chi Kit",
"middle": [],
"last": "Cheung",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3393--3402",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1331"
]
},
"num": null,
"urls": [],
"raw_text": "Yue Dong, Zichao Li, Mehdi Rezagholizadeh, and Jackie Chi Kit Cheung. 2019. EditNTS: An neural programmer-interpreter model for sentence simplifi- cation through explicit editing. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 3393-3402, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "The distance-weighted k-nearest-neighbor rule",
"authors": [
{
"first": "Sahibsingh",
"middle": [
"A"
],
"last": "Dudani",
"suffix": ""
}
],
"year": 1976,
"venue": "IEEE Transactions on Systems, Man, and Cybernetics",
"volume": "",
"issue": "",
"pages": "325--327",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sahibsingh A Dudani. 1976. The distance-weighted k-nearest-neighbor rule. IEEE Transactions on Sys- tems, Man, and Cybernetics, (4):325-327.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Pathologies of neural models make interpretations difficult",
"authors": [
{
"first": "Shi",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Wallace",
"suffix": ""
},
{
"first": "Alvin",
"middle": [],
"last": "Grissom",
"suffix": "II"
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Pedro",
"middle": [],
"last": "Rodriguez",
"suffix": ""
},
{
"first": "Jordan",
"middle": [],
"last": "Boyd-Graber",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "3719--3728",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1407"
]
},
"num": null,
"urls": [],
"raw_text": "Shi Feng, Eric Wallace, Alvin Grissom II, Mohit Iyyer, Pedro Rodriguez, and Jordan Boyd-Graber. 2018. Pathologies of neural models make interpretations difficult. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3719-3728, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Domain-adversarial training of neural networks",
"authors": [
{
"first": "Yaroslav",
"middle": [],
"last": "Ganin",
"suffix": ""
},
{
"first": "Evgeniya",
"middle": [],
"last": "Ustinova",
"suffix": ""
},
{
"first": "Hana",
"middle": [],
"last": "Ajakan",
"suffix": ""
},
{
"first": "Pascal",
"middle": [],
"last": "Germain",
"suffix": ""
},
{
"first": "Hugo",
"middle": [],
"last": "Larochelle",
"suffix": ""
},
{
"first": "Fran\u00e7ois",
"middle": [],
"last": "Laviolette",
"suffix": ""
},
{
"first": "Mario",
"middle": [],
"last": "Marchand",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Lempitsky",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, Fran\u00e7ois Lavi- olette, Mario Marchand, , and Victor Lempitsky. 2016. Domain-adversarial training of neural net- works. JMLR.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Predicting and interpreting embeddings for out of vocabulary words in downstream tasks",
"authors": [
{
"first": "Nicolas",
"middle": [],
"last": "Garneau",
"suffix": ""
},
{
"first": "Jean-Samuel",
"middle": [],
"last": "Leboeuf",
"suffix": ""
},
{
"first": "Luc",
"middle": [],
"last": "Lamontagne",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "331--333",
"other_ids": {
"DOI": [
"10.18653/v1/W18-5439"
]
},
"num": null,
"urls": [],
"raw_text": "Nicolas Garneau, Jean-Samuel Leboeuf, and Luc Lam- ontagne. 2018. Predicting and interpreting embed- dings for out of vocabulary words in downstream tasks. In Proceedings of the 2018 EMNLP Work- shop BlackboxNLP: Analyzing and Interpreting Neu- ral Networks for NLP, pages 331-333, Brussels, Bel- gium. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Interpreting recurrent and attention-based neural models: a case study on natural language inference",
"authors": [
{
"first": "Reza",
"middle": [],
"last": "Ghaeini",
"suffix": ""
},
{
"first": "Xiaoli",
"middle": [],
"last": "Fern",
"suffix": ""
},
{
"first": "Prasad",
"middle": [],
"last": "Tadepalli",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "4952--4957",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1537"
]
},
"num": null,
"urls": [],
"raw_text": "Reza Ghaeini, Xiaoli Fern, and Prasad Tadepalli. 2018. Interpreting recurrent and attention-based neural models: a case study on natural language infer- ence. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4952-4957, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Explaining character-aware neural networks for word-level prediction: Do they discover linguistic rules?",
"authors": [
{
"first": "Fr\u00e9deric",
"middle": [],
"last": "Godin",
"suffix": ""
},
{
"first": "Kris",
"middle": [],
"last": "Demuynck",
"suffix": ""
},
{
"first": "Joni",
"middle": [],
"last": "Dambre",
"suffix": ""
},
{
"first": "Wesley",
"middle": [],
"last": "De Neve",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Demeester",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "3275--3284",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1365"
]
},
"num": null,
"urls": [],
"raw_text": "Fr\u00e9deric Godin, Kris Demuynck, Joni Dambre, Wesley De Neve, and Thomas Demeester. 2018. Explaining character-aware neural networks for word-level pre- diction: Do they discover linguistic rules? In Pro- ceedings of the 2018 Conference on Empirical Meth- ods in Natural Language Processing, pages 3275- 3284, Brussels, Belgium. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "A survey of methods for explaining black box models",
"authors": [
{
"first": "Riccardo",
"middle": [],
"last": "Guidotti",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Monreale",
"suffix": ""
},
{
"first": "Salvatore",
"middle": [],
"last": "Ruggieri",
"suffix": ""
},
{
"first": "Franco",
"middle": [],
"last": "Turini",
"suffix": ""
},
{
"first": "Fosca",
"middle": [],
"last": "Giannotti",
"suffix": ""
},
{
"first": "Dino",
"middle": [],
"last": "Pedreschi",
"suffix": ""
}
],
"year": 2018,
"venue": "ACM Comput. Surv",
"volume": "",
"issue": "5",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/3236009"
]
},
"num": null,
"urls": [],
"raw_text": "Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Franco Turini, Fosca Giannotti, and Dino Pedreschi. 2018. A survey of methods for explaining black box models. ACM Comput. Surv., 51(5).",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "LISA: Explaining recurrent neural network judgments via layer-wIse semantic accumulation and example to pattern transformation",
"authors": [
{
"first": "Pankaj",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "154--164",
"other_ids": {
"DOI": [
"10.18653/v1/W18-5418"
]
},
"num": null,
"urls": [],
"raw_text": "Pankaj Gupta and Hinrich Sch\u00fctze. 2018. LISA: Ex- plaining recurrent neural network judgments via layer-wIse semantic accumulation and example to pattern transformation. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and In- terpreting Neural Networks for NLP, pages 154-164, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Actual Causality",
"authors": [
{
"first": "Joseph",
"middle": [
"Y"
],
"last": "Halpern",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joseph Y. Halpern. 2016. Actual Causality. MIT Press.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Learning explanations from language data",
"authors": [
{
"first": "David",
"middle": [],
"last": "Harbecke",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Schwarzenberg",
"suffix": ""
},
{
"first": "Christoph",
"middle": [],
"last": "Alt",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "316--318",
"other_ids": {
"DOI": [
"10.18653/v1/W18-5434"
]
},
"num": null,
"urls": [],
"raw_text": "David Harbecke, Robert Schwarzenberg, and Christoph Alt. 2018. Learning explanations from language data. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpret- ing Neural Networks for NLP, pages 316-318, Brus- sels, Belgium. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Computation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Computation.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "An interpretable generative adversarial approach to classification of latent entity relations in unstructured sentences",
"authors": [
{
"first": "Shiou Tian",
"middle": [],
"last": "Hsu",
"suffix": ""
},
{
"first": "Changsung",
"middle": [],
"last": "Moon",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Nagiza",
"middle": [],
"last": "Samatova",
"suffix": ""
}
],
"year": 2018,
"venue": "AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shiou Tian Hsu, Changsung Moon, Paul Jones, and Na- giza Samatova. 2018. An interpretable generative adversarial approach to classification of latent entity relations in unstructured sentences. In AAAI Confer- ence on Artificial Intelligence.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Attention is not Explanation",
"authors": [
{
"first": "Sarthak",
"middle": [],
"last": "Jain",
"suffix": ""
},
{
"first": "Byron",
"middle": [
"C"
],
"last": "Wallace",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "3543--3556",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1357"
]
},
"num": null,
"urls": [],
"raw_text": "Sarthak Jain and Byron C. Wallace. 2019. Attention is not Explanation. In Proceedings of the 2019 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long and Short Pa- pers), pages 3543-3556, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Self-assembling modular networks for interpretable multi-hop reasoning",
"authors": [
{
"first": "Yichen",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "4474--4484",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1455"
]
},
"num": null,
"urls": [],
"raw_text": "Yichen Jiang and Mohit Bansal. 2019. Self-assembling modular networks for interpretable multi-hop rea- soning. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 4474-4484, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Explore, propose, and assemble: An interpretable model for multi-hop reading comprehension",
"authors": [
{
"first": "Yichen",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Nitish",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Yen-Chun",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2714--2725",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1261"
]
},
"num": null,
"urls": [],
"raw_text": "Yichen Jiang, Nitish Joshi, Yen-Chun Chen, and Mohit Bansal. 2019. Explore, propose, and assemble: An interpretable model for multi-hop reading compre- hension. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguis- tics, pages 2714-2725, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Detecting and explaining causes from text for a time series event",
"authors": [
{
"first": "Dongyeop",
"middle": [],
"last": "Kang",
"suffix": ""
},
{
"first": "Varun",
"middle": [],
"last": "Gangal",
"suffix": ""
},
{
"first": "Ang",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Zheng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2758--2767",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1292"
]
},
"num": null,
"urls": [],
"raw_text": "Dongyeop Kang, Varun Gangal, Ang Lu, Zheng Chen, and Eduard Hovy. 2017. Detecting and explaining causes from text for a time series event. In Pro- ceedings of the 2017 Conference on Empirical Meth- ods in Natural Language Processing, pages 2758- 2767, Copenhagen, Denmark. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Detecting linguistic characteristics of alzheimer's dementia by interpreting neural models",
"authors": [
{
"first": "Sweta",
"middle": [],
"last": "Karlekar",
"suffix": ""
},
{
"first": "Tong",
"middle": [],
"last": "Niu",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "701--707",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sweta Karlekar, Tong Niu, and Mohit Bansal. 2018. Detecting linguistic characteristics of alzheimer's dementia by interpreting neural models. In Proceed- ings of the 2018 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 2 (Short Papers) (New Orleans, Louisiana, Jun. 2018), page 701-707.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Unsupervised token-wise alignment to improve interpretation of encoder-decoder models",
"authors": [
{
"first": "Shun",
"middle": [],
"last": "Kiyono",
"suffix": ""
},
{
"first": "Sho",
"middle": [],
"last": "Takase",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Suzuki",
"suffix": ""
},
{
"first": "Naoaki",
"middle": [],
"last": "Okazaki",
"suffix": ""
},
{
"first": "Kentaro",
"middle": [],
"last": "Inui",
"suffix": ""
},
{
"first": "Masaaki",
"middle": [],
"last": "Nagata",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "74--81",
"other_ids": {
"DOI": [
"10.18653/v1/W18-5410"
]
},
"num": null,
"urls": [],
"raw_text": "Shun Kiyono, Sho Takase, Jun Suzuki, Naoaki Okazaki, Kentaro Inui, and Masaaki Nagata. 2018. Unsupervised token-wise alignment to improve in- terpretation of encoder-decoder models. In Proceed- ings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 74-81, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Explainable ai: Foundations, industrial applications, practical challenges, and lessons learned",
"authors": [
{
"first": "Freddy",
"middle": [],
"last": "Lecue",
"suffix": ""
},
{
"first": "Krishna",
"middle": [],
"last": "Gade",
"suffix": ""
},
{
"first": "Sahin Cem",
"middle": [],
"last": "Geyik",
"suffix": ""
},
{
"first": "Krishnaram",
"middle": [],
"last": "Kenthapadi",
"suffix": ""
},
{
"first": "Varun",
"middle": [],
"last": "Mithal",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Taly",
"suffix": ""
},
{
"first": "Riccardo",
"middle": [],
"last": "Guidotti",
"suffix": ""
},
{
"first": "Pasquale",
"middle": [],
"last": "Minervini",
"suffix": ""
}
],
"year": 2020,
"venue": "AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Freddy Lecue, Krishna Gade, Sahin Cem Geyik, Krish- naram Kenthapadi, Varun Mithal, Ankur Taly, Ric- cardo Guidotti, and Pasquale Minervini. 2020. Ex- plainable ai: Foundations, industrial applications, practical challenges, and lessons learned. In AAAI Conference on Artificial Intelligence. Association for Computational Linguistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Human-grounded evaluations of explanation methods for text classification",
"authors": [
{
"first": "Piyawat",
"middle": [],
"last": "Lertvittayakumjorn",
"suffix": ""
},
{
"first": "Francesca",
"middle": [],
"last": "Toni",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "5195--5205",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1523"
]
},
"num": null,
"urls": [],
"raw_text": "Piyawat Lertvittayakumjorn and Francesca Toni. 2019. Human-grounded evaluations of explanation meth- ods for text classification. In Proceedings of the 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5195-5205, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Visualizing and understanding neural models in nlp",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xinlei",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1506.01066"
]
},
"num": null,
"urls": [],
"raw_text": "Jiwei Li, Xinlei Chen, Eduard Hovy, and Dan Jurafsky. 2015. Visualizing and understanding neural models in nlp. arXiv preprint arXiv:1506.01066.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "CNM: An interpretable complex-valued network for matching",
"authors": [
{
"first": "Qiuchi",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Benyou",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Massimo",
"middle": [],
"last": "Melucci",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4139--4148",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1420"
]
},
"num": null,
"urls": [],
"raw_text": "Qiuchi Li, Benyou Wang, and Massimo Melucci. 2019. CNM: An interpretable complex-valued network for matching. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4139-4148, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "A meaningbased English math word problem solver with understanding, reasoning and explanation",
"authors": [
{
"first": "Chao-Chun",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Shih-Hong",
"middle": [],
"last": "Tsai",
"suffix": ""
},
{
"first": "Ting-Yun",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Yi-Chung",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Keh-Yih",
"middle": [],
"last": "Su",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: System Demonstrations",
"volume": "",
"issue": "",
"pages": "151--155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chao-Chun Liang, Shih-Hong Tsai, Ting-Yun Chang, Yi-Chung Lin, and Keh-Yih Su. 2016. A meaning- based English math word problem solver with under- standing, reasoning and explanation. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: System Demonstra- tions, pages 151-155, Osaka, Japan. The COLING 2016 Organizing Committee.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Program induction by rationale generation: Learning to solve and explain algebraic word problems",
"authors": [
{
"first": "Wang",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Dani",
"middle": [],
"last": "Yogatama",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "158--167",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1015"
]
},
"num": null,
"urls": [],
"raw_text": "Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blun- som. 2017. Program induction by rationale genera- tion: Learning to solve and explain algebraic word problems. In Proceedings of the 55th Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers), pages 158-167, Vancou- ver, Canada. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Towards explainable NLP: A generative explanation framework for text classification",
"authors": [
{
"first": "Hui",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Qingyu",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "William",
"middle": [
"Yang"
],
"last": "Wang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5570--5581",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1560"
]
},
"num": null,
"urls": [],
"raw_text": "Hui Liu, Qingyu Yin, and William Yang Wang. 2019. Towards explainable NLP: A generative explanation framework for text classification. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 5570-5581, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "On interpretation of network embedding via taxonomy induction",
"authors": [
{
"first": "Ninghao",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xiao",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Jundong",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xia",
"middle": [],
"last": "Hu",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD '18",
"volume": "",
"issue": "",
"pages": "1812--1820",
"other_ids": {
"DOI": [
"10.1145/3219819.3220001"
]
},
"num": null,
"urls": [],
"raw_text": "Ninghao Liu, Xiao Huang, Jundong Li, and Xia Hu. 2018. On interpretation of network embedding via taxonomy induction. In Proceedings of the 24th ACM SIGKDD International Conference on Knowl- edge Discovery & Data Mining, KDD '18, page 1812-1820, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Constructing interpretive spatio-temporal features for multiturn responses selection",
"authors": [
{
"first": "Junyu",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Chenbin",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zeying",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Guang",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Tom",
"middle": [
"Chao"
],
"last": "Zhou",
"suffix": ""
},
{
"first": "Zenglin",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "44--50",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1006"
]
},
"num": null,
"urls": [],
"raw_text": "Junyu Lu, Chenbin Zhang, Zeying Xie, Guang Ling, Tom Chao Zhou, and Zenglin Xu. 2019. Construct- ing interpretive spatio-temporal features for multi- turn responses selection. In Proceedings of the 57th Annual Meeting of the Association for Computa- tional Linguistics, pages 44-50, Florence, Italy. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Beyond polarity: Interpretable financial sentiment analysis with hierarchical query-driven attention",
"authors": [
{
"first": "Ling",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Ao",
"suffix": ""
},
{
"first": "Feiyang",
"middle": [],
"last": "Pan",
"suffix": ""
},
{
"first": "Jin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Tong",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Ningzi",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Qing",
"middle": [],
"last": "He",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ling Luo, Xiang Ao, Feiyang Pan, Jin Wang, Tong Zhao, Ningzi Yu, and Qing He. 2018. Beyond polar- ity: Interpretable financial sentiment analysis with hierarchical query-driven attention.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "OpenDialKG: Explainable conversational reasoning with attention-based walks over knowledge graphs",
"authors": [
{
"first": "Seungwhan",
"middle": [],
"last": "Moon",
"suffix": ""
},
{
"first": "Pararth",
"middle": [],
"last": "Shah",
"suffix": ""
},
{
"first": "Anuj",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Rajen",
"middle": [],
"last": "Subba",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "845--854",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1081"
]
},
"num": null,
"urls": [],
"raw_text": "Seungwhan Moon, Pararth Shah, Anuj Kumar, and Ra- jen Subba. 2019. OpenDialKG: Explainable conver- sational reasoning with attention-based walks over knowledge graphs. In Proceedings of the 57th An- nual Meeting of the Association for Computational Linguistics, pages 845-854, Florence, Italy. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Explainable prediction of medical codes from clinical text",
"authors": [
{
"first": "James",
"middle": [],
"last": "Mullenbach",
"suffix": ""
},
{
"first": "Sarah",
"middle": [],
"last": "Wiegreffe",
"suffix": ""
},
{
"first": "Jon",
"middle": [],
"last": "Duke",
"suffix": ""
},
{
"first": "Jimeng",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1101--1111",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1100"
]
},
"num": null,
"urls": [],
"raw_text": "James Mullenbach, Sarah Wiegreffe, Jon Duke, Jimeng Sun, and Jacob Eisenstein. 2018. Explainable pre- diction of medical codes from clinical text. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1101-1111, New Orleans, Louisiana. Association for Computational Linguistics.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "An interpretable joint graphical model for fact-checking from crowds",
"authors": [
{
"first": "An",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Aditya",
"middle": [],
"last": "Kharosekar",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Lease",
"suffix": ""
},
{
"first": "Byron",
"middle": [],
"last": "Wallace",
"suffix": ""
}
],
"year": 2018,
"venue": "AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "An Nguyen, Aditya Kharosekar, Matthew Lease, and Byron Wallace. 2018. An interpretable joint graph- ical model for fact-checking from crowds. In AAAI Conference on Artificial Intelligence.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Unsupervised, knowledge-free, and interpretable word sense disambiguation",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Panchenko",
"suffix": ""
},
{
"first": "Fide",
"middle": [],
"last": "Marten",
"suffix": ""
},
{
"first": "Eugen",
"middle": [],
"last": "Ruppert",
"suffix": ""
},
{
"first": "Stefano",
"middle": [],
"last": "Faralli",
"suffix": ""
},
{
"first": "Dmitry",
"middle": [],
"last": "Ustalov",
"suffix": ""
},
{
"first": "Simone",
"middle": [
"Paolo"
],
"last": "Ponzetto",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Biemann",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "91--96",
"other_ids": {
"DOI": [
"10.18653/v1/D17-2016"
]
},
"num": null,
"urls": [],
"raw_text": "Alexander Panchenko, Fide Marten, Eugen Ruppert, Stefano Faralli, Dmitry Ustalov, Simone Paolo Ponzetto, and Chris Biemann. 2017. Unsupervised, knowledge-free, and interpretable word sense dis- ambiguation. In Proceedings of the 2017 Confer- ence on Empirical Methods in Natural Language Processing: System Demonstrations, pages 91-96, Copenhagen, Denmark. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Explaining the stars: Weighted multiple-instance learning for aspect-based sentiment analysis",
"authors": [
{
"first": "Nikolaos",
"middle": [],
"last": "Pappas",
"suffix": ""
},
{
"first": "Andrei",
"middle": [],
"last": "Popescu-Belis",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "455--466",
"other_ids": {
"DOI": [
"10.3115/v1/D14-1052"
]
},
"num": null,
"urls": [],
"raw_text": "Nikolaos Pappas and Andrei Popescu-Belis. 2014. Ex- plaining the stars: Weighted multiple-instance learn- ing for aspect-based sentiment analysis. In Proceed- ings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 455-466, Doha, Qatar. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Investigating robustness and interpretability of link prediction via adversarial modifications",
"authors": [
{
"first": "Pouya",
"middle": [],
"last": "Pezeshkpour",
"suffix": ""
},
{
"first": "Yifan",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "3336--3347",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1337"
]
},
"num": null,
"urls": [],
"raw_text": "Pouya Pezeshkpour, Yifan Tian, and Sameer Singh. 2019a. Investigating robustness and interpretability of link prediction via adversarial modifications. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Compu- tational Linguistics: Human Language Technolo- gies, Volume 1 (Long and Short Papers), pages 3336-3347, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Investigating robustness and interpretability of link prediction via adversarial modifications",
"authors": [
{
"first": "Pouya",
"middle": [],
"last": "Pezeshkpour",
"suffix": ""
},
{
"first": "Yifan",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "3336--3347",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pouya Pezeshkpour, Yifan Tian, and Sameer Singh. 2019b. Investigating robustness and interpretability of link prediction via adversarial modifications. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3336- 3347.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Evaluating neural network explanation methods using hybrid documents and morphosyntactic agreement",
"authors": [
{
"first": "Nina",
"middle": [],
"last": "Poerner",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "340--350",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1032"
]
},
"num": null,
"urls": [],
"raw_text": "Nina Poerner, Hinrich Sch\u00fctze, and Benjamin Roth. 2018. Evaluating neural network explanation meth- ods using hybrid documents and morphosyntactic agreement. In Proceedings of the 56th Annual Meet- ing of the Association for Computational Linguis- tics (Volume 1: Long Papers), pages 340-350, Mel- bourne, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Learning interpretable negation rules via weak supervision at document level: A reinforcement learning approach",
"authors": [
{
"first": "Nicolas",
"middle": [],
"last": "Pr\u00f6llochs",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Feuerriegel",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Neumann",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "407--413",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1038"
]
},
"num": null,
"urls": [],
"raw_text": "Nicolas Pr\u00f6llochs, Stefan Feuerriegel, and Dirk Neu- mann. 2019. Learning interpretable negation rules via weak supervision at document level: A reinforce- ment learning approach. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 407-413, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Interpretable neural architectures for attributing an ad's performance to its writing style",
"authors": [
{
"first": "Reid",
"middle": [],
"last": "Pryzant",
"suffix": ""
},
{
"first": "Sugato",
"middle": [],
"last": "Basu",
"suffix": ""
},
{
"first": "Kazoo",
"middle": [],
"last": "Sone",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "125--135",
"other_ids": {
"DOI": [
"10.18653/v1/W18-5415"
]
},
"num": null,
"urls": [],
"raw_text": "Reid Pryzant, Sugato Basu, and Kazoo Sone. 2018a. Interpretable neural architectures for attributing an ad's performance to its writing style. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: An- alyzing and Interpreting Neural Networks for NLP, pages 125-135, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Deconfounded lexicon induction for interpretable social science",
"authors": [
{
"first": "Reid",
"middle": [],
"last": "Pryzant",
"suffix": ""
},
{
"first": "Kelly",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Wagner",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1615--1625",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1146"
]
},
"num": null,
"urls": [],
"raw_text": "Reid Pryzant, Kelly Shen, Dan Jurafsky, and Stefan Wagner. 2018b. Deconfounded lexicon induction for interpretable social science. In Proceedings of the 2018 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Pa- pers), pages 1615-1625, New Orleans, Louisiana. Association for Computational Linguistics.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "Explain yourself! leveraging language models for commonsense reasoning",
"authors": [
{
"first": "Nazneen Fatema",
"middle": [],
"last": "Rajani",
"suffix": ""
},
{
"first": "Bryan",
"middle": [],
"last": "McCann",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4932--4942",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1487"
]
},
"num": null,
"urls": [],
"raw_text": "Nazneen Fatema Rajani, Bryan McCann, Caiming Xiong, and Richard Socher. 2019a. Explain your- self! leveraging language models for commonsense reasoning. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguis- tics, pages 4932-4942, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "Explain yourself! leveraging language models for commonsense reasoning",
"authors": [
{
"first": "Nazneen Fatema",
"middle": [],
"last": "Rajani",
"suffix": ""
},
{
"first": "Bryan",
"middle": [],
"last": "McCann",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1906.02361"
]
},
"num": null,
"urls": [],
"raw_text": "Nazneen Fatema Rajani, Bryan McCann, Caiming Xiong, and Richard Socher. 2019b. Explain your- self! leveraging language models for commonsense reasoning. arXiv preprint arXiv:1906.02361.",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "Building applied natural language generation systems",
"authors": [
{
"first": "Ehud",
"middle": [],
"last": "Reiter",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Dale",
"suffix": ""
}
],
"year": 1997,
"venue": "Natural Language Engineering",
"volume": "3",
"issue": "1",
"pages": "57--87",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ehud Reiter and Robert Dale. 1997. Building applied natural language generation systems. Natural Lan- guage Engineering, 3(1):57-87.",
"links": null
},
"BIBREF59": {
"ref_id": "b59",
"title": "Why should i trust you?\": Explaining the predictions of any classifier",
"authors": [
{
"first": "Marco Tulio",
"middle": [],
"last": "Ribeiro",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Guestrin",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 22Nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining",
"volume": "",
"issue": "",
"pages": "1135--1144",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. Why should i trust you?\": Explain- ing the predictions of any classifier. In Proceed- ings of the 22Nd ACM SIGKDD International Con- ference on Knowledge Discovery and Data Mining (New York, NY, USA, 2016), page 1135-1144.",
"links": null
},
"BIBREF60": {
"ref_id": "b60",
"title": "Right for the right reasons: Training differentiable models by constraining their explanations",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Slavin Ross",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"C"
],
"last": "Hughes",
"suffix": ""
},
{
"first": "Finale",
"middle": [],
"last": "Doshi-Velez",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI-17",
"volume": "",
"issue": "",
"pages": "2662--2670",
"other_ids": {
"DOI": [
"10.24963/ijcai.2017/371"
]
},
"num": null,
"urls": [],
"raw_text": "Andrew Slavin Ross, Michael C. Hughes, and Finale Doshi-Velez. 2017. Right for the right reasons: Training differentiable models by constraining their explanations. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelli- gence, IJCAI-17, pages 2662-2670.",
"links": null
},
"BIBREF61": {
"ref_id": "b61",
"title": "Global Sensitivity Analysis: The Primer",
"authors": [
{
"first": "A",
"middle": [],
"last": "Saltelli",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Ratto",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Andres",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Campolongo",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Cariboni",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Gatelli",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Saisana",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Tarantola",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Saltelli, M. Ratto, T. Andres, F. Campolongo, J. Cariboni, D. Gatelli, M. Saisana, and S. Taran- tola. 2008. Global Sensitivity Analysis: The Primer. John Wiley & Sons.",
"links": null
},
"BIBREF62": {
"ref_id": "b62",
"title": "Train, sort, explain: Learning to diagnose translation models",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Schwarzenberg",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Harbecke",
"suffix": ""
},
{
"first": "Vivien",
"middle": [],
"last": "Macketanz",
"suffix": ""
},
{
"first": "Eleftherios",
"middle": [],
"last": "Avramidis",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "M\u00f6ller",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)",
"volume": "",
"issue": "",
"pages": "29--34",
"other_ids": {
"DOI": [
"10.18653/v1/N19-4006"
]
},
"num": null,
"urls": [],
"raw_text": "Robert Schwarzenberg, David Harbecke, Vivien Mack- etanz, Eleftherios Avramidis, and Sebastian M\u00f6ller. 2019. Train, sort, explain: Learning to diagnose translation models. In Proceedings of the 2019 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics (Demonstra- tions), pages 29-34, Minneapolis, Minnesota. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF63": {
"ref_id": "b63",
"title": "HEIDL: Learning linguistic expressions with deep learning and human-in-theloop",
"authors": [
{
"first": "Prithviraj",
"middle": [],
"last": "Sen",
"suffix": ""
},
{
"first": "Yunyao",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Eser",
"middle": [],
"last": "Kandogan",
"suffix": ""
},
{
"first": "Yiwei",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Walter",
"middle": [],
"last": "Lasecki",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations",
"volume": "",
"issue": "",
"pages": "135--140",
"other_ids": {
"DOI": [
"10.18653/v1/P19-3023"
]
},
"num": null,
"urls": [],
"raw_text": "Prithviraj Sen, Yunyao Li, Eser Kandogan, Yiwei Yang, and Walter Lasecki. 2019. HEIDL: Learning linguis- tic expressions with deep learning and human-in-the- loop. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Sys- tem Demonstrations, pages 135-140, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF64": {
"ref_id": "b64",
"title": "Is attention interpretable?",
"authors": [
{
"first": "Sofia",
"middle": [],
"last": "Serrano",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2931--2951",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1282"
]
},
"num": null,
"urls": [],
"raw_text": "Sofia Serrano and Noah A. Smith. 2019. Is attention interpretable? In Proceedings of the 57th Annual Meeting of the Association for Computational Lin- guistics, pages 2931-2951, Florence, Italy. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF65": {
"ref_id": "b65",
"title": "Deep inside convolutional networks: Visualising image classification models and saliency maps",
"authors": [
{
"first": "Karen",
"middle": [],
"last": "Simonyan",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Vedaldi",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Zisserman",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1312.6034"
]
},
"num": null,
"urls": [],
"raw_text": "Karen Simonyan, Andrea Vedaldi, and Andrew Zisser- man. 2013. Deep inside convolutional networks: Vi- sualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034.",
"links": null
},
"BIBREF66": {
"ref_id": "b66",
"title": "Axiomatic attribution for deep networks",
"authors": [
{
"first": "Mukund",
"middle": [],
"last": "Sundararajan",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Taly",
"suffix": ""
},
{
"first": "Qiqi",
"middle": [],
"last": "Yan",
"suffix": ""
}
],
"year": 2017,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mukund Sundararajan, Ankur Taly, and Qiqi Yan. 2017. Axiomatic attribution for deep networks. In Inter- national Conference on Machine Learning, Sydney, Australia.",
"links": null
},
"BIBREF67": {
"ref_id": "b67",
"title": "Interpretable question answering on knowledge bases and text",
"authors": [
{
"first": "Alona",
"middle": [],
"last": "Sydorova",
"suffix": ""
},
{
"first": "Nina",
"middle": [],
"last": "Poerner",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4943--4951",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1488"
]
},
"num": null,
"urls": [],
"raw_text": "Alona Sydorova, Nina Poerner, and Benjamin Roth. 2019. Interpretable question answering on knowl- edge bases and text. In Proceedings of the 57th An- nual Meeting of the Association for Computational Linguistics, pages 4943-4951, Florence, Italy. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF68": {
"ref_id": "b68",
"title": "Generating token-level explanations for natural language inference",
"authors": [
{
"first": "James",
"middle": [],
"last": "Thorne",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Vlachos",
"suffix": ""
},
{
"first": "Christos",
"middle": [],
"last": "Christodoulopoulos",
"suffix": ""
},
{
"first": "Arpit",
"middle": [],
"last": "Mittal",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "963--969",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1101"
]
},
"num": null,
"urls": [],
"raw_text": "James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2019. Gener- ating token-level explanations for natural language inference. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 963-969, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF69": {
"ref_id": "b69",
"title": "Iterative recursive attention model for interpretable sequence classification",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Tutek",
"suffix": ""
},
{
"first": "Jan\u0161najder",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "249--257",
"other_ids": {
"DOI": [
"10.18653/v1/W18-5427"
]
},
"num": null,
"urls": [],
"raw_text": "Martin Tutek and Jan\u0160najder. 2018. Iterative recur- sive attention model for interpretable sequence clas- sification. In Proceedings of the 2018 EMNLP Work- shop BlackboxNLP: Analyzing and Interpreting Neu- ral Networks for NLP, pages 249-257, Brussels, Bel- gium. Association for Computational Linguistics.",
"links": null
},
"BIBREF70": {
"ref_id": "b70",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "NeurIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NeurIPS.",
"links": null
},
"BIBREF71": {
"ref_id": "b71",
"title": "Learning to explain entity relationships in knowledge graphs",
"authors": [
{
"first": "Nikos",
"middle": [],
"last": "Voskarides",
"suffix": ""
},
{
"first": "Edgar",
"middle": [],
"last": "Meij",
"suffix": ""
},
{
"first": "Manos",
"middle": [],
"last": "Tsagkias",
"suffix": ""
},
{
"first": "Maarten",
"middle": [],
"last": "de Rijke",
"suffix": ""
},
{
"first": "Wouter",
"middle": [],
"last": "Weerkamp",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "564--574",
"other_ids": {
"DOI": [
"10.3115/v1/P15-1055"
]
},
"num": null,
"urls": [],
"raw_text": "Nikos Voskarides, Edgar Meij, Manos Tsagkias, Maarten de Rijke, and Wouter Weerkamp. 2015. Learning to explain entity relationships in knowl- edge graphs. In Proceedings of the 53rd Annual Meeting of the Association for Computational Lin- guistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 564-574, Beijing, China. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF72": {
"ref_id": "b72",
"title": "Interpreting neural networks with nearest neighbors",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Wallace",
"suffix": ""
},
{
"first": "Shi",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Jordan",
"middle": [],
"last": "Boyd-Graber",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 EMNLP Workshop Black-boxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "136--144",
"other_ids": {
"DOI": [
"10.18653/v1/W18-5416"
]
},
"num": null,
"urls": [],
"raw_text": "Eric Wallace, Shi Feng, and Jordan Boyd-Graber. 2018. Interpreting neural networks with nearest neighbors. In Proceedings of the 2018 EMNLP Workshop Black- boxNLP: Analyzing and Interpreting Neural Net- works for NLP, pages 136-144, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF73": {
"ref_id": "b73",
"title": "Attention is not not explanation",
"authors": [
{
"first": "Sarah",
"middle": [],
"last": "Wiegreffe",
"suffix": ""
},
{
"first": "Yuval",
"middle": [],
"last": "Pinter",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "11--20",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1002"
]
},
"num": null,
"urls": [],
"raw_text": "Sarah Wiegreffe and Yuval Pinter. 2019. Attention is not not explanation. In Proceedings of the 2019 Con- ference on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 11-20, Hong Kong, China. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF74": {
"ref_id": "b74",
"title": "An interpretable knowledge transfer model for knowledge base completion",
"authors": [
{
"first": "Qizhe",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Xuezhe",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Zihang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "950--962",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1088"
]
},
"num": null,
"urls": [],
"raw_text": "Qizhe Xie, Xuezhe Ma, Zihang Dai, and Eduard Hovy. 2017. An interpretable knowledge transfer model for knowledge base completion. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 950-962, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF75": {
"ref_id": "b75",
"title": "Interpretable relevant emotion ranking with event-driven attention",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Deyu",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Yulan",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Meng",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "177--187",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1017"
]
},
"num": null,
"urls": [],
"raw_text": "Yang Yang, Deyu Zhou, Yulan He, and Meng Zhang. 2019. Interpretable relevant emotion ranking with event-driven attention. In Proceedings of the 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Process- ing (EMNLP-IJCNLP), pages 177-187, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF76": {
"ref_id": "b76",
"title": "An interpretable reasoning network for multirelation question answering",
"authors": [
{
"first": "Mantong",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Minlie",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Xiaoyan",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2010--2022",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mantong Zhou, Minlie Huang, and Xiaoyan Zhu. 2018. An interpretable reasoning network for multi- relation question answering. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2010-2022.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "prediction using the model itself (calculated from information made available from the model as part of making the prediction) Global Post-Hoc Perform additional operations to explain the entire model's predictive reasoning Global Self-Explaining Use the predictive model itself to explain the entire model's predictive reasoning (a.k.a. directly interpretable model)",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF1": {
"text": "declarative rules (Pezeshkpour et al., 2019b) (d) Raw declarative program (Amini et al., 2019) (e) Raw examples (Croce et al., 2019) Figure 1: Examples of different visualization techniques",
"num": null,
"type_str": "figure",
"uris": null
},
"TABREF0": {
"text": "Overview of the high-level categories of explanations (Section 3).",
"num": null,
"content": "<table/>",
"html": null,
"type_str": "table"
},
"TABREF2": {
"text": ").",
"num": null,
"content": "<table><tr><td colspan=\"2\">None or Informal Comparison to</td><td>Human</td></tr><tr><td>Examination only</td><td>Ground Truth</td><td>Evaluation</td></tr><tr><td>32</td><td>12</td><td>9</td></tr></table>",
"html": null,
"type_str": "table"
}
}
}
}