{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T10:44:07.633880Z" }, "title": "InterpreT: An Interactive Visualization Tool for Interpreting Transformers", "authors": [ { "first": "Vasudev", "middle": [], "last": "Lal", "suffix": "", "affiliation": { "laboratory": "", "institution": "Cognitive Computing Research", "location": { "country": "USA" } }, "email": "" }, { "first": "Arden", "middle": [], "last": "Ma", "suffix": "", "affiliation": { "laboratory": "", "institution": "Cognitive Computing Research", "location": { "country": "USA" } }, "email": "" }, { "first": "Estelle", "middle": [], "last": "Aflalo", "suffix": "", "affiliation": { "laboratory": "", "institution": "Cognitive Computing Research", "location": { "country": "USA" } }, "email": "" }, { "first": "Phillip", "middle": [], "last": "Howard", "suffix": "", "affiliation": { "laboratory": "", "institution": "Cognitive Computing Research", "location": { "country": "USA" } }, "email": "" }, { "first": "Ana", "middle": [], "last": "Paula", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Q", "middle": [], "last": "Simoes", "suffix": "", "affiliation": { "laboratory": "", "institution": "Cognitive Computing Research", "location": { "country": "USA" } }, "email": "" }, { "first": "Daniel", "middle": [], "last": "Korat", "suffix": "", "affiliation": { "laboratory": "Artificial Intelligence Lab", "institution": "Intel Labs", "location": { "country": "Israel" } }, "email": "" }, { "first": "Oren", "middle": [], "last": "Pereg", "suffix": "", "affiliation": { "laboratory": "Artificial Intelligence Lab", "institution": "Intel Labs", "location": { "country": "Israel" } }, "email": "" }, { "first": "Gadi", "middle": [], "last": "Singer", "suffix": "", "affiliation": { "laboratory": "", "institution": "Cognitive Computing Research", "location": { "country": "USA" } }, "email": "" }, { "first": "Moshe", "middle": [], "last": "Wasserblat", "suffix": "", 
"affiliation": { "laboratory": "Artificial Intelligence Lab", "institution": "Intel Labs", "location": { "country": "Israel" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "With the increasingly widespread use of Transformer-based models for NLU/NLP tasks, there is growing interest in understanding the inner workings of these models, why they are so effective at a wide range of tasks, and how they can be further tuned and improved. To contribute towards this goal of enhanced explainability and comprehension, we present InterpreT, an interactive visualization tool for interpreting Transformer-based models. In addition to providing various mechanisms for investigating general model behaviours, novel contributions made in InterpreT include the ability to track and visualize token embeddings through each layer of a Transformer, highlight distances between certain token embeddings through illustrative plots, and identify task-related functions of attention heads by using new metrics. InterpreT is a task agnostic tool, and its functionalities are demonstrated through the analysis of model behaviours for two disparate tasks: Aspect Based Sentiment Analysis (ABSA) and the Winograd Schema Challenge (WSC).", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "With the increasingly widespread use of Transformer-based models for NLU/NLP tasks, there is growing interest in understanding the inner workings of these models, why they are so effective at a wide range of tasks, and how they can be further tuned and improved. To contribute towards this goal of enhanced explainability and comprehension, we present InterpreT, an interactive visualization tool for interpreting Transformer-based models. 
In addition to providing various mechanisms for investigating general model behaviours, novel contributions made in InterpreT include the ability to track and visualize token embeddings through each layer of a Transformer, highlight distances between certain token embeddings through illustrative plots, and identify task-related functions of attention heads by using new metrics. InterpreT is a task agnostic tool, and its functionalities are demonstrated through the analysis of model behaviours for two disparate tasks: Aspect Based Sentiment Analysis (ABSA) and the Winograd Schema Challenge (WSC).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In recent years, Transformer-based models (Vaswani et al., 2017 ) such as BERT (Devlin et al., 2019) , GPT-2 (Radford et al., 2019) , XLNET (Yang et al., 2019) and RoBERTa (Liu et al., 2019) have demonstrated state-of-the-art performance in many NLP tasks and have become the gold standard. However, there are many open questions regarding the behavior of these models. Phenomena such as why Transformers perform well on specific examples but not others, as well as how their internal mechanisms facilitate their ability to generalize to new tasks and settings (or lack therof) are not yet fully understood. 
Observations and insights which help answer these questions will be pivotal in driving the construction of more powerful and robust models.", "cite_spans": [ { "start": 42, "end": 63, "text": "(Vaswani et al., 2017", "ref_id": null }, { "start": 79, "end": 100, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF4" }, { "start": 103, "end": 131, "text": "GPT-2 (Radford et al., 2019)", "ref_id": null }, { "start": 140, "end": 159, "text": "(Yang et al., 2019)", "ref_id": "BIBREF21" }, { "start": 172, "end": 190, "text": "(Liu et al., 2019)", "ref_id": null }, { "start": 561, "end": 577, "text": "(or lack therof)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The pursuit of such answers has spurred the development of a wide variety of analytical studies and tools to enable the visualization of information encapsulated in Transformer-based models. Clark et al. (2019) studied the attention mechanisms of a pre-trained BERT model to find that certain heads correspond to specific linguistic patterns. Jawahar et al. (2019) investigated the distribution of phrase-level information throughout the layers of BERT using t-SNE (van der Maaten and Hinton, 2008) . The visualization tools of van Aken et al. (2020) and Reif et al. (2019) perform a layerwise analysis of BERT's hidden states to understand the internal workings of Transformer-based models that are fine-tuned for question-answering tasks. Other tools, such as Vig (2019) , focus on visualizations of the attention matrices of pretrained Transformer models. In the work of Tenney et al. (2020) , the authors introduce an interactive platform for the visualization and interpretation of NLP models. The tool includes, among other capabilities, attention visualizations, embedding space visualizations, and aggregate analysis. Other related tools include those by Wallace et al. (2019) and Hoover et al. (2020) . 
The increasingly large body of work on the interpretability and evaluation of Transformer-based models reveals the growing need for the development of tools and systems to aid in the fine-grained analysis and understanding of these models and their performance on complex language understanding tasks.", "cite_spans": [ { "start": 192, "end": 211, "text": "Clark et al. (2019)", "ref_id": "BIBREF3" }, { "start": 346, "end": 367, "text": "Jawahar et al. (2019)", "ref_id": "BIBREF7" }, { "start": 477, "end": 501, "text": "Maaten and Hinton, 2008)", "ref_id": "BIBREF9" }, { "start": 551, "end": 569, "text": "Reif et al. (2019)", "ref_id": "BIBREF14" }, { "start": 758, "end": 768, "text": "Vig (2019)", "ref_id": "BIBREF17" }, { "start": 870, "end": 890, "text": "Tenney et al. (2020)", "ref_id": "BIBREF15" }, { "start": 1159, "end": 1180, "text": "Wallace et al. (2019)", "ref_id": "BIBREF18" }, { "start": 1185, "end": 1205, "text": "Hoover et al. (2020)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "With this goal in mind, we present InterpreT 1 , a tool for interpreting Transformers. A key contribution of InterpreT is that it is a single system that enables users to track hidden representations of tokens throughout each layer of a Transformer model, as well as visualize and analyze attention head behaviors. Similarly to Tenney et al. (2020) , InterpreT enables dynamic point selection, aggregation of attention head statistics, visualization of attention head matrices, and the ability to compare models. Novel contributions made in InterpreT include the ability to track and visualize token embeddings through each layer of a Transformer (Section 3.2), highlight distances between certain token embeddings through illustrative plots (Section 3.6), and identify task-related functions of attention heads by using new metrics (Section 3.3).", "cite_spans": [ { "start": 328, "end": 348, "text": "Tenney et al. 
(2020)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Section 4 demonstrates how the new features introduced in InterpreT can be used to obtain novel insights into the underlying mechanisms used by Transformers to tackle diverse tasks such as Aspect-Based Sentiment Analysis (ABSA) and the Winograd Schema Challenge (WSC). More generally, these demonstrations illustrate how such features enable rich, granular analysis of Transformer models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The system flow consists of two main stages: offline extraction of model specific and task specific information such as targets, predictions, relevant hidden states, and attention matrices (henceforth referred to as \"collateral\") and running the application itself. During the offline stage, the extracted hidden states are processed using t-SNE before being saved to a file. The collateral generated for a specific model and task is independent of collateral from other models and tasks, which enables the user to either run the app to examine a single model or to compare two different models that were evaluated on the same task and data. In this latter case, the collateral files for the two models are linked at runtime. A detailed specification for the collateral, along with the source code used to run InterpreT can be found in our GitHub.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System Design and Workflow", "sec_num": "2" }, { "text": "Key features of InterpreT include plots for the visualization and tracking of t-SNE representations of hidden states through the layers of a Transformer, a plot presenting summary statistics, custom metrics to quantify attention head behavior, and attention matrix visualizations. 
In addition,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview", "sec_num": "3.1" }, { "text": "InterpreT includes a multi-select feature that enables groups of examples to be selected in the t-SNE plot and used as input to other plots in the application, as well as the flexibility to be used both for analyzing a single model and for visualizing the differences in behaviors between two models. In general, the core functionalities present in In-terpreT are model and task agnostic, working for a wide-variety of architectures, sequence lengths, and tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview", "sec_num": "3.1" }, { "text": "A central component of InterpreT is the ability to visualize the contextualized embeddings of specific tokens throughout the layers of a Transformer. Following van Aken et al. 2019and Jawahar et al. 2019, we use t-SNE to project hidden representations of tokens after each Transformer layer onto a two-dimensional space, creating disjoint t-SNE spaces for each layer of each model. In the resulting t-SNE plot, token embeddings can be visualized for a specific model and layer, and colored using various color schemes ( Figure 1d ). An example selected in the t-SNE plot is tracked and continues to be highlighted in the new t-SNE space when the model or the layer is changed.", "cite_spans": [], "ref_spans": [ { "start": 520, "end": 529, "text": "Figure 1d", "ref_id": null } ], "eq_spans": [], "section": "t-SNE Embeddings", "sec_num": "3.2" }, { "text": "InterpreT includes a head summary plot that displays attention head summary statistics for each head and layer ( Figure 1b ). For a given sentence, all attention weights are obtained in a matrix of size (num layers \u00d7 num heads \u00d7 sentence length \u00d7 sentence length) and compute statistics over the final two dimensions, yielding a summary plot of size (num layers \u00d7 num heads). 
The following statistics are currently supported:", "cite_spans": [], "ref_spans": [ { "start": 113, "end": 122, "text": "Figure 1b", "ref_id": null } ], "eq_spans": [], "section": "Head Summary", "sec_num": "3.3" }, { "text": "The Standard Deviation of an attention head is generated by calculating the standard deviation of the corresponding attention matrix weights. Intuitively, the standard deviation of an attention head increases as the attention patterns become less uniform, allowing a user to easily identify heads that exhibit interesting behaviors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Head Summary", "sec_num": "3.3" }, { "text": "The Attention Matrix Correlation is obtained by computing the correlation between an attention matrix and an arbitrary, same-size matrix. In Section 4.1.2, this correlation is computed using a binary matrix that encodes syntactic dependency relations, analogous to the parse matrix used in Pereg et al. (2020) . (Figure 1 shows the InterpreT user interface, rearranged for print, for the task of coreference resolution described in Section 4.2; the UI includes a short description of the currently selected models and example at the top, along with the main features (a-e) described in Section 3.) This formulation of a \"grammar correlation\" metric provides an indicator of an attention head's ability to identify syntactic relations in a sentence.", "cite_spans": [ { "start": 290, "end": 309, "text": "Pereg et al. (2020)", "ref_id": "BIBREF10" } ], "ref_spans": [ { "start": 312, "end": 321, "text": "(Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Head Summary", "sec_num": "3.3" }, { "text": "The Task-Specific Attention Intensity option allows a user to define and display custom metrics that highlight specific attention patterns. In Section 4.2.2, a \"coreference intensity\" metric is devised to pinpoint attention heads with an affinity for identifying coreference relationships. 
For this metric, each entry in the summary plot represents the attention weight between the coreferent spans being evaluated (if the span contains more than one token, the maximum is taken), for each head of each layer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Head Summary", "sec_num": "3.3" }, { "text": "When running InterpreT with two models, the head summary plot can be used to visualize differences in the summary statistics between both models. As mentioned previously, the multi-select feature can be used with any of the summary statistic options. When using multi-select, the statistics are averaged over the selected examples, enabling the user to analyze general trends in attention behavior.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Head Summary", "sec_num": "3.3" }, { "text": "Similarly to other systems, InterpreT provides the ability to display the attention patterns and weights exhibited by specific attention heads, which can be selected by clicking on a specific head and layer in the head summary plot. These attention patterns can be displayed as either a heatmap (\"matrix\" view) or a token \"map\" (\"map\" view) visualization used in Clark et al. (2019) . There is an option to switch between the two views in-app (Figure 1c ). These visualizations can become unwieldy when using large sequence lengths, but this will not affect the functionality of the rest of the system.", "cite_spans": [ { "start": 363, "end": 382, "text": "Clark et al. (2019)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 443, "end": 453, "text": "(Figure 1c", "ref_id": null } ], "eq_spans": [], "section": "Attention Matrix/Map", "sec_num": "3.4" }, { "text": "A short summary table is provided, which contains task-specific information such as predicted token classifications and the gold (target) labels for the selected sentence/example (Figure 1a ). 
", "cite_spans": [], "ref_spans": [ { "start": 179, "end": 189, "text": "(Figure 1a", "ref_id": null } ], "eq_spans": [], "section": "Summary Table", "sec_num": "3.5" }, { "text": "To complement t-SNE visualization of the hidden states, InterpreT also introduces a novel plot showing the average t-SNE space distance between specific groups of terms across all of the Transformers' layers ( Figure 1e ). Section 4.2.1 demonstrates how information conveyed in this plot contributes towards novel interpretations of the inner workings of BERT.", "cite_spans": [], "ref_spans": [ { "start": 210, "end": 219, "text": "Figure 1e", "ref_id": null } ], "eq_spans": [], "section": "Average t-SNE Distance Per Layer", "sec_num": "3.6" }, { "text": "The examples presented in this section focus on the analysis of bidirectional encoders using Inter-preT, however the system can be applied to generative models or encoder-decoder architectures as well, so long as the appropriate collateral can be generated. Further examples of use cases along with instructions on how to use InterpreT for custom applications is detailed in our GitHub.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Use Cases", "sec_num": "4" }, { "text": "A fundamental task in fine-grained sentiment analysis is the extraction of aspect and opinion terms. For example, in the sentence \"The chocolate cake was incredible\", the aspect term is chocolate cake and the opinion term is incredible. Supervised learning approaches have shown promising results in single-domain setups where the training and the testing data are from the same domain. However, these approaches typically do not scale across domains, where only unlabeled data is available for the target domain. It has been shown that syntax, which is a basic trait of language and is therefore domain invariant, can help bridge the gap between domains (Ding et al., 2017; Wang and Jialin Pan, 2018) . 
In a recent work (Pereg et al., 2020) , externally generated dependency relations are integrated into a pre-trained BERT model through the addition of a 13th attention head which incorporates the dependency relations into its Syntactically-Aware Self-Attention Mechanism. This model is referred to as Linguistically Informed BERT (LIBERT).", "cite_spans": [ { "start": 655, "end": 674, "text": "(Ding et al., 2017;", "ref_id": "BIBREF5" }, { "start": 675, "end": 701, "text": "Wang and Jialin Pan, 2018)", "ref_id": "BIBREF19" }, { "start": 721, "end": 741, "text": "(Pereg et al., 2020)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Cross-Domain Aspect Based Sentiment Analysis (ABSA)", "sec_num": "4.1" }, { "text": "InterpreT is used to analyze LIBERT and a Baseline model that shares the same size and structure as LIBERT but does not incorporate syntactic information for the cross-domain ABSA task, where both models are fine-tuned on laptop reviews and are evaluated on restaurant reviews (Pontiki et al., 2014 (Pontiki et al., , 2015 Wang et al., 2016) . LIBERT and the Baseline model achieved aspect extraction F1 scores of 0.5143 and 0.4254 respectively on validation data from the restaurant domain.", "cite_spans": [ { "start": 277, "end": 298, "text": "(Pontiki et al., 2014", "ref_id": "BIBREF12" }, { "start": 299, "end": 322, "text": "(Pontiki et al., , 2015", "ref_id": "BIBREF11" }, { "start": 323, "end": 341, "text": "Wang et al., 2016)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Cross-Domain Aspect Based Sentiment Analysis (ABSA)", "sec_num": "4.1" }, { "text": "InterpreT is used to visualize how the incorporation of dependency relations in LIBERT contributes to bridging the gap between domains. Figure 2 depicts the final layer aspect term t-SNE embeddings from the restaurant and laptop domains produced by LIBERT and Baseline. 
The plot of the Baseline embeddings (2a) gives a prototypical depiction of the \"domain gap\" challenge present in cross-domain setups, through the clear separation of in-domain (blue) and out-of-domain (red) aspects. Conversely, the plot of LIBERT's embeddings (2b) demonstrates how LIBERT has learned to push the embeddings of some aspect terms from the out-of-domain region into the in-domain region, effectively overcoming the \"domain gap\" challenge for these examples. Furthermore, in the plot colored by the aspect extraction F1 score (2c), it is seen that LIBERT achieves a high F1 score on the out-of-domain examples that now overlap with in-domain examples, highlighting the usefulness of such visualizations for analyzing model performance and extensibility. ", "cite_spans": [], "ref_spans": [ { "start": 136, "end": 142, "text": "Figure", "ref_id": null } ], "eq_spans": [], "section": "Visualizing the Domain Gap", "sec_num": "4.1.1" }, { "text": "A key feature of InterpreT is the addition of metrics to help identify attention heads which carry out specific functions. For analyzing LIBERT, the \"grammar correlation\" metric described in Section 3.3 is used to identify attention heads with an affinity for detecting syntactic relations. Figure 3a demonstrates the result of using multi-selection to compute the average grammar correlation in each of LIBERT's attention heads aggregated over multiple examples.", "cite_spans": [], "ref_spans": [ { "start": 291, "end": 300, "text": "Figure 3a", "ref_id": null } ], "eq_spans": [], "section": "Grammar Correlation", "sec_num": "4.1.2" }, { "text": "As expected, the Syntactically-Aware Self Attention head (head 13) tends to show much higher grammar correlation than the regular Self Attention heads. Utilizing the granularity provided in the head summary plot, it is observed that LIBERT's 13th head seems to only express an affinity for parsing syntactic relations in layers 2, 3, 4, and 11. 
This is unexpected behavior, as the syntax information is relayed identically to the 13th head across all layers. To investigate further, InterpreT can be used to display attention matrices from head 13 in layers that have high grammar correlation. One such attention matrix, for an out-of-domain example, is displayed in Figure 3b . In this attention matrix visualization, it can be seen how LIBERT's 13th head identifies syntactic relations such as the adjectival modifier relation between \"staff\" and \"attentive\", and how this can be useful for the cross-domain ABSA task where \"staff\" and \"attentive\" are aspect and opinion terms (respectively) in an out-of-domain example.", "cite_spans": [], "ref_spans": [ { "start": 666, "end": 675, "text": "Figure 3b", "ref_id": null } ], "eq_spans": [], "section": "Grammar Correlation", "sec_num": "4.1.2" }, { "text": "In this section, the utility of InterpreT is showcased for a markedly different task: coreference resolution. Coreference resolution is a challenging NLP task that often requires a nuanced understanding of context and sentence semantics. This task is the basis of the Winograd Schema Challenge (WSC) from the SuperGLUE benchmark (Alex Wang, 2020) , where the goal is to determine whether or not a pronoun is the correct referent of a given noun phrase. In this analysis of WSC, InterpreT demonstrates how information in the attention matrices and the hidden states of a Transformer can be used to understand the implicit mechanisms contributing to its ability to identify coreferent terms. BERT-base (uncased) is chosen for this analysis and is fine-tuned using the WSC task training set.", "cite_spans": [ { "start": 335, "end": 346, "text": "Wang, 2020)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Coreference Resolution in the Winograd Schema Challenge (WSC)", "sec_num": "4.2" }, { "text": "Table 1: Coreference candidates and labels. \"... got back\": (Fred, he) = False, (George, he) = True. \"... got up\": (Fred, he) = True, (George, he) = True.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example", "sec_num": null }, { "text": "While analyzing WSC with InterpreT, the system's wide-ranging capabilities gave rise to a novel observation, wherein it was discovered that a fine-tuned BERT model pushes closer together the embeddings of terms it predicts to be coreferent. Figure 5 shows InterpreT plots tracking specific examples in WSC. These plots depict the final layer t-SNE embeddings and attention map visualizations of head 7, layer 10 for the following examples: \"Fred watched TV while George went out to buy groceries. After an hour he got back\" (a,c), and \"Fred watched TV while George went out to buy groceries. After an hour he got up.\" (b,d) . In (a) and (b), the yellow stars indicate candidate mention spans, and \"He\" and \"George\" are almost overlapping.", "cite_spans": [ { "start": 610, "end": 615, "text": "(b,d)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Spatial Convergence of Coreferent Terms", "sec_num": "4.2.1" }, { "text": "The average t-SNE distance per layer is plotted between terms which BERT predicts to be coreferent (blue) and terms which BERT predicts to not be coreferent (red), aggregated over the full WSC dataset. It is observed that in BERT's final layers, the model learns to modify the hidden representations of terms to increase or decrease the distance between them based on whether or not it predicts they are coreferents. 
This behavior can also be seen in the green trace, which measures the difference in the average distance of terms predicted to be coreferent and those that are not predicted to be coreferent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Spatial Convergence of Coreferent Terms", "sec_num": "4.2.1" }, { "text": "Additionally, Figures 5a and 5b show a specific example of this phenomenon with the sentences: \"Fred watched TV while George went out to buy groceries. After an hour he got back\" (Figure 5a and Table 1 ) and \"Fred watched TV while George went out to buy groceries. After an hour he got up.\" (Figure 5b and Table 1 ). These two examples show how changing a single token (\"back\" became \"up\") significantly alters the sentence semantics, as in the first example, \"he\" refers to \"George\", and in the second example \"he\" refers to \"Fred\". InterpreT enables us to visualize this behavior using the t-SNE plots. Figure 5a shows how for the first example, \"he\" and \"George\" are much closer together than \"he\" and \"Fred\" are. 
Figure 5b shows how in the second example, the change from \"he got back\" to \"he got up\" is reflected in BERT's behavior, where the representation of \"Fred\" is pushed much closer to \"he\" than in the first example.", "cite_spans": [], "ref_spans": [ { "start": 14, "end": 31, "text": "Figures 5a and 5b", "ref_id": null }, { "start": 179, "end": 189, "text": "(Figure 5a", "ref_id": null }, { "start": 194, "end": 201, "text": "Table 1", "ref_id": null }, { "start": 291, "end": 301, "text": "(Figure 5b", "ref_id": null }, { "start": 306, "end": 313, "text": "Table 1", "ref_id": null }, { "start": 605, "end": 614, "text": "Figure 5a", "ref_id": null }, { "start": 716, "end": 725, "text": "Figure 5b", "ref_id": null } ], "eq_spans": [], "section": "Spatial Convergence of Coreferent Terms", "sec_num": "4.2.1" }, { "text": "Another feature of InterpreT is the ability to utilize custom metrics, such as the \"coreference intensity\" metric described in Section 3.3. Coreference intensity is visualized using the head summary plot in Figure 4b . The figure shows that the fine-tuned model contains attention heads that seem to perform well as coreference predictors. Darker shades of red correspond to higher attention between the two coreferents being evaluated. It appears that the heads which are the most involved in the coreference resolution task after fine-tuning are the 7th head of layer 10 and the 3rd head of layer 11. This new metric is used to examine the example previously presented with \"Fred\", \"George\", and \"he\". Figures 5c and 5d show the attention matrix visualizations for the head selected in Figure 4b (head 7 in layer 10). 
The token map visualization depicts how \"he\" attends heavily to \"George\" in the first example (5c) while attending to both \"Fred\" and \"George\" in the second example (5d).", "cite_spans": [], "ref_spans": [ { "start": 207, "end": 216, "text": "Figure 4b", "ref_id": null }, { "start": 704, "end": 721, "text": "Figures 5c and 5d", "ref_id": null }, { "start": 788, "end": 798, "text": "Figure 4b", "ref_id": null } ], "eq_spans": [], "section": "Attention Patterns between Coreferent Terms", "sec_num": "4.2.2" }, { "text": "InterpreT is a generic system for interpreting Transformers, as evidenced by its suite of tools for understanding general model behaviors and for enabling granular analysis of attention patterns and hidden states for individual examples. The capabilities provided by InterpreT empower users with new insights into what their models are learning, as illustrated in the visualization of the mitigation of the \"domain gap\" for ABSA and in the novel discovery of the spatial convergence of coreferent terms in WSC. These examples showcase how the fine-grained analysis enabled by InterpreT affords a higher level of insight that is indispensable for interpreting model behavior for complex language understanding tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "5" }, { "text": "InterpreT is an ongoing development effort. 
Future work will include support for additional use cases as well as additional analysis and interactivity features, such as the ability to dynamically add and modify examples while the app is running.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "5" }, { "text": "The source code for InterpreT, along with a live demo and screencast describing its functionality, is available at https://github.com/IntelLabs/nlp-architect/tree/master/solutions/InterpreT", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We thank the anonymous reviewers for their comments and suggestions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": "6" } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "How does bert answer questions?", "authors": [ { "first": "Betty", "middle": [], "last": "van Aken", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Winter", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "L\u00f6ser", "suffix": "" }, { "first": "Felix", "middle": [ "A" ], "last": "Gers", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 28th ACM International Conference on Information and Knowledge Management", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1145/3357384.3358028" ] }, "num": null, "urls": [], "raw_text": "Betty van Aken, Benjamin Winter, Alexander L\u00f6ser, and Felix A. Gers. 2019. How does bert answer questions? 
Proceedings of the 28th ACM International Conference on Information and Knowledge Management.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Visbert: Hidden-state visualizations for transformers", "authors": [ { "first": "Betty", "middle": [], "last": "van Aken", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Winter", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "L\u00f6ser", "suffix": "" }, { "first": "Felix", "middle": [ "A" ], "last": "Gers", "suffix": "" } ], "year": 2020, "venue": "Companion Proceedings of the Web Conference 2020, WWW '20", "volume": "", "issue": "", "pages": "207--211", "other_ids": { "DOI": [ "10.1145/3366424.3383542" ] }, "num": null, "urls": [], "raw_text": "Betty van Aken, Benjamin Winter, Alexander L\u00f6ser, and Felix A. Gers. 2020. Visbert: Hidden-state visualizations for transformers. In Companion Proceedings of the Web Conference 2020, WWW '20, pages 207-211, New York, NY, USA. Association for Computing Machinery.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Superglue: A stickier benchmark for general-purpose language understanding systems", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2020. Superglue: A stickier benchmark for general-purpose language understanding systems.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "What does bert look at? 
An analysis of BERT's attention", "authors": [ { "first": "Kevin", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Urvashi", "middle": [], "last": "Khandelwal", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning. 2019. What does BERT look at? An analysis of BERT's attention. In BlackBoxNLP@ACL.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota.
Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Recurrent neural networks with auxiliary labels for cross-domain opinion target extraction", "authors": [ { "first": "Ying", "middle": [], "last": "Ding", "suffix": "" }, { "first": "Jianfei", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Jing", "middle": [], "last": "Jiang", "suffix": "" } ], "year": 2017, "venue": "Association for the Advancement of Artificial Intelligence", "volume": "", "issue": "", "pages": "3436--3442", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ying Ding, Jianfei Yu, and Jing Jiang. 2017. Recurrent neural networks with auxiliary labels for cross-domain opinion target extraction. In Association for the Advancement of Artificial Intelligence, pages 3436-3442.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "exBERT: A Visual Analysis Tool to Explore Learned Representations in Transformer Models", "authors": [ { "first": "Benjamin", "middle": [], "last": "Hoover", "suffix": "" }, { "first": "Hendrik", "middle": [], "last": "Strobelt", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Gehrmann", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations", "volume": "", "issue": "", "pages": "187--196", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-demos.22" ] }, "num": null, "urls": [], "raw_text": "Benjamin Hoover, Hendrik Strobelt, and Sebastian Gehrmann. 2020. exBERT: A Visual Analysis Tool to Explore Learned Representations in Transformer Models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 187-196, Online.
Association for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "What does BERT learn about the structure of language", "authors": [ { "first": "Ganesh", "middle": [], "last": "Jawahar", "suffix": "" }, { "first": "Beno\u00eet", "middle": [], "last": "Sagot", "suffix": "" }, { "first": "Djam\u00e9", "middle": [], "last": "Seddah", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "3651--3657", "other_ids": { "DOI": [ "10.18653/v1/P19-1356" ] }, "num": null, "urls": [], "raw_text": "Ganesh Jawahar, Beno\u00eet Sagot, and Djam\u00e9 Seddah. 2019. What does BERT learn about the structure of language? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3651-3657, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Visualizing data using t-SNE", "authors": [ { "first": "Laurens", "middle": [], "last": "Van Der Maaten", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Hinton", "suffix": "" } ], "year": 2008, "venue": "Journal of Machine Learning Research", "volume": "9", "issue": "86", "pages": "2579--2605", "other_ids": {}, "num": null, "urls": [], "raw_text": "Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE.
Journal of Machine Learning Research, 9(86):2579-2605.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Syntactically aware cross-domain aspect and opinion terms extraction", "authors": [ { "first": "Oren", "middle": [], "last": "Pereg", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Korat", "suffix": "" }, { "first": "Moshe", "middle": [], "last": "Wasserblat", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 28th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "1772--1777", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oren Pereg, Daniel Korat, and Moshe Wasserblat. 2020. Syntactically aware cross-domain aspect and opinion terms extraction. In Proceedings of the 28th International Conference on Computational Linguistics, pages 1772-1777, Barcelona, Spain (Online). International Committee on Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "SemEval-2015 task 12: Aspect based sentiment analysis", "authors": [ { "first": "Maria", "middle": [], "last": "Pontiki", "suffix": "" }, { "first": "Dimitris", "middle": [], "last": "Galanis", "suffix": "" }, { "first": "Haris", "middle": [], "last": "Papageorgiou", "suffix": "" }, { "first": "Suresh", "middle": [], "last": "Manandhar", "suffix": "" }, { "first": "Ion", "middle": [], "last": "Androutsopoulos", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 9th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "486--495", "other_ids": { "DOI": [ "10.18653/v1/S15-2082" ] }, "num": null, "urls": [], "raw_text": "Maria Pontiki, Dimitris Galanis, Haris Papageorgiou, Suresh Manandhar, and Ion Androutsopoulos. 2015. SemEval-2015 task 12: Aspect based sentiment analysis. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pages 486-495, Denver, Colorado.
Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "SemEval-2014 task 4: Aspect based sentiment analysis", "authors": [ { "first": "Maria", "middle": [], "last": "Pontiki", "suffix": "" }, { "first": "Dimitris", "middle": [], "last": "Galanis", "suffix": "" }, { "first": "John", "middle": [], "last": "Pavlopoulos", "suffix": "" }, { "first": "Harris", "middle": [], "last": "Papageorgiou", "suffix": "" }, { "first": "Ion", "middle": [], "last": "Androutsopoulos", "suffix": "" }, { "first": "Suresh", "middle": [], "last": "Manandhar", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 8th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "27--35", "other_ids": { "DOI": [ "10.3115/v1/S14-2004" ] }, "num": null, "urls": [], "raw_text": "Maria Pontiki, Dimitris Galanis, John Pavlopoulos, Harris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar. 2014. SemEval-2014 task 4: Aspect based sentiment analysis. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 27-35, Dublin, Ireland. Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Language models are unsupervised multitask learners", "authors": [ { "first": "A", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Wu", "suffix": "" }, { "first": "R", "middle": [], "last": "Child", "suffix": "" }, { "first": "David", "middle": [], "last": "Luan", "suffix": "" }, { "first": "Dario", "middle": [], "last": "Amodei", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Radford, Jeffrey Wu, R. Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019.
Language models are unsupervised multitask learners.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Visualizing and measuring the geometry of BERT", "authors": [ { "first": "Emily", "middle": [], "last": "Reif", "suffix": "" }, { "first": "Ann", "middle": [], "last": "Yuan", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Wattenberg", "suffix": "" }, { "first": "Fernanda", "middle": [ "B" ], "last": "Viegas", "suffix": "" }, { "first": "Andy", "middle": [], "last": "Coenen", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Pearce", "suffix": "" }, { "first": "Been", "middle": [], "last": "Kim", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems", "volume": "32", "issue": "", "pages": "8594--8603", "other_ids": {}, "num": null, "urls": [], "raw_text": "Emily Reif, Ann Yuan, Martin Wattenberg, Fernanda B Viegas, Andy Coenen, Adam Pearce, and Been Kim. 2019. Visualizing and measuring the geometry of BERT. In Advances in Neural Information Processing Systems, volume 32, pages 8594-8603.
Curran Associates, Inc.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "The language interpretability tool: Extensible, interactive visualizations and analysis for NLP models", "authors": [ { "first": "Ian", "middle": [], "last": "Tenney", "suffix": "" }, { "first": "James", "middle": [], "last": "Wexler", "suffix": "" }, { "first": "Jasmijn", "middle": [], "last": "Bastings", "suffix": "" }, { "first": "Tolga", "middle": [], "last": "Bolukbasi", "suffix": "" }, { "first": "Andy", "middle": [], "last": "Coenen", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Gehrmann", "suffix": "" }, { "first": "Ellen", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Mahima", "middle": [], "last": "Pushkarna", "suffix": "" }, { "first": "Carey", "middle": [], "last": "Radebaugh", "suffix": "" }, { "first": "Emily", "middle": [], "last": "Reif", "suffix": "" }, { "first": "Ann", "middle": [], "last": "Yuan", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ian Tenney, James Wexler, Jasmijn Bastings, Tolga Bolukbasi, Andy Coenen, Sebastian Gehrmann, Ellen Jiang, Mahima Pushkarna, Carey Radebaugh, Emily Reif, and Ann Yuan. 2020. The language interpretability tool: Extensible, interactive visualizations and analysis for NLP models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Visualizing attention in transformer-based language representation models", "authors": [ { "first": "Jesse", "middle": [], "last": "Vig", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jesse Vig. 2019.
Visualizing attention in transformer-based language representation models.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "AllenNLP interpret: A framework for explaining predictions of NLP models", "authors": [ { "first": "Eric", "middle": [], "last": "Wallace", "suffix": "" }, { "first": "Jens", "middle": [], "last": "Tuyls", "suffix": "" }, { "first": "Junlin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Sanjay", "middle": [], "last": "Subramanian", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Sameer", "middle": [], "last": "Singh", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): System Demonstrations", "volume": "", "issue": "", "pages": "7--12", "other_ids": { "DOI": [ "10.18653/v1/D19-3002" ] }, "num": null, "urls": [], "raw_text": "Eric Wallace, Jens Tuyls, Junlin Wang, Sanjay Subramanian, Matt Gardner, and Sameer Singh. 2019. AllenNLP interpret: A framework for explaining predictions of NLP models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): System Demonstrations, pages 7-12, Hong Kong, China.
Association for Computational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Recursive neural structural correspondence network for cross-domain aspect and opinion co-extraction", "authors": [ { "first": "Wenya", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Sinno Jialin", "middle": [], "last": "Pan", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1--11", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wenya Wang and Sinno Jialin Pan. 2018. Recursive neural structural correspondence network for cross-domain aspect and opinion co-extraction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pages 1-11.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Recursive neural conditional random fields for aspect-based sentiment analysis", "authors": [ { "first": "Wenya", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Sinno Jialin", "middle": [], "last": "Pan", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Dahlmeier", "suffix": "" }, { "first": "Xiaokui", "middle": [], "last": "Xiao", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "616--626", "other_ids": { "DOI": [ "10.18653/v1/D16-1059" ] }, "num": null, "urls": [], "raw_text": "Wenya Wang, Sinno Jialin Pan, Daniel Dahlmeier, and Xiaokui Xiao. 2016. Recursive neural conditional random fields for aspect-based sentiment analysis. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 616-626, Austin, Texas. Association for Computational Linguistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "XLNet: Generalized autoregressive pretraining for language understanding.
arXiv:1906.08237", "authors": [ { "first": "Zhilin", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Zihang", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Yiming", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Jaime", "middle": [], "last": "Carbonell", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "Quoc", "middle": [ "V" ], "last": "Le", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. arXiv:1906.08237. Pretrained models and code are available at https://github.com/zihangdai/xlnet.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "uris": null, "num": null, "text": "Baseline (a) and LIBERT (b,c) final layer t-SNE embeddings of aspect terms colored by domain (a,b) and aspect extraction sentence-level F1 score (c) as seen in InterpreT." }, "FIGREF1": { "type_str": "figure", "uris": null, "num": null, "text": "InterpreT's Head Summary plot displaying aggregated grammar correlation using multi-selection for LIBERT (a), along with an example of the attention matrix of a selected attention head (head 13 in layer 4) (b)." }, "FIGREF2": { "type_str": "figure", "uris": null, "num": null, "text": "InterpreT summary plots for WSC, displaying summary statistics for the average predicted span token distance per layer (a) and the coreference intensity metric (b) for fine-tuned BERT, aggregated over the full dataset." }, "TABREF0": { "content": "", "html": null, "text": "Predictions of the fine-tuned BERT model for the two examples.
The values in bold are correct predictions.", "type_str": "table", "num": null } } } }