{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:09:31.754828Z" }, "title": "Tracking the Traces of Passivization and Negation in Contextualized Representations", "authors": [ { "first": "Hande", "middle": [], "last": "Celikkanat", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Helsinki Helsinki", "location": { "country": "Finland" } }, "email": "" }, { "first": "Sami", "middle": [], "last": "Virpioja", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Helsinki Helsinki", "location": { "country": "Finland" } }, "email": "" }, { "first": "J\u00f6rg", "middle": [], "last": "Tiedemann", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Helsinki Helsinki", "location": { "country": "Finland" } }, "email": "" }, { "first": "Marianna", "middle": [], "last": "Apidianaki", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Helsinki Helsinki", "location": { "country": "Finland" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Contextualized word representations encode rich information about syntax and semantics, alongside specificities of each context of use. While contextual variation does not always reflect actual meaning shifts, it can still reduce the similarity of embeddings for word instances having the same meaning. We explore the imprint of two specific linguistic alternations, namely passivization and negation, on the representations generated by neural models trained with two different objectives: masked language modeling and translation. Our exploration methodology is inspired by an approach previously proposed for removing societal biases from word vectors. 
We show that passivization and negation leave their traces on the representations, and that neutralizing this information leads to more similar embeddings for words that should preserve their meaning in the transformation. We also find clear differences in how the respective features generalize across datasets.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Contextualized word representations encode rich information about syntax and semantics, alongside specificities of each context of use. While contextual variation does not always reflect actual meaning shifts, it can still reduce the similarity of embeddings for word instances having the same meaning. We explore the imprint of two specific linguistic alternations, namely passivization and negation, on the representations generated by neural models trained with two different objectives: masked language modeling and translation. Our exploration methodology is inspired by an approach previously proposed for removing societal biases from word vectors. We show that passivization and negation leave their traces on the representations, and that neutralizing this information leads to more similar embeddings for words that should preserve their meaning in the transformation. We also find clear differences in how the respective features generalize across datasets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Contextualized representations extracted from pretrained language models reflect the syntactic and semantic properties of words (Linzen et al., 2016; Hewitt and Manning, 2019; Rogers et al., 2020; Tenney et al., 2019) as well as variation in their context of use. We propose to explore the impact of context variation on word representations. 
We specifically address representations generated by the BERT model (Devlin et al., 2019) , trained using a language modeling objective, and translation models involving one or more language pairs (Artetxe and Schwenk, 2019; V\u00e1zquez et al., 2020) .", "cite_spans": [ { "start": 128, "end": 149, "text": "(Linzen et al., 2016;", "ref_id": "BIBREF15" }, { "start": 150, "end": 175, "text": "Hewitt and Manning, 2019;", "ref_id": "BIBREF9" }, { "start": 176, "end": 196, "text": "Rogers et al., 2020;", "ref_id": "BIBREF24" }, { "start": 197, "end": 217, "text": "Tenney et al., 2019)", "ref_id": "BIBREF26" }, { "start": 411, "end": 432, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF4" }, { "start": 540, "end": 567, "text": "(Artetxe and Schwenk, 2019;", "ref_id": "BIBREF0" }, { "start": 568, "end": 589, "text": "V\u00e1zquez et al., 2020)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We run a series of controlled experiments using sentences illustrating both meaning preserving and meaning altering transformations from the SICK dataset (Marelli et al., 2014b) , and examples automatically generated using a template-based method (Prasad et al., 2019) . We explore the impact of specific alternations on the representations, namely passivization and negation. Examples in our datasets consist of sentences that only differ in terms of the specific alternation addressed. 
In order to detect the imprint of these transformations on the representations, we employ methodology inspired by work on linguistic bias detection in embedding representations (Bolukbasi et al., 2016; Lauscher et al., 2019; Ravfogel et al., 2020) .", "cite_spans": [ { "start": 154, "end": 177, "text": "(Marelli et al., 2014b)", "ref_id": "BIBREF17" }, { "start": 247, "end": 268, "text": "(Prasad et al., 2019)", "ref_id": "BIBREF21" }, { "start": 665, "end": 689, "text": "(Bolukbasi et al., 2016;", "ref_id": "BIBREF1" }, { "start": 690, "end": 712, "text": "Lauscher et al., 2019;", "ref_id": "BIBREF13" }, { "start": 713, "end": 735, "text": "Ravfogel et al., 2020)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Furthermore, we investigate the impact of removing the encoding of such alternations on word similarity. Intuitively, we would expect the representations of words present in sentences that have undergone passivization (PAS) to be highly similar despite the differences in syntactic structure. Consider, for example, the words mafia, millionaire and kidnapped in the examples 1 and 2 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1 The mafia kidnapped the millionaire. 2 The millionaire was kidnapped by the mafia.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "PAS changes the words' syntactic roles but their thematic roles remain the same. The meaning shift that results from this operation is mainly discursive, 1 shifting the focus from the theme to the agent, but the content words in the two sentences still refer to the same event and entities. 
2 Their representations should thus be highly similar.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We also address a meaning altering transformation which involves inserting (or removing) the negation particle to produce contradictions, as in 3 and 4 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "3 The boy is playing the piano. 4 The boy is not playing the piano.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The effect of negation (NEG) at the sentence level is obvious. However, the meaning of specific words (boy, playing, piano) should remain the same despite the whole sentence having the opposite meaning. Below, we explore the extent to which this type of context variation affects the similarity of the representations of word instances in the two sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We show that passivization and negation 3 have a significant imprint on the representations, and that their removal can improve word similarity estimation. Our results also highlight that this type of context variation is differently marked in representations generated by models trained with different objectives. Specifically, we find that the features marking this variation in embeddings produced by models trained with a translation objective generalize better across datasets than those in embeddings derived from models trained with a masked language modeling objective, in the sense that they seem to be independent of the specific dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1 Note, however, that the impact of the alternation on the framing of the sentence can be significant. 
Passive avoids identifying a causal agent and therefore conceals the responsibility for an event (Greene and Resnik, 2009) .", "cite_spans": [ { "start": 200, "end": 225, "text": "(Greene and Resnik, 2009)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2 In sentence 1 , the mafia is the agent and is in subject position, while the millionaire is the theme in direct object position. In 2 , the semantic relationship of the mafia and the millionaire to the kidnapping event is the same but their syntactic roles have changed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "3 These two transformations were preferred on the basis that they do not change the words in the sentence, as opposed to other possible transformations, which involve reformulations, e.g., \"a sewing machine\" vs. \"a machine made for sewing\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The analysis and interpretation of the linguistic knowledge present in contextualized representations has recently been the focus of a large amount of work (Clark et al., 2019; Voita et al., 2019b; Tenney et al., 2019; Talmor et al., 2019) . The bulk of this interpretation work relies on probing tasks which serve to predict linguistic properties from the representations generated by the models (Linzen, 2018; Rogers et al., 2020) . These might involve structural aspects of language, such as syntax, word order, or number agreement (Linzen et al., 2016; Hewitt and Manning, 2019; Hewitt and Liang, 2019) , or semantic phenomena such as semantic role labeling and coreference (Tenney et al., 2019; Kovaleva et al., 2019) . 
In our work, we shift the focus from interpreting the knowledge about language encoded in the representations, to exploring the imprint of two specific transformations, passivization and negation, on word representations.", "cite_spans": [ { "start": 156, "end": 176, "text": "(Clark et al., 2019;", "ref_id": "BIBREF2" }, { "start": 177, "end": 197, "text": "Voita et al., 2019b;", "ref_id": "BIBREF28" }, { "start": 198, "end": 218, "text": "Tenney et al., 2019;", "ref_id": "BIBREF26" }, { "start": 219, "end": 239, "text": "Talmor et al., 2019)", "ref_id": "BIBREF25" }, { "start": 397, "end": 411, "text": "(Linzen, 2018;", "ref_id": "BIBREF14" }, { "start": 412, "end": 432, "text": "Rogers et al., 2020)", "ref_id": "BIBREF24" }, { "start": 535, "end": 556, "text": "(Linzen et al., 2016;", "ref_id": "BIBREF15" }, { "start": 557, "end": 582, "text": "Hewitt and Manning, 2019;", "ref_id": "BIBREF9" }, { "start": 583, "end": 606, "text": "Hewitt and Liang, 2019)", "ref_id": "BIBREF8" }, { "start": 678, "end": 699, "text": "(Tenney et al., 2019;", "ref_id": "BIBREF26" }, { "start": 700, "end": 722, "text": "Kovaleva et al., 2019)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The majority of the above mentioned works address representations generated by models trained with a language modeling objective, such as LSTM RNNs (Linzen et al., 2016) , ELMo (Peters et al., 2018) and BERT (Devlin et al., 2019) . Voita et al. (2019a) propose to study the representations obtained from models trained with a different objective. 
We take the same stance and investigate the impact of context on representations generated by BERT, and by the encoder of neural machine translation (NMT) models involving one or more language pairs.", "cite_spans": [ { "start": 148, "end": 169, "text": "(Linzen et al., 2016)", "ref_id": "BIBREF15" }, { "start": 177, "end": 198, "text": "(Peters et al., 2018)", "ref_id": "BIBREF20" }, { "start": 208, "end": 229, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF4" }, { "start": 232, "end": 252, "text": "Voita et al. (2019a)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In order to detect the information related to the two studied transformations that is encoded in the representations, we employ methodology initially proposed for identifying and removing linguistic and other kinds of biases from representations. Such methods fall into two main paradigms: projection and adversarial methods. Projection methods identify specific directions in word embedding space that correspond to the protected attribute, and remove them. Bolukbasi et al. (2016) identify a gender subspace by exploring gendered word lists. Zhao et al. (2018) propose to train debiased word embeddings from scratch by altering the loss of the GloVe model (Pennington et al., 2014) to concentrate specific information (e.g., about gender) in a dedicated coordinate of each vector. Dev and Phillips (2019) propose a simple linear projection method to reduce the bias in word embeddings. Lauscher et al. (2019) develop a variation of this method that introduces more flexibility in the formation of the debiasing vector used in the projection. Adversarial methods extend the main task objective with a component that competes with the encoder, trying to extract the protected information from its representation (Goodfellow et al., 2014; Xie et al., 2017; Zhang et al., 2018) . 
These models cannot, however, completely remove the protected information, and their training is difficult (Elazar and Goldberg, 2018) . Xu et al. (2017) propose a null-space cleaning operator as a privacy mechanism to minimize the exposure of confidential information in a dataset. Given a model pre-trained for a given task, they remove from the input a subspace that contains the null-space, hence removing information that is not used for the main task. Ravfogel et al. (2020) propose a similar method, Iterative Null-space Projection (INLP), for removing information regarding a certain property from representations. It is based on the mathematical notion of linear projection and is data-driven in the directions it removes, like adversarial methods. In our experiments, we repurpose the INLP method for identifying and removing traces of the passivization and negation transformations from contextualized representations.", "cite_spans": [ { "start": 457, "end": 480, "text": "Bolukbasi et al. (2016)", "ref_id": "BIBREF1" }, { "start": 542, "end": 560, "text": "Zhao et al. (2018)", "ref_id": "BIBREF33" }, { "start": 656, "end": 681, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF19" }, { "start": 781, "end": 804, "text": "Dev and Phillips (2019)", "ref_id": "BIBREF3" }, { "start": 887, "end": 909, "text": "Lauscher et al. (2019)", "ref_id": "BIBREF13" }, { "start": 1208, "end": 1233, "text": "(Goodfellow et al., 2014;", "ref_id": "BIBREF6" }, { "start": 1234, "end": 1251, "text": "Xie et al., 2017;", "ref_id": "BIBREF30" }, { "start": 1252, "end": 1271, "text": "Zhang et al., 2018)", "ref_id": "BIBREF32" }, { "start": 1381, "end": 1408, "text": "(Elazar and Goldberg, 2018)", "ref_id": "BIBREF5" }, { "start": 1411, "end": 1427, "text": "Xu et al. (2017)", "ref_id": "BIBREF31" }, { "start": 1732, "end": 1754, "text": "Ravfogel et al. 
(2020)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In our experiments, we use contextualized representations generated by the BERT language model and two Transformer-based machine translation models (Section 3.1). We generate representations for words in two datasets with sentence pairs illustrating passivization and negation (Section 3.2). We focus on the main verb, and the nouns found in subject and object positions in the sentence pairs. We study the effect of the transformations on the representations using binary classification and iterative nullspace projection (Section 3.3). 4", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "3" }, { "text": "We obtain BERT representations using bert-base-uncased (Devlin et al., 2019) , a pre-trained language model that consists of 12 layers with 768 dimensions on each layer. We also extract representations from machine translation models involving one or more language pairs. We use a bilingual English-to-German model (which we call MT: EN > DE) and a model with two languages, German and Greek, on the target side (MT: EN > DE+EL). The latter is trained using language flag tokens in the spirit of Johnson et al. (2017) . We, however, feed the flags to the decoder instead of encoder. This way, we avoid the risk that the encoder is influenced by the target language and force the model to create more generic abstractions. 
For the two MT models, we use Transformer architectures trained on a multiparallel subset of the Europarl dataset (Koehn, 2005) , spanning \u2248 400,000 aligned sentences (Marecek et al., 2020) , with the following parameters: 6 layers in the encoder and in the decoder, 16 attention heads, 512 as the dimension of the encodings, and 4,096 as the feed-forward network inner dimension.", "cite_spans": [ { "start": 55, "end": 76, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF4" }, { "start": 496, "end": 517, "text": "Johnson et al. (2017)", "ref_id": "BIBREF10" }, { "start": 836, "end": 849, "text": "(Koehn, 2005)", "ref_id": "BIBREF11" }, { "start": 889, "end": 911, "text": "(Marecek et al., 2020)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Contextualized Representations", "sec_num": "3.1" }, { "text": "We explore the traces that the PAS transformation leaves on word representations using a dataset automatically created with the templates proposed by Prasad et al. (2019) . 5 The PAS sentence pairs generated by Prasad et al. (2019) in their original study contain relative clauses and are often syntactically very complex (e.g., the obnoxious manager that was astonished by the interesting jobs trusted the modest receptionists last month). 6 To reduce complexity and focus on the phenomenon of interest, we modify the templates to generate PAS sentence pairs without relative clauses (e.g., the obnoxious manager was astonished by the interesting jobs). In this manner, we generate 1,000 PAS sentence pairs. We call this dataset TEMPL-PAS.", "cite_spans": [ { "start": 150, "end": 170, "text": "Prasad et al. (2019)", "ref_id": "BIBREF21" }, { "start": 173, "end": 174, "text": "5", "ref_id": null }, { "start": 211, "end": 231, "text": "Prasad et al. 
(2019)", "ref_id": "BIBREF21" }, { "start": 442, "end": 443, "text": "6", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "3.2" }, { "text": "We also use sentence pairs from the SICK (Sentences Involving Compositional Knowledge) dataset (Marelli et al., 2014b) . 7 The SICK dataset has been obtained through crowdsourcing and illustrates lexical, syntactic and semantic phenomena that compositional distributional semantic models are expected to account for. PAS is one of the meaning preserving alternations in SICK, where a sentence S2 results from the passivization of an active sentence S1. We use all the 276 sentence 5 The code is available at https://github.com/ grushaprasad/RNN-Priming.", "cite_spans": [ { "start": 95, "end": 118, "text": "(Marelli et al., 2014b)", "ref_id": "BIBREF17" }, { "start": 121, "end": 122, "text": "7", "ref_id": null }, { "start": 481, "end": 482, "text": "5", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "3.2" }, { "text": "6 The complexity of the sentences also resulted in numerous syntactic analysis errors when we tried to parse them using Stanza (Qi et al., 2020) . 7 The dataset was used in SemEval 2014 Task 1: Evaluation of Compositional Distributional Semantic Models on Full Sentences through Semantic Relatedness and Textual Entailment (Marelli et al., 2014a) . pairs (i.e., total of 552 sentences) in SICK that illustrate the PAS transformation, and call this dataset SICK-PAS.", "cite_spans": [ { "start": 127, "end": 144, "text": "(Qi et al., 2020)", "ref_id": "BIBREF22" }, { "start": 147, "end": 148, "text": "7", "ref_id": null }, { "start": 323, "end": 346, "text": "(Marelli et al., 2014a)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "3.2" }, { "text": "For exploring negation, we again generate 1,000 sentence pairs with the Prasad et al. (2019) templates, inserting negation to produce contradictions. 
We call this dataset TEMPL-NEG. We also use the 400 sentence pairs illustrating negation in the SICK dataset, which we call SICK-NEG.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "3.2" }, { "text": "We distinguish nouns in subject and object positions, and assign the main verb of the sentence the label VERB. In the passivization examples, we compare nouns in subject position of active sentences with the corresponding noun in agent position of the passive sentence and label them as A-SUBJ/P-AG. Furthermore, we compare nouns in subject position of the passive examples with the nouns in object position of the corresponding active sentence, and label them as A-OBJ/P-SUBJ. In the negation examples, we compare nouns in the same position and label them as SUBJECT or OBJECT.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "3.2" }, { "text": "We parse both datasets with the Stanza parser (Qi et al., 2020) to obtain the dependency trees, from which we extract the elements for our comparison.", "cite_spans": [ { "start": 46, "end": 63, "text": "(Qi et al., 2020)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "3.2" }, { "text": "A straightforward approach for measuring the effect of the studied transformations on the contextualized word representations is to train a binary classifier to detect in which sentence variant (active/passive, affirmative/negated) the word occurred. For this purpose, we form training and test sets (70% and 30% of the SICK-PAS, SICK-NEG, TEMPL-PAS and TEMPL-NEG datasets) by grouping the noun and verb instances occurring in corresponding sentence pairs into two contrasting classes (e.g., active vs. passive). 
For a fair evaluation of the classifier performance, we make sure to preserve a lexical split between the training and test portions of the datasets, by grouping all instances of a specific word in one set (either train or test).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "3.3" }, { "text": "A successful classification on the test set shows that the representations encode informative features describing each variant (active vs. passive or affirmative vs. negative). The debiasing methods discussed in Section 2 are suitable for neutralizing such features. Here, we utilize Iterative Nullspace Projection (INLP) (Ravfogel et al., 2020) . Given a set of vectors x i \u2208 R d and corresponding discrete attributes Z, z i \u2208 {1, ...,k}, the goal is to learn a transformation g :", "cite_spans": [ { "start": 322, "end": 345, "text": "(Ravfogel et al., 2020)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "3.3" }, { "text": "R d \u2192 R d , such that z i cannot be predicted from g(x i ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "3.3" }, { "text": "The method is based on iteratively (1) training a linear classifier to predict z i from x i , followed by (2) projecting x i on the null-space of the classifier, using a projection matrix P N (W ) such that W (P N (W ) x) = 0 \u2200x, where W is the weight matrix of the classifier, and N (W ) is its null-space. Through the projection step in each iteration, the information detected by the trained linear classifier is removed from the representation. The procedure continues until the attempt to train a linear classifier on the projected data becomes unsuccessful. 
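For concreteness, the iteration can be sketched as follows. This is a simplified reimplementation with NumPy and scikit-learn on synthetic data; the stopping criterion, regularization value, and the toy "planted-direction" setup are our own assumptions, not the released INLP code.

```python
# Simplified INLP sketch (our own reimplementation, not the released code):
# iteratively train a linear classifier for the property (e.g. active vs.
# passive) and project the data onto the classifier's null-space until the
# property can no longer be predicted.
import numpy as np
from sklearn.linear_model import LogisticRegression

def nullspace_projection(W):
    # P_N(W) = I - W+ W, where W+ W projects onto the row space of W
    return np.eye(W.shape[1]) - np.linalg.pinv(W) @ W

def inlp(X, z, max_iters=10, chance=0.55):
    P = np.eye(X.shape[1])
    Xp = X.copy()
    for _ in range(max_iters):
        clf = LogisticRegression(C=0.001).fit(Xp, z)
        if clf.score(Xp, z) < chance:   # classifier near chance level: stop
            break
        Pn = nullspace_projection(clf.coef_)
        Xp = Xp @ Pn                    # Pn is symmetric, so X Pn^T = X Pn
        P = Pn @ P                      # accumulate P = P_N(W_m) ... P_N(W_0)
    return Xp, P

# Toy demo: plant a 'variant' direction into random vectors, then remove it.
rng = np.random.default_rng(0)
z = rng.integers(0, 2, 400)
X = rng.normal(size=(400, 20))
X[:, 0] += 4.0 * (2 * z - 1)            # planted, linearly separable signal
Xc, P = inlp(X, z)
probe = LogisticRegression().fit(Xc, z)
print(round(probe.score(Xc, z), 2))     # near-chance once the signal is removed
```

The accumulated matrix P can then be applied to further data in a single step (X_new @ P), which corresponds to reusing a trained projection on a second dataset.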
As a result of the procedure, one also obtains a projection matrix,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "3.3" }, { "text": "P = P N (W m ) P N (W m\u22121 )...P N (W 0 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "3.3" }, { "text": ", which is the product of all the null-space projections applied in all steps. This projection matrix P can then potentially be applied to uncleaned data in a single step to reproduce the effect of the whole operation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "3.3" }, { "text": "The features used by the classifiers may be very low-level, based on specific words or their role in the sentence. Such features are not very interesting as they are easily overfitted to the particular types of sentences in the training data. By testing the same features on a second dataset, we can measure whether they are abstract enough to be generalizable. Specifically, we apply the trained INLP projection to the second dataset, then train a new classifier on it. If the new classifier is able to predict the sentence variant, this means that the projection is specific to the first dataset, and is thus not useful for removing information relevant for this distinction from the second dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "3.3" }, { "text": "In this section, we present various analyses of the original data and the effects of the transformations on contextualized word representations. First, we provide a visualization of embeddings before and after null-space projection. Next, we study the classification results which demonstrate the success of INLP and, finally, we investigate the impact of the neutralization procedure on word similarity. We also provide evidence regarding the generalization capability of the algorithm and the projections it discovers. 
In all results, with the exception of visualizations, we report the average of 20 runs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4" }, { "text": "One of our main goals is to explore the extent to which grammatical variation is encoded in contextualized representations. Visualization is a useful tool for demonstrating the division of the representational space into different regions in controlled examples. We use multidimensional scaling (MDS) to show the impact of the variation on the encodings. MDS reveals the level of similarity of individual points in a dataset in terms of their pairwise distance. Our data points are the contextualized representations of words in the sentences. Figure 2 reflects the distinction between active and passive verb instances present in the TEMPL-PAS dataset.", "cite_spans": [], "ref_spans": [ { "start": 544, "end": 552, "text": "Figure 2", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Visualization", "sec_num": "4.1" }, { "text": "The top part of Figure 2 shows how the original representations are distributed. The separation between instances of the two classes seems almost linear, especially in the top layer of the models. For BERT, this is also the case for the middle layer (layer 6). The lower part of the figure shows that after the INLP procedure, the active and passive instances are no longer visually separable.", "cite_spans": [], "ref_spans": [ { "start": 16, "end": 24, "text": "Figure 2", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Visualization", "sec_num": "4.1" }, { "text": "For nouns in corresponding thematic roles in the active and passive sentences, the situation is similar except for the BERT-based representations. Figure 1 includes the plots for the top layer of each model, and the nouns reflecting the agent and theme in corresponding sentences. The separation between active and passive examples is clear in MT models but quite fuzzy when using BERT. 
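The MDS projection behind such plots can be computed, for instance, with scikit-learn; the sketch below uses random stand-in vectors, since the actual embeddings are not reproduced here, and the class shift is artificial.

```python
# Sketch: 2-D multidimensional scaling of contextualized vectors with
# scikit-learn, on random stand-in data (the real embeddings are not shown).
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
active = rng.normal(0.0, 1.0, size=(40, 64))    # stand-ins: verbs, active sentences
passive = rng.normal(1.0, 1.0, size=(40, 64))   # stand-ins: verbs, passive sentences
X = np.vstack([active, passive])

mds = MDS(n_components=2, dissimilarity='euclidean', random_state=0)
coords = mds.fit_transform(X)   # one 2-D point per word instance
print(coords.shape)             # (80, 2)
```

Each row of coords can then be scatter-plotted with one color per class to inspect whether the two variants occupy separable regions.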
8 However, the following section on classification-based results reveals that, even in this case, the distinction is still clearly present and can effectively be detected and removed by INLP.", "cite_spans": [], "ref_spans": [ { "start": 147, "end": 155, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Visualization", "sec_num": "4.1" }, { "text": "We also explore how easy it is to correctly assign different instances to the two classes using a logistic regression classifier with inverse L2 regularization strength of 0.001. 9 We conduct this experiment on the original data using two iterations of the INLP procedure. This shows the amount of information relevant to this distinction in the original dataset that is still present after null-space projection. Table 1 shows a successful classification of the TEMPL dataset before INLP for both transformations and all used grammatical categories, with the accuracy dropping to \u2248 0.5 by Iteration 2. This demonstrates that all representations explicitly encode the features that are altered by the PAS and NEG transformations, and that INLP can effectively remove them from the representations. This is especially informative for the BERT-based representations for nouns, a distinction that was not apparent from the visualization experiment discussed previously. The results for the SICK dataset are similar and available in the Appendix.", "cite_spans": [ { "start": 179, "end": 180, "text": "9", "ref_id": null } ], "ref_spans": [ { "start": 414, "end": 421, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Classification", "sec_num": "4.2" }, { "text": "We explore the similarity of individual word instances and how it is affected by the INLP neutralization procedure we apply. We study this effect on each of the encoder layers, and provide a comparison of four different measures to illustrate the impact of INLP on the embeddings. 
The first two metrics measure the distance between the classes C 1 and C 2 \u2208 C corresponding to our transformation variants, and we expect them to go down due to the neutralization procedure. Two additional metrics measure the distance of instances within the same class in order to verify that INLP does not produce any unwanted side effects when modifying the representations. 8 The full picture is available in the Appendix including MDS plots for SICK-PAS and NEG transformations.", "cite_spans": [ { "start": 488, "end": 489, "text": "8", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Similarity Estimation", "sec_num": "4.3" }, { "text": "9 Selected from among options of {0.1, 0.01, 0.001, 0.0001} to optimize the generalization of the classifier.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Estimation", "sec_num": "4.3" }, { "text": "[Table 1 layout: column groups Active-Passive (VERB, A-SUBJ/P-AG, A-OBJ/P-SUBJ) and Positive-Negative (VERB, SUBJECT, OBJECT), each with sub-columns It-0 and It-2.] Table 1 : Classification accuracy obtained on the TEMPL-PAS and TEMPL-NEG datasets before (Iteration 0, 'It-0') and after (Iteration 2, 'It-2') application of the INLP procedure.", "cite_spans": [], "ref_spans": [ { "start": 139, "end": 146, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Active-Passive", "sec_num": null }, { "text": "
The first metric computes the average pairwise inter-class distance and is defined as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Active-Passive", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\operatorname{avg}_{i \\in S} \\lVert x_i^{A} - x_i^{B} \\rVert,", "eq_num": "(1)" } ], "section": "Active-Passive", "sec_num": null }, { "text": "where S is the set of sentence pairs and x_i^A and x_i^B are the embeddings of the target word w_i in sentence variants A and B (e.g., active and passive).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Active-Passive", "sec_num": null }, { "text": "We expect this to be high prior to neutralization, and to drop significantly afterwards. We also measure the global inter-class distance:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Active-Passive", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\operatorname{avg}_{i \\in S,\\, C_1 \\in \\{A,B\\}} \\lVert x_i^{C_1} - \\operatorname{avg}_{j \\in S,\\, C_2 \\in \\{A,B\\} : C_2 \\neq C_1} x_j^{C_2} \\rVert,", "eq_num": "(2)" } ], "section": "Active-Passive", "sec_num": null }, { "text": "which measures the average distance of the embedding x_i^{C_1} of variant C_1 to the centroid of the corresponding word embeddings of the other variant C_2, x_j^{C_2}. We expect this value to also decrease after the projection, but less than the previous one, since it includes distances between all data points rather than only the paired sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Active-Passive", "sec_num": null }, { "text": "Neutralization should not significantly affect similarities between embeddings of the same word w_i in different contexts within the same sentence variant C_k.
We measure this using the same-word intra-class distance, expecting it to stay approximately the same:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Active-Passive", "sec_num": null }, { "text": "\\operatorname{avg}_{i \\in S,\\, C_k \\in \\{A,B\\}} \\lVert x_i^{C_k} - \\operatorname{avg}_{j \\in S :\\, w_j = w_i,\\, j \\neq i} x_j^{C_k} \\rVert (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Active-Passive", "sec_num": null }, { "text": "Finally, analogous to the global inter-class distance, we also measure the global intra-class distance:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Active-Passive", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\operatorname{avg}_{i \\in S,\\, C_k \\in \\{A,B\\}} \\lVert x_i^{C_k} - \\operatorname{avg}_{j \\in S} x_j^{C_k} \\rVert,", "eq_num": "(4)" } ], "section": "Active-Passive", "sec_num": null }, { "text": "which computes the average distance of the embeddings x_i^{C_k} to the centroid of the word embeddings of variant C_k. Again, we expect this not to decrease. Figure 3 shows the results for the verbs and nouns in the TEMPL-PAS dataset before and after INLP. 10 In all plots, especially the MT ones, we see a significant drop in pairwise inter-class distance after INLP application, which shows the effectiveness of the procedure. As expected, global inter-class distance also drops, but to a smaller degree.
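The four measures above translate directly into code. A small numpy sketch follows; the array and variable names (XA, XB, words) are illustrative assumptions, not taken from the paper's codebase:

```python
import numpy as np

def pairwise_inter_class(XA, XB):
    # Eq. (1): average distance between the two embeddings of the same
    # target word in the paired sentence variants A and B.
    return float(np.mean(np.linalg.norm(XA - XB, axis=1)))

def global_inter_class(XA, XB):
    # Eq. (2): average distance of each embedding to the centroid of the
    # opposite variant, averaged over both directions.
    d_a = np.linalg.norm(XA - XB.mean(axis=0), axis=1)
    d_b = np.linalg.norm(XB - XA.mean(axis=0), axis=1)
    return float(np.mean(np.concatenate([d_a, d_b])))

def same_word_intra_class(X, words):
    # Eq. (3), for one variant C_k: average distance of each instance to the
    # centroid of the *other* instances of the same word in that variant.
    dists = []
    for i, w in enumerate(words):
        others = [j for j, u in enumerate(words) if u == w and j != i]
        if others:
            dists.append(np.linalg.norm(X[i] - X[others].mean(axis=0)))
    return float(np.mean(dists))

def global_intra_class(X):
    # Eq. (4), for one variant C_k: average distance to the variant's centroid.
    return float(np.mean(np.linalg.norm(X - X.mean(axis=0), axis=1)))
```

The first two functions correspond to Eqs. (1) and (2); the intra-class functions give the value for a single variant C_k, which the paper averages over both variants.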
In contrast, and also as expected, we do not observe drops in same-word intra-class distance or global intra-class distance, which implies that the projection does not cause major damage to the information that needs to be preserved.", "cite_spans": [], "ref_spans": [ { "start": 157, "end": 165, "text": "Figure 3", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Active-Passive", "sec_num": null }, { "text": "Finally, we investigate the possibility of transferring null-space projections across datasets and word classes, in order to understand how generic the features representing the targeted transformation are.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Null-space Projection Transfer", "sec_num": "4.4" }, { "text": "We learn a projection on the TEMPL-PAS and TEMPL-NEG datasets, and use it to clean SICK-PAS and SICK-NEG, respectively. We then evaluate how well the transfer works by using the cleaned dataset to train and test a classifier. If the transfer succeeds and the projection learned on the first dataset effectively cleans the second one, the classification attempt will fail, because all information useful to the classifier has been removed. Conversely, if a classifier can still be successfully trained on the cleaned version of the SICK datasets, then we assume that the transfer failed, since information relevant to the distinction still persists.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transfer across Datasets", "sec_num": "4.4.1" }, { "text": "In Figure 4, we compare (a) the classification accuracy on the original SICK-PAS and SICK-NEG datasets (dotted lines) to (b) the accuracy obtained on these datasets cleaned using the null-space projection learned on TEMPL-PAS and TEMPL-NEG, respectively (solid lines).
We report results for nouns and verbs obtained using representations generated by BERT and the MT encoders.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 11, "text": "Figure 4", "ref_id": "FIGREF5" } ], "eq_spans": [], "section": "Transfer across Datasets", "sec_num": "4.4.1" }, { "text": "The transfer from TEMPL to SICK does not seem to work well with BERT representations, since a classifier trained on the cleaned SICK datasets still obtains fairly high accuracy. An exception to this is seen in the final layers of BERT, and for subjects in the SICK-NEG dataset, where the cleaned dataset yields slightly lower (70-90%) accuracy. For the MT representations, on the other hand, we observe low accuracies for the post-transfer classification, which suggests a successful transfer of information between the datasets. Especially for TEMPL-PAS VERB and A-SUBJ/P-AG, representations obtained with the MT model that involves two language pairs respond better to the transfer, as shown by significantly lower post-cleaning accuracies (i.e., less remaining information) than the ones obtained by the MT model with one target language. Notably, this trend is not seen for TEMPL-NEG.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transfer across Datasets", "sec_num": "4.4.1" }, { "text": "We also tried transferring the null-space projection between different grammatical categories, specifically by learning the projection for verbs, subjects, or objects, and then applying it to one of the other two. An example of such a transfer is shown in Figure 5. Here, we apply the projection learned on verbs in the negation dataset to neutralize the same information from the noun in subject position. This seems to work surprisingly well for the MT-based representations. For BERT-based representations and for the passivization dataset, on the other hand, the transfer across categories is not very successful, with classification accuracies typically remaining above 80%.
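The transfer experiment can be sketched as follows: learn the projection on a source dataset, clean the target dataset with it, and check whether a freshly trained probe still separates the classes there. This is a hedged toy illustration in numpy, not the paper's pipeline: the mean-difference "probe" stands in for the logistic-regression classifier, a single projection iteration is used for brevity, and the synthetic data and names are assumptions.

```python
import numpy as np

def probe_direction(X, y):
    # Illustrative linear probe direction (class-mean difference); the
    # paper trains a logistic-regression classifier instead.
    w = X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0)
    return w / np.linalg.norm(w)

def probe_accuracy(X, y):
    # Accuracy of a simple thresholded linear probe on the given data.
    scores = X @ probe_direction(X, y)
    preds = (scores > scores.mean()).astype(int)
    return float((preds == y).mean())

def transfer_cleaning(X_source, y_source, X_target):
    # Learn a (single-iteration) nullspace projection on the source
    # dataset (e.g., TEMPL) and apply it to the target (e.g., SICK).
    w = probe_direction(X_source, y_source).reshape(-1, 1)
    P = np.eye(X_source.shape[1]) - w @ w.T
    return X_target @ P

# Toy setup: source and target share the same "transformation" direction,
# the case in which the transfer should succeed.
rng = np.random.default_rng(1)
d = 12
y = np.repeat([0, 1], 300)
X_source = rng.normal(size=(600, d))
X_source[y == 1, 0] += 4.0
X_target = rng.normal(size=(600, d))
X_target[y == 1, 0] += 4.0

acc_before = probe_accuracy(X_target, y)
acc_after = probe_accuracy(transfer_cleaning(X_source, y, X_target), y)
```

When the two datasets encode the transformation along shared directions, post-cleaning accuracy falls toward chance; when they do not, it stays high, which is the diagnostic used in this section.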
These results highlight that the information is highly specific to words of a certain grammatical category and that the projection cannot be applied as a universal neutralization procedure.", "cite_spans": [], "ref_spans": [ { "start": 257, "end": 265, "text": "Figure 5", "ref_id": "FIGREF6" } ], "eq_spans": [], "section": "Transfer across Grammatical Categories", "sec_num": "4.4.2" }, { "text": "We have shown that transformations such as passivization and negation leave a strong imprint on contextualized representations. We demonstrate that, by leveraging this information, it is possible to build classifiers that successfully identify word instances falling into either category. The traces of these transformations also affect the similarity of word instances that refer to the same entities and events. Repurposing a method initially proposed for identifying and removing societal biases from representations, we show that it is possible to neutralize the trace of such transformations in contextualized representations, and to preserve the similarity of word instances having the same reference. Interestingly, the features that predict the transformation variant seem to be more generalizable in the embeddings generated by an MT encoder than in the BERT embeddings, implying that the BERT embeddings contain more surface-level information specific to each dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "Figures 6 and 7 provide the complete MDS visualizations for TEMPL-PAS and TEMPL-NEG. For TEMPL-PAS, we see a significant imprint for the nouns as well. For TEMPL-NEG, the imprint is mostly visible for the verbs; note, however, that this does not mean the nouns are unclassifiable, since the INLP classifier is able to find a good classification for them as well (Table 1). Table 2 shows the classification accuracies for the SICK-PAS and SICK-NEG datasets, before and after INLP.
Similar to the TEMPL-PAS and TEMPL-NEG results, these also show high classification accuracy before INLP and chance-level accuracy after it, demonstrating both a significant initial imprint and the effectiveness of the INLP procedure.", "cite_spans": [], "ref_spans": [ { "start": 357, "end": 366, "text": "(Table 1)", "ref_id": null }, { "start": 369, "end": 376, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "Our code and data are available at https://github.com/Helsinki-NLP/Syntactic_Debiasing", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "TEMPL-NEG results are available in the Appendix.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work has been supported by the Fo-Tran project, funded by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement \u2116 771113).
We thank the reviewers for their thoughtful comments and valuable suggestions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null }, { "text": "Positive-Negative VERB A-SUBJ/P-AG A-OBJ/ P-SUBJ VERB SUBJECT OBJECT It-0 It-2 It-0 It-2 It-0 It-2 It-0 It-2 It-0 It-2 It-0 It-2 BERT L- ", "cite_spans": [], "ref_spans": [ { "start": 42, "end": 151, "text": "P-SUBJ VERB SUBJECT OBJECT It-0 It-2 It-0 It-2 It-0 It-2 It-0 It-2 It-0 It-2 It-0 It-2 BERT L-", "ref_id": null } ], "eq_spans": [], "section": "Active-Passive", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Massively multilingual sentence embeddings for zeroshot cross-lingual transfer and beyond", "authors": [ { "first": "Mikel", "middle": [], "last": "Artetxe", "suffix": "" }, { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" } ], "year": 2019, "venue": "", "volume": "7", "issue": "", "pages": "597--610", "other_ids": { "DOI": [ "10.1162/tacl_a_00288" ] }, "num": null, "urls": [], "raw_text": "Mikel Artetxe and Holger Schwenk. 2019. Mas- sively multilingual sentence embeddings for zero- shot cross-lingual transfer and beyond. volume 7, pages 597-610. MIT Press.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Man is to Computer Programmer as Woman is to Homemaker? 
Debiasing Word Embeddings", "authors": [ { "first": "Tolga", "middle": [], "last": "Bolukbasi", "suffix": "" }, { "first": "Kai-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "James", "middle": [ "Y" ], "last": "Zou", "suffix": "" }, { "first": "Venkatesh", "middle": [], "last": "Saligrama", "suffix": "" }, { "first": "Adam", "middle": [ "T" ], "last": "Kalai", "suffix": "" } ], "year": 2016, "venue": "Advances in Neural Information Processing Systems", "volume": "29", "issue": "", "pages": "4349--4357", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings. In Advances in Neural Information Processing Systems 29, pages 4349-4357.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "What does BERT look at? An analysis of BERT's attention", "authors": [ { "first": "Kevin", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Urvashi", "middle": [], "last": "Khandelwal", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2019, "venue": "2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP", "volume": "", "issue": "", "pages": "276--286", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning. 2019. What does BERT look at? An analysis of BERT's attention.
In 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 276-286.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Attenuating bias in word vectors", "authors": [ { "first": "Sunipa", "middle": [], "last": "Dev", "suffix": "" }, { "first": "Jeff", "middle": [ "M" ], "last": "Phillips", "suffix": "" } ], "year": 2019, "venue": "CoRR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sunipa Dev and Jeff M. Phillips. 2019. Attenuating bias in word vectors. CoRR, abs/1901.07656.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Un- derstanding. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4171-4186.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Adversarial removal of demographic attributes from text data", "authors": [ { "first": "Yanai", "middle": [], "last": "Elazar", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "11--21", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yanai Elazar and Yoav Goldberg. 2018. Adversarial removal of demographic attributes from text data. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 11- 21.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Generative adversarial nets", "authors": [ { "first": "Ian", "middle": [], "last": "Goodfellow", "suffix": "" }, { "first": "Jean", "middle": [], "last": "Pouget-Abadie", "suffix": "" }, { "first": "Mehdi", "middle": [], "last": "Mirza", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Xu", "suffix": "" }, { "first": "David", "middle": [], "last": "Warde-Farley", "suffix": "" }, { "first": "Sherjil", "middle": [], "last": "Ozair", "suffix": "" }, { "first": "Aaron", "middle": [], "last": "Courville", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "Advances in Neural Information Processing Systems", "volume": "27", "issue": "", "pages": "2672--2680", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative ad- versarial nets. 
In Advances in Neural Information Processing Systems 27, pages 2672-2680.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "More than words: Syntactic packaging and implicit sentiment", "authors": [ { "first": "Stephan", "middle": [], "last": "Greene", "suffix": "" }, { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "503--511", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephan Greene and Philip Resnik. 2009. More than words: Syntactic packaging and implicit sentiment. In Proceedings of the Annual Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Technolo- gies, pages 503-511.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Designing and interpreting probes with control tasks", "authors": [ { "first": "John", "middle": [], "last": "Hewitt", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "2733--2743", "other_ids": { "DOI": [ "10.18653/v1/D19-1275" ] }, "num": null, "urls": [], "raw_text": "John Hewitt and Percy Liang. 2019. Designing and interpreting probes with control tasks. 
In Proceed- ings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Inter- national Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 2733-2743.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A Structural Probe for Finding Syntax in Word Representations", "authors": [ { "first": "John", "middle": [], "last": "Hewitt", "suffix": "" }, { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "4129--4138", "other_ids": { "DOI": [ "10.18653/v1/N19-1419" ] }, "num": null, "urls": [], "raw_text": "John Hewitt and Christopher D. Manning. 2019. A Structural Probe for Finding Syntax in Word Repre- sentations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4129-4138.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Google's multilingual neural machine translation system: Enabling zero-shot translation", "authors": [ { "first": "Melvin", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Schuster", "suffix": "" }, { "first": "Quoc", "middle": [ "V" ], "last": "Le", "suffix": "" }, { "first": "Maxim", "middle": [], "last": "Krikun", "suffix": "" }, { "first": "Yonghui", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Zhifeng", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Nikhil", "middle": [], "last": "Thorat", "suffix": "" }, { "first": "Fernanda", "middle": [], "last": "Vi\u00e9gas", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Wattenberg", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": 
"Macduff", "middle": [], "last": "Hughes", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2017, "venue": "Transactions of the Association for Computational Linguistics", "volume": "5", "issue": "", "pages": "339--351", "other_ids": { "DOI": [ "10.1162/tacl_a_00065" ] }, "num": null, "urls": [], "raw_text": "Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Vi\u00e9gas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2017. Google's multilingual neural machine translation system: En- abling zero-shot translation. Transactions of the As- sociation for Computational Linguistics, 5:339-351.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Europarl: A parallel corpus for statistical machine translation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2005, "venue": "MT summit", "volume": "5", "issue": "", "pages": "79--86", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In MT summit, vol- ume 5, pages 79-86. Citeseer.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Revealing the Dark Secrets of BERT", "authors": [ { "first": "Olga", "middle": [], "last": "Kovaleva", "suffix": "" }, { "first": "Alexey", "middle": [], "last": "Romanov", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Rogers", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Rumshisky", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "4365--4374", "other_ids": { "DOI": [ "10.18653/v1/D19-1445" ] }, "num": null, "urls": [], "raw_text": "Olga Kovaleva, Alexey Romanov, Anna Rogers, and Anna Rumshisky. 2019. 
Revealing the Dark Secrets of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 4365-4374.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A general framework for implicit and explicit debiasing of distributional word vector spaces", "authors": [ { "first": "Anne", "middle": [], "last": "Lauscher", "suffix": "" }, { "first": "Goran", "middle": [], "last": "Glava\u0161", "suffix": "" }, { "first": "Simone", "middle": [ "Paolo" ], "last": "Ponzetto", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Vuli\u0107", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anne Lauscher, Goran Glava\u0161, Simone Paolo Ponzetto, and Ivan Vuli\u0107. 2019. A general framework for im- plicit and explicit debiasing of distributional word vector spaces.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "What can linguistics and deep learning contribute to each other? CoRR", "authors": [ { "first": "Tal", "middle": [], "last": "Linzen", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tal Linzen. 2018. What can linguistics and deep learning contribute to each other? 
CoRR, abs/1809.04179.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies", "authors": [ { "first": "Tal", "middle": [], "last": "Linzen", "suffix": "" }, { "first": "Emmanuel", "middle": [], "last": "Dupoux", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2016, "venue": "Transactions of the Association for Computational Linguistics", "volume": "4", "issue": "", "pages": "521--535", "other_ids": { "DOI": [ "10.1162/tacl_a_00115" ] }, "num": null, "urls": [], "raw_text": "Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies. Transactions of the Association for Computational Linguistics, 4:521- 535.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "SemEval-2014 task 1: Evaluation of compositional distributional semantic models on full sentences through semantic relatedness and textual entailment", "authors": [ { "first": "Marco", "middle": [], "last": "Marelli", "suffix": "" }, { "first": "Luisa", "middle": [], "last": "Bentivogli", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" }, { "first": "Raffaella", "middle": [], "last": "Bernardi", "suffix": "" }, { "first": "Stefano", "middle": [], "last": "Menini", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Zamparelli", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 8th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "1--8", "other_ids": { "DOI": [ "10.3115/v1/S14-2001" ] }, "num": null, "urls": [], "raw_text": "Marco Marelli, Luisa Bentivogli, Marco Baroni, Raf- faella Bernardi, Stefano Menini, and Roberto Zam- parelli. 2014a. SemEval-2014 task 1: Evaluation of compositional distributional semantic models on full sentences through semantic relatedness and textual entailment. 
In Proceedings of the 8th International Workshop on Semantic Evaluation, pages 1-8.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "A SICK cure for the evaluation of compositional distributional semantic models", "authors": [ { "first": "Marco", "middle": [], "last": "Marelli", "suffix": "" }, { "first": "Stefano", "middle": [], "last": "Menini", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" }, { "first": "Luisa", "middle": [], "last": "Bentivogli", "suffix": "" }, { "first": "Raffaella", "middle": [], "last": "Bernardi", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Zamparelli", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 9th International Conference on Language Resources and Evaluation", "volume": "", "issue": "", "pages": "216--223", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marco Marelli, Stefano Menini, Marco Baroni, Luisa Bentivogli, Raffaella Bernardi, and Roberto Zampar- elli. 2014b. A SICK cure for the evaluation of com- positional distributional semantic models. In Pro- ceedings of the 9th International Conference on Lan- guage Resources and Evaluation, pages 216-223.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Are multilingual neural machine translation models better at capturing linguistic features? The Prague Bulletin of Mathematical Linguistics", "authors": [ { "first": "David", "middle": [], "last": "Marecek", "suffix": "" }, { "first": "Hande", "middle": [], "last": "Celikkanat", "suffix": "" }, { "first": "Miikka", "middle": [], "last": "Silfverberg", "suffix": "" }, { "first": "Vinit", "middle": [], "last": "Ravishankar", "suffix": "" }, { "first": "J\u00f6rg", "middle": [], "last": "Tiedemann", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Marecek, Hande Celikkanat, Miikka Silfverberg, Vinit Ravishankar, and J\u00f6rg Tiedemann. 2020. 
Are multilingual neural machine translation models bet- ter at capturing linguistic features? The Prague Bul- letin of Mathematical Linguistics (in press).", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Glove: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word rep- resentation. In Empirical Methods in Natural Lan- guage Processing (EMNLP), pages 1532-1543.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Deep Contextualized Word Representations", "authors": [ { "first": "Matthew", "middle": [], "last": "Peters", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "2227--2237", "other_ids": { "DOI": [ "10.18653/v1/N18-1202" ] }, "num": null, "urls": [], "raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep Contextualized Word Rep- resentations. 
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2227-2237.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Using Priming to Uncover the Organization of Syntactic Representations in Neural Language Models", "authors": [ { "first": "Grusha", "middle": [], "last": "Prasad", "suffix": "" }, { "first": "Marten", "middle": [], "last": "Van Schijndel", "suffix": "" }, { "first": "Tal", "middle": [], "last": "Linzen", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)", "volume": "", "issue": "", "pages": "66--76", "other_ids": { "DOI": [ "10.18653/v1/K19-1007" ] }, "num": null, "urls": [], "raw_text": "Grusha Prasad, Marten van Schijndel, and Tal Linzen. 2019. Using Priming to Uncover the Organiza- tion of Syntactic Representations in Neural Lan- guage Models. In Proceedings of the 23rd Confer- ence on Computational Natural Language Learning (CoNLL), pages 66-76.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Stanza: A Python natural language processing toolkit for many human languages", "authors": [ { "first": "Peng", "middle": [], "last": "Qi", "suffix": "" }, { "first": "Yuhao", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yuhui", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Bolton", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D. Manning. 2020. Stanza: A Python natural language processing toolkit for many human languages. 
In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Null it out: Guarding protected attributes by iterative nullspace projection", "authors": [ { "first": "Shauli", "middle": [], "last": "Ravfogel", "suffix": "" }, { "first": "Yanai", "middle": [], "last": "Elazar", "suffix": "" }, { "first": "Hila", "middle": [], "last": "Gonen", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Twiton", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shauli Ravfogel, Yanai Elazar, Hila Gonen, Michael Twiton, and Yoav Goldberg. 2020. Null it out: Guarding protected attributes by iterative nullspace projection.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "A Primer in BERTology: What we know about how BERT works", "authors": [ { "first": "Anna", "middle": [], "last": "Rogers", "suffix": "" }, { "first": "Olga", "middle": [], "last": "Kovaleva", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Rumshisky", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anna Rogers, Olga Kovaleva, and Anna Rumshisky. 2020. A Primer in BERTology: What we know about how BERT works.
arXiv preprint arXiv:2002.12327v1.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "oLMpics - On what Language Model Pre-training Captures", "authors": [ { "first": "Alon", "middle": [], "last": "Talmor", "suffix": "" }, { "first": "Yanai", "middle": [], "last": "Elazar", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Berant", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1912.13283v1" ] }, "num": null, "urls": [], "raw_text": "Alon Talmor, Yanai Elazar, Yoav Goldberg, and Jonathan Berant. 2019. oLMpics - On what Language Model Pre-training Captures. arXiv preprint arXiv:1912.13283v1.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "BERT Rediscovers the Classical NLP Pipeline", "authors": [ { "first": "Ian", "middle": [], "last": "Tenney", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" }, { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4593--4601", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019. BERT Rediscovers the Classical NLP Pipeline.
In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4593-4601.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "The Bottom-up Evolution of Representations in the Transformer: A Study with Machine Translation and Language Modeling Objectives", "authors": [ { "first": "Elena", "middle": [], "last": "Voita", "suffix": "" }, { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Titov", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "4396--4406", "other_ids": { "DOI": [ "10.18653/v1/D19-1448" ] }, "num": null, "urls": [], "raw_text": "Elena Voita, Rico Sennrich, and Ivan Titov. 2019a. The Bottom-up Evolution of Representations in the Transformer: A Study with Machine Translation and Language Modeling Objectives.
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4396-4406.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Analyzing multi-head self-attention: Specialized heads do the heavy lifting, the rest can be pruned", "authors": [ { "first": "Elena", "middle": [], "last": "Voita", "suffix": "" }, { "first": "David", "middle": [], "last": "Talbot", "suffix": "" }, { "first": "Fedor", "middle": [], "last": "Moiseev", "suffix": "" }, { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Titov", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "5797--5808", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elena Voita, David Talbot, Fedor Moiseev, Rico Sennrich, and Ivan Titov. 2019b. Analyzing multi-head self-attention: Specialized heads do the heavy lifting, the rest can be pruned. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5797-5808.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "A systematic study of inner-attention-based sentence representations in multilingual neural machine translation", "authors": [ { "first": "Ra\u00fal", "middle": [], "last": "V\u00e1zquez", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Raganato", "suffix": "" }, { "first": "Mathias", "middle": [], "last": "Creutz", "suffix": "" }, { "first": "J\u00f6rg", "middle": [], "last": "Tiedemann", "suffix": "" } ], "year": 2020, "venue": "Computational Linguistics", "volume": "46", "issue": "2", "pages": "387--424", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ra\u00fal V\u00e1zquez, Alessandro Raganato, Mathias Creutz, and J\u00f6rg Tiedemann. 2020.
A systematic study of inner-attention-based sentence representations in multilingual neural machine translation. Computational Linguistics, 46(2):387-424.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Controllable invariance through adversarial feature learning", "authors": [ { "first": "Qizhe", "middle": [], "last": "Xie", "suffix": "" }, { "first": "Zihang", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Yulun", "middle": [], "last": "Du", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "30", "issue": "", "pages": "585--596", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qizhe Xie, Zihang Dai, Yulun Du, Eduard Hovy, and Graham Neubig. 2017. Controllable invariance through adversarial feature learning. In Advances in Neural Information Processing Systems 30, pages 585-596.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Cleaning the Null Space: A Privacy Mechanism for Predictors", "authors": [ { "first": "Ke", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Tongyi", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Swair", "middle": [], "last": "Shah", "suffix": "" }, { "first": "Crystal", "middle": [], "last": "Maung", "suffix": "" }, { "first": "Haim", "middle": [], "last": "Schweitzer", "suffix": "" } ], "year": 2017, "venue": "AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "2789--2795", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ke Xu, Tongyi Cao, Swair Shah, Crystal Maung, and Haim Schweitzer. 2017. Cleaning the Null Space: A Privacy Mechanism for Predictors.
In AAAI Conference on Artificial Intelligence, pages 2789-2795.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Mitigating unwanted biases with adversarial learning", "authors": [ { "first": "Brian", "middle": [], "last": "Hu Zhang", "suffix": "" }, { "first": "Blake", "middle": [], "last": "Lemoine", "suffix": "" }, { "first": "Margaret", "middle": [], "last": "Mitchell", "suffix": "" } ], "year": 2018, "venue": "AIES '18: Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society", "volume": "", "issue": "", "pages": "335--340", "other_ids": { "DOI": [ "10.1145/3278721.3278779" ] }, "num": null, "urls": [], "raw_text": "Brian Hu Zhang, Blake Lemoine, and Margaret Mitchell. 2018. Mitigating unwanted biases with adversarial learning. In AIES '18: Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, pages 335-340.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Learning gender-neutral word embeddings", "authors": [ { "first": "Jieyu", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Yichao", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Zeyu", "middle": [], "last": "Li", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Kai-Wei", "middle": [], "last": "Chang", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "4847--4853", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jieyu Zhao, Yichao Zhou, Zeyu Li, Wei Wang, and Kai-Wei Chang. 2018. Learning gender-neutral word embeddings.
In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4847-4853.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Legend labels: MT (EN > DE) representations; MT (EN > DE+EL) representations", "uris": null, "type_str": "figure", "num": null }, "FIGREF1": { "text": "Multidimensional scaling (MDS) visualization of representations obtained for verbs and nouns from active (red) and corresponding passive (blue) sentences. Data points are BERT representations (top) and the encodings from machine translation models involving one (middle) or two language pairs (bottom).", "uris": null, "type_str": "figure", "num": null }, "FIGREF2": { "text": "Legend labels: MT (EN > DE) representations; MT (EN > DE+EL) representations", "uris": null, "type_str": "figure", "num": null }, "FIGREF3": { "text": "Multidimensional scaling (MDS) visualization of verbs in TEMPL-PAS. We show the word representations before (top part of the figure) and after INLP cleaning (lower part). The columns from left to right refer to the bottom, middle, and top layers of the encoder.", "uris": null, "type_str": "figure", "num": null }, "FIGREF4": { "text": "Average Euclidean distance for instances of nouns and verbs in the TEMPL-PAS dataset. Dashed lines show distances in the original dataset, and solid lines reflect distances after applying INLP. Distances are given for representations generated by each layer of the models.", "uris": null, "type_str": "figure", "num": null }, "FIGREF5": { "text": "Classification accuracies for the SICK-PAS and SICK-NEG datasets on (1) the original version of the dataset (dotted lines) vs. (2) the cleaned version of the dataset using information from the learned INLP projection on TEMPL-PAS and TEMPL-NEG. The larger the difference between the original and cleaned versions, the more useful the transferred projection is for cleaning.
Error bars indicate standard deviation of 20 experiments.", "uris": null, "type_str": "figure", "num": null }, "FIGREF6": { "text": "Classification accuracies for subjects in TEMPL-NEG on (1) the original dataset vs. (2) the dataset cleaned using the learned INLP projection on verbs of TEMPL-NEG. The larger the difference between the original and cleaned versions, the more useful the transferred projection is for cleaning. Error bars indicate standard deviation of 20 runs.", "uris": null, "type_str": "figure", "num": null }, "FIGREF7": { "text": "depicts the changes in the similarities of individual words of TEMPL-NEG using the four distance measures discussed in Section 4.", "uris": null, "type_str": "figure", "num": null }, "FIGREF8": { "text": "Multidimensional scaling (MDS) visualization for three word instance sets in the TEMPL-NEG dataset: Verbs (Left), A-SUBJ/P-AG nouns (Middle), A-OBJ/P-SUBJ nouns (Right). The top part of the figure depicts their representations before cleaning, while the bottom part shows the same word representations after the cleaning procedure. Red and blue points indicate instances in the Active and Passive sentences, respectively.", "uris": null, "type_str": "figure", "num": null }, "FIGREF9": { "text": "Average Euclidean distance for instances of nouns and verbs in the TEMPL-NEG dataset. Dashed lines show distances in the original dataset, and solid lines reflect distances after applying INLP. Distances are given for representations generated by each layer of the models.", "uris": null, "type_str": "figure", "num": null } } } }