|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:21:04.631060Z" |
|
}, |
|
"title": "Predicting and Explaining French Grammatical Gender", |
|
"authors": [ |
|
{ |
|
"first": "Saumya", |
|
"middle": [ |
|
"Yashmohini" |
|
], |
|
"last": "Sahai", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Dravyansh", |
|
"middle": [], |
|
"last": "Sharma", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Grammatical gender may be determined by semantics, orthography, phonology, or could even be arbitrary. Identifying patterns in the factors that govern noun genders can be useful for language learners, and for understanding innate linguistic sources of gender bias. Traditional manual rule-based approaches may be substituted by more accurate and scalable but harder-to-interpret computational approaches for predicting gender from typological information. In this work, we propose interpretable gender classification models for French, which obtain the best of both worlds. We present high accuracy neural approaches which are augmented by a novel global surrogate based approach for explaining predictions. We introduce auxiliary attributes to provide tunable explanation complexity.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Grammatical gender may be determined by semantics, orthography, phonology, or could even be arbitrary. Identifying patterns in the factors that govern noun genders can be useful for language learners, and for understanding innate linguistic sources of gender bias. Traditional manual rule-based approaches may be substituted by more accurate and scalable but harder-to-interpret computational approaches for predicting gender from typological information. In this work, we propose interpretable gender classification models for French, which obtain the best of both worlds. We present high accuracy neural approaches which are augmented by a novel global surrogate based approach for explaining predictions. We introduce auxiliary attributes to provide tunable explanation complexity.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Grammatical gender is a categorization of nouns in certain languages which forms a basis for agreement with related words in sentences, and plays an important role in disambiguation and correct usage (Ibrahim, 2014) . An estimated third of the current world population are native speakers of gendered languages, and over one-sixth are L2 speakers. Having a gender assigned to nouns can potentially affect how the speakers think about the world (Samuel et al., 2019) . A systematic study of rules governing these assignments can point to the origin of and potentially help mitigate gender biases, and improve gender-based inclusivity (Sexton, 2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 200, |
|
"end": 215, |
|
"text": "(Ibrahim, 2014)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 444, |
|
"end": 465, |
|
"text": "(Samuel et al., 2019)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 633, |
|
"end": 647, |
|
"text": "(Sexton, 2020)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Grammatical gender (hereon referred to by gender) need not coincide with \"natural gender\", which can make language acquisition more challenging. For example, Irish cail\u00edn (meaning \"girl\") is assigned a masculine gender. Works investigating the role of gender in acquiring a new language * Equal contribution (Sabourin et al., 2006; Ellis et al., 2012) have found that the speakers of a language with grammatical gender have an advantage when acquiring a new gendered language. Automated generation of simple rules for assigning gender can be helpful for L2 learners, especially when L1 is genderless.", |
|
"cite_spans": [ |
|
{ |
|
"start": 308, |
|
"end": 331, |
|
"text": "(Sabourin et al., 2006;", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 332, |
|
"end": 351, |
|
"text": "Ellis et al., 2012)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Tools for understanding predictions of statistical models, for example variable importance analysis of Friedman (2001) , have been used even before the widespread use of black-box neural models. Recently the interest in such tools, reformulated as explainability in the neural context (Guidotti et al., 2018) , has surged, with a corresponding development of a suite of solutions (Bach et al., 2015; Sundararajan et al., 2017; Shrikumar et al., 2017; Lundberg and Lee, 2017) . These approaches typically explain the model prediction by attributing it to relevant bits in the input encoding. While faithful to the black box model's \"decision making\", the explanations obtained may not be readily intuited by human users. Surrogate models, which globally approximate the model predictions by a more interpretable model, or obtain prediction-specific explanations by perturbing the input in domainspecific ways, have been introduced to remedy this problem (Ribeiro et al., 2016; Molnar, 2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 103, |
|
"end": 118, |
|
"text": "Friedman (2001)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 285, |
|
"end": 308, |
|
"text": "(Guidotti et al., 2018)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 380, |
|
"end": 399, |
|
"text": "(Bach et al., 2015;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 400, |
|
"end": 426, |
|
"text": "Sundararajan et al., 2017;", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 427, |
|
"end": 450, |
|
"text": "Shrikumar et al., 2017;", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 451, |
|
"end": 474, |
|
"text": "Lundberg and Lee, 2017)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 953, |
|
"end": 975, |
|
"text": "(Ribeiro et al., 2016;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 976, |
|
"end": 989, |
|
"text": "Molnar, 2019)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We consider a novel surrogate approach to explainability, where we map the feature embedding learned by the black box models to an auxiliary space of explanations. We contend that the best way to arrive at a decision (prediction) may not necessarily be the best way to explain it. While prior work is largely limited to the input encodings, by designing a set of auxiliary attributes we can provide explanations at desired levels of complexity, which could (for example) be made to suit the language learner's ability in our motivating setting. Our techniques overcome issues in prior art in our setting and are completely language-independent, with potential for use in broader natural language processing and other deep learning explanations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "For illustration, we examine French in detail where the explanations require both meaning and form.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We consider the problem of obtaining rules for assigning grammatical gender, which has been extensively studied in the linguistic context (Brugmann, 1897; Konishi, 1993; Starreveld and La Heij, 2004; Nelson, 2005; Nastase and Popescu, 2009; Varlokosta, 2011) , but these studies are often limited to identifying semantic or morpho-phonological rules specific to languages and language families. In computational linguistics, prediction models have been discussed in contextual settings (Cucerzan and Yarowsky, 2003) and the role of semantics has been discussed (Williams et al., 2019) . Williams et al. (2020) use information-theoretic tools to quantify the strength of the relationships between declension class, grammatical gender, distributional semantics, and orthography for Czech and German nouns. Classification of gender using data mining approaches has been studied for Konkani (Desai, 2017) . In this work we look at explainable prediction using neural models.", |
|
"cite_spans": [ |
|
{ |
|
"start": 138, |
|
"end": 154, |
|
"text": "(Brugmann, 1897;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 155, |
|
"end": 169, |
|
"text": "Konishi, 1993;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 170, |
|
"end": 199, |
|
"text": "Starreveld and La Heij, 2004;", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 200, |
|
"end": 213, |
|
"text": "Nelson, 2005;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 214, |
|
"end": 240, |
|
"text": "Nastase and Popescu, 2009;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 241, |
|
"end": 258, |
|
"text": "Varlokosta, 2011)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 486, |
|
"end": 515, |
|
"text": "(Cucerzan and Yarowsky, 2003)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 561, |
|
"end": 584, |
|
"text": "(Williams et al., 2019)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 587, |
|
"end": 609, |
|
"text": "Williams et al. (2020)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 887, |
|
"end": 900, |
|
"text": "(Desai, 2017)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The noun gender can be predicted better by considering the word form (Nastase and Popescu, 2009) . Rule-based gender assignment in French has been extensively studied based on both morphonological endings (Lyster, 2006) and semantic patterns (Nelson, 2005) . These studies carefully form rules that govern the gender, argue merits and demerits that often involve factors beyond what rules concisely explain the patterns. Further they are organized as tedious lists of dozens of rules, and evaluated only manually on smaller corpora (less than 8% the size of our dataset). Cucerzan and Yarowsky (2003) show that it is possible to learn the gender by using a small set of annotated words, with their proposed algorithm combining both contextual and morphological models. The encoding of grammatical gender in contextual word embeddings has been explored for some languages in Veeman and Basirat (2020) . They find that adding more context to the contextualized word embeddings of a word is detrimental to the gender classifier's performance. Moreover these embeddings often learn gender from contextual agreement, like associated articles, which are not suitable for explanation (Lyster, 2006) . In contrast, here we will study the role of semantics in gender determination by learning an encoding of the lexical definition of the word, along with the role of form.", |
|
"cite_spans": [ |
|
{ |
|
"start": 69, |
|
"end": 96, |
|
"text": "(Nastase and Popescu, 2009)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 205, |
|
"end": 219, |
|
"text": "(Lyster, 2006)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 242, |
|
"end": 256, |
|
"text": "(Nelson, 2005)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 572, |
|
"end": 600, |
|
"text": "Cucerzan and Yarowsky (2003)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 874, |
|
"end": 899, |
|
"text": "Veeman and Basirat (2020)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 1177, |
|
"end": 1191, |
|
"text": "(Lyster, 2006)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In modern applications of machine learning, it is often desirable to augment the model predictions with faithful (accurately capturing the model) and interpretable (easily understood by humans) explanations of \"why\" an algorithm is making a certain prediction . This is typically formulated as an attribution problem, that is one of identifying properties of the input used in a given prediction, and has been studied in the context of deep neural feedforward and recurrent networks (Fong and Vedaldi, 2019; Arras et al., 2019) . The attributes are usually just input features (encoding) used in training. By studying how these features, or perturbations thereof, propagate through a network, one obtains faithful explanations which may not necessarily be easy to interpret. In this work, we consider explanations obtained using auxiliary attributes which are not used in training, but correspond to a simpler and more intuitive space of interpretations. We learn a mapping of feature embedding (learned by the black-box neural model) to this space, to approximate faithfulness, at the profit of better explanations. A similar local surrogate based approach is considered by (Ribeiro et al., 2016), but it involves domain-specific input perturbations (e.g. deleting words in text, or pixels in image inputs) for explanation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 483, |
|
"end": 507, |
|
"text": "(Fong and Vedaldi, 2019;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 508, |
|
"end": 527, |
|
"text": "Arras et al., 2019)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We extract French words, their definitions and phonetic representations from Dbnary (S\u00e9rasset, 2015) , a Wiktionary-based multilingual lexical database. The words are filtered so that only nouns tagged with a unique gender are retained (for example voile which has senses with both genders is removed). For words with multiple definitions but the same gender, we retain the one that appears first as the semantic feature. We retrieve 124803 words, which are split 90-10-10 into train, validation and test sets respectively. The class distribution of the resulting dataset is not skewed, with 58% masculine and 42% feminine words.", |
|
"cite_spans": [ |
|
{ |
|
"start": 84, |
|
"end": 100, |
|
"text": "(S\u00e9rasset, 2015)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": "3" |
|
}, |
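A minimal sketch of the filtering and splitting just described, assuming the Dbnary records have already been parsed into dictionaries with word, gender, and definition fields (the field names and the exact split fractions below are illustrative):

```python
import random
from collections import defaultdict

def filter_and_split(records, fractions=(0.8, 0.1, 0.1), seed=0):
    """records: dicts with hypothetical keys 'word', 'gender' ('m'/'f'),
    'definition'. Keeps only nouns tagged with a single gender, retains the
    first definition per word, and splits into train/validation/test."""
    by_word = defaultdict(list)
    for r in records:
        by_word[r["word"]].append(r)

    kept = []
    for word, entries in by_word.items():
        if len({e["gender"] for e in entries}) != 1:   # e.g. 'voile' is dropped
            continue
        kept.append(entries[0])                        # first definition only

    random.Random(seed).shuffle(kept)
    n_train = int(fractions[0] * len(kept))
    n_val = int(fractions[1] * len(kept))
    return (kept[:n_train],
            kept[n_train:n_train + n_val],
            kept[n_train + n_val:])
```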
|
{ |
|
"text": "Baselines. We consider two baselines. The majority baseline always predicts the masculine gender, while the textbook orthographic baseline is based on the following simple rules -predict masculine unless the word ends in -tion, -sion, -t\u00e9, -son, or -e, excepting -age, -me or -\u00e8ge endings.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Models", |
|
"sec_num": "4.1" |
|
}, |
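The textbook orthographic baseline can be written down directly from the rules above; a minimal sketch (the ending lists are exactly those stated, and everything else defaults to masculine):

```python
FEMININE_ENDINGS = ("tion", "sion", "té", "son", "e")
MASCULINE_EXCEPTIONS = ("age", "me", "ège")

def textbook_baseline(word: str) -> str:
    """Predict 'f' for the feminine endings above, unless the word also
    carries one of the masculine exception endings; otherwise predict 'm'."""
    w = word.lower()
    if w.endswith(MASCULINE_EXCEPTIONS):
        return "m"
    if w.endswith(FEMININE_ENDINGS):
        return "f"
    return "m"

# e.g. textbook_baseline("nation") == "f", textbook_baseline("fromage") == "m"
```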
|
{ |
|
"text": "Semantic models (SEM). The definition of words is used to generate its semantic representation. These are tokenized on whitespace, and are then passed through a trainable embedding layer. These representations are passed through 2 layer bidirectional LSTM of size 25 each, with additive attention. The hidden representation is passed through fully connected layers, of sizes 1500, 1000 and 1. The last layer output is used to calculate cross entropy loss. The representations generated by the penultimate layer (size 1000) is the LSTM semantic embedding.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Models", |
|
"sec_num": "4.1" |
|
}, |
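A sketch of this architecture in PyTorch, following the layer sizes stated above; the vocabulary size, token embedding dimension, and the simple learned attention pooling (standing in for the additive attention) are assumptions, since they are not specified in the text:

```python
import torch
import torch.nn as nn

class SemanticGenderModel(nn.Module):
    def __init__(self, vocab_size=30000, embed_dim=100, hidden=25):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden, num_layers=2,
                            bidirectional=True, batch_first=True)
        self.attn = nn.Linear(2 * hidden, 1)      # attention score per token
        self.fc1 = nn.Linear(2 * hidden, 1500)
        self.fc2 = nn.Linear(1500, 1000)          # penultimate layer = embedding
        self.out = nn.Linear(1000, 1)             # single logit for the gender

    def forward(self, token_ids):
        h, _ = self.lstm(self.embed(token_ids))        # (B, T, 2*hidden)
        weights = torch.softmax(self.attn(h), dim=1)   # pool over tokens
        context = (weights * h).sum(dim=1)
        emb = torch.relu(self.fc2(torch.relu(self.fc1(context))))
        return self.out(emb), emb                      # logit, semantic embedding

# Training would apply nn.BCEWithLogitsLoss() to the logit against 0/1 labels.
```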
|
{ |
|
"text": "XLM-R semantic embedding is also generated for the defintion using XLM-R (Conneau et al., 2020) . The [CLS] token is fine-tuned to predict the gender. The sequence of hidden states at the last layer represents the embedding.", |
|
"cite_spans": [ |
|
{ |
|
"start": 73, |
|
"end": 95, |
|
"text": "(Conneau et al., 2020)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Models", |
|
"sec_num": "4.1" |
|
}, |
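A sketch of this fine-tuning setup using the Hugging Face transformers library; the checkpoint name and truncation settings are assumptions (in XLM-R the <s> token plays the role of [CLS]), and the fine-tuning loop itself is omitted:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=2)        # masculine vs. feminine

def gender_logits(definition: str) -> torch.Tensor:
    """Gender logits for a French definition (after fine-tuning)."""
    inputs = tokenizer(definition, return_tensors="pt", truncation=True)
    return model(**inputs).logits

def definition_embedding(definition: str) -> torch.Tensor:
    """Sequence of last-layer hidden states, used as the XLM-R embedding."""
    inputs = tokenizer(definition, return_tensors="pt", truncation=True)
    outputs = model(**inputs, output_hidden_states=True)
    return outputs.hidden_states[-1].squeeze(0)   # (seq_len, hidden_size)
```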
|
{ |
|
"text": "To represent the phonology of a word, we use n-grams features, which are constructed by taking last n characters of the syllabized phoneme sequence (derived from Wiktionary IPA transcriptions) where n is in {1, 2, . . . , k} for an empirically set k. A logistic classifier is trained using these features to predict the gender.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Phonological model (PHON).", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Orthographic model (ORTH). To encode the orthography of a word, we use two models. As with phonology, we consider n-grams features, which are constructed here by taking last n characters of the word spelling where n is in {1, 2, . . . , k} for an empirically set k. A logistic classifier to predict the gender is trained using these features.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Phonological model (PHON).", |
|
"sec_num": null |
|
}, |
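Both the PHON and ORTH n-gram models follow the same recipe: suffix n-grams for n = 1..k as sparse indicator features, fed to a logistic classifier. A minimal sketch for the orthographic case (k = 4 is illustrative; the phonological case only swaps the spelling for the syllabized phoneme string):

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def suffix_features(form: str, k: int = 4) -> dict:
    """Last-n character suffixes for n = 1..k."""
    return {f"suffix_{n}": form[-n:] for n in range(1, min(k, len(form)) + 1)}

words = ["maison", "voiture", "fromage", "nation"]
genders = ["f", "f", "m", "f"]

clf = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
clf.fit([suffix_features(w) for w in words], genders)
print(clf.predict([suffix_features("portion")]))
```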
|
{ |
|
"text": "To generate dense representations for these features, the words are tokenized at character level. The tokens are passed through a 32 unit LSTM and then 2 fully connected layers of sizes 30 and 1. The output from the last layer is used to calculate cross entropy loss by comparing with the true gender labels. Once trained, the representation of penultimate layer (of size 30) is used as the orthographic embedding.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Phonological model (PHON).", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Combined models. A logistic classifier is trained on the concatenated orthographic and semantic features embeddings to discriminate between genders. This is done for both types of semantic embeddings, from LSTM and XLM-R models. We also add phonemic n-gram sequences (n is a hyperparameter set to a jointly optimal value here) as an additional model. All models and their test and validation accuracies are summarized in Table 1 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 421, |
|
"end": 428, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Phonological model (PHON).", |
|
"sec_num": null |
|
}, |
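A sketch of the combination step, assuming the per-word embeddings from the previous models are available as NumPy arrays (shapes follow the layer sizes stated above; phonemic n-gram columns could be appended in the same way):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_combined(orth_emb, sem_emb, labels):
    """orth_emb: (N, 30) char-LSTM embeddings; sem_emb: (N, d) semantic
    embeddings (e.g. the size-1000 LSTM embedding); labels: (N,) 0/1 genders."""
    features = np.concatenate([orth_emb, sem_emb], axis=1)
    return LogisticRegression(max_iter=2000).fit(features, labels)
```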
|
{ |
|
"text": "For each word, we calculate a set of easy-tointerpret auxiliary features, with semantic or orthographic connotations. Orthographic features are the top 1000 n-grams in a logistic regression fit. For semantic features, we calculate the scores of the meanings of the words by using word vectors implemented in SEANCE (Crossley et al., 2017) . The assignment of words to psychologically meaningful space can lead to increased interpretability. SEANCE package reports many lexical categories for words based on pre-existing sentiment and cognition dictionaries and has been shown by Crossley et al. 2017to outperform LIWC (Tausczik and Pennebaker, 2010). As SEANCE is only available for the English language, we use translation 1 of the French definitions to English.", |
|
"cite_spans": [ |
|
{ |
|
"start": 308, |
|
"end": 338, |
|
"text": "SEANCE (Crossley et al., 2017)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Explainability", |
|
"sec_num": "4.2" |
|
}, |
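The SEANCE scores come directly from that package; for the orthographic side, one plausible reading of "top 1000 n-grams in a logistic regression fit" is selection by absolute coefficient magnitude, sketched below under that assumption:

```python
import numpy as np
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

def top_k_suffix_ngrams(words, genders, k=1000, n_max=4):
    """Fit a logistic regression on suffix n-gram indicators (as in ORTH)
    and keep the k features with the largest absolute coefficients."""
    feats = [{f"suffix_{n}": w[-n:] for n in range(1, min(n_max, len(w)) + 1)}
             for w in words]
    vec = DictVectorizer()
    X = vec.fit_transform(feats)
    clf = LogisticRegression(max_iter=2000).fit(X, genders)
    names = vec.get_feature_names_out()
    order = np.argsort(-np.abs(clf.coef_[0]))[:k]
    return [names[i] for i in order]
```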
|
{ |
|
"text": "Global explanations. The global explanations are evaluated for i) masculine and feminine class predictions and for ii) classes generated by clustering the best performing combined model embeddings ( Table 1) . The embeddings are clustered using BIRCH (Zhang et al., 1996) into 10 clusters. The number of clusters are chosen to minimize the overall misclassification rate (calculated by assigning the majority predicted class to a cluster). Decision tree classifiers are fit using the interpretable features 2 of about 25k samples (including those for which an explanation is to be generated) to predict the black box model's gender prediction and the cluster of a word.", |
|
"cite_spans": [ |
|
{ |
|
"start": 251, |
|
"end": 271, |
|
"text": "(Zhang et al., 1996)", |
|
"ref_id": "BIBREF33" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 199, |
|
"end": 207, |
|
"text": "Table 1)", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Explainability", |
|
"sec_num": "4.2" |
|
}, |
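A sketch of this global surrogate construction with scikit-learn: BIRCH clustering of the combined-model embeddings, then decision trees over the auxiliary features as surrogates for the black-box gender prediction and for the cluster assignment. The 500-leaf cap follows the setting mentioned in Appendix B; tree size is the interpretability knob varied in Figures 1 and 2.

```python
from sklearn.cluster import Birch
from sklearn.tree import DecisionTreeClassifier

def global_surrogate(embeddings, aux_features, blackbox_preds, n_clusters=10):
    """embeddings: (N, d) combined-model embeddings; aux_features: (N, m)
    interpretable auxiliary features; blackbox_preds: (N,) predicted genders."""
    clusters = Birch(n_clusters=n_clusters).fit_predict(embeddings)

    gender_tree = DecisionTreeClassifier(max_leaf_nodes=500)
    gender_tree.fit(aux_features, blackbox_preds)   # explains the gender output

    cluster_tree = DecisionTreeClassifier(max_leaf_nodes=500)
    cluster_tree.fit(aux_features, clusters)        # explains cluster membership
    return clusters, gender_tree, cluster_tree
```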
|
{ |
|
"text": "Local explanations. We extend the LIME approach of (Ribeiro et al., 2016) to our setting. A local decision tree classifier is trained on the k nearest neighbors of a given test point, to approximate the black box model on the neighborhood.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Explainability", |
|
"sec_num": "4.2" |
|
}, |
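A sketch of the local surrogate, assuming neighbors are retrieved in the embedding space (the text does not specify the space) and that the neighborhood size k and the tree size are the fidelity/interpretability knobs of Figure 3:

```python
from sklearn.neighbors import NearestNeighbors
from sklearn.tree import DecisionTreeClassifier

def local_surrogate(test_idx, embeddings, aux_features, blackbox_preds,
                    k=200, max_depth=3):
    """Small decision tree fit on the k nearest neighbors of one test point."""
    nn = NearestNeighbors(n_neighbors=k).fit(embeddings)
    _, idx = nn.kneighbors(embeddings[test_idx:test_idx + 1])
    neighborhood = idx[0]
    tree = DecisionTreeClassifier(max_depth=max_depth)
    tree.fit(aux_features[neighborhood], blackbox_preds[neighborhood])
    return tree
```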
|
{ |
|
"text": "The size of the decision tree is a hyperparameter which may be reduced to improve interpretability (i.e. smaller, more easily understood explanations) at the cost of model faithfulness (Figure 3) .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 185, |
|
"end": 195, |
|
"text": "(Figure 3)", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Explainability", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "The best orthographic model achieves an accuracy of 92.5%, whereas the semantic model alone achieves only 77.23%. Combining the features from the two models leads to a gain in the accuracy of the classifier, to 94.01%. We can conclude that for French, the gender can be predicted robustly by the word orthography, but adding semantic information can further improve prediction. Adding phonology to the mix does not seem to help much. This may be attributed to the fact that phonological forms contain less information than the orthographical forms in French, e.g. lit /li/ (bed, m.) and lie /li/ (dregs, f.). Not only are the written forms phonetic here (i.e. pronunciation is typically unambiguous given spelling) but they often contain additional (e.g. etymological) information which may be missing in the spoken forms. A more detailed error analysis and comparison of model pairs is presented in Appendix A. We define a 'good explanation' to be one with high model fidelity (measured by F1) and if it involves fewer rules (more easily interpretable). This can be quantified in the case of decision trees as the length of path from root to leaf node, when making a prediction. A class with higher average decision tree path length for its sample is less interpretable.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results and discussion", |
|
"sec_num": "5" |
|
}, |
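Under this reading, the interpretability measure (the average root-to-leaf path length, i.e. the number of rules applied per prediction) can be computed directly from a fitted scikit-learn tree; a minimal sketch:

```python
import numpy as np

def average_path_length(tree, X):
    """Mean number of decision rules applied per sample by a fitted
    DecisionTreeClassifier: nodes on the root-to-leaf path minus the leaf."""
    node_indicator = tree.decision_path(X)              # (n_samples, n_nodes)
    path_lengths = np.asarray(node_indicator.sum(axis=1)).ravel() - 1
    return path_lengths.mean()
```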
|
{ |
|
"text": "We observe the trade-off between achieving interpretability and model accuracy for masculine and feminine classes ( Figure 1 ) and for clusters generated via embeddings (Figure 2 ). The clusters are generated so that within a gender class, a distinction could be made for nouns that could have different rules, so that easier explanations per class could be generated. Both Figures 1 and 2 show that increasing size of the tree, always increases F1 score, but that comes at the cost of interpretability due to higher number of decision rules. Some ex- ample features that distinguish the different clusters are noted in Appendix B. We see in Figure 1 that the explainability is higher for feminine nouns than masculine. This is consistent with the fact that there are many rules to indicate the feminine gender (such as words ending in -ine, -elle, -esse), whereas masculine gender is a default category leading to more complex, and harder to explain rules. For the clusters, the misclassification rate for validation and testing set are 4.07% and 4.11% respectively, indicating that clusters mostly have one kind of gender. Figure 2 shows that some clusters (such as #2, #6, #7) are more explainable than the others (such as #1, #4), as latter show a poor F1 performance and low interpretability. Cluster #1 is majority feminine and #4 is majority masculine, indicating existence of exceptions in either gender. Identifying these clusters in the feature embedding can help in figuring out cases where the grammatical gender is assigned for formal reasons, in exception to semantic or morphonological rules. Moreover, these may be useful in designing a sys-tem with human-in-the-loop curation, for example by identifying relevant new auxiliary attributes. The local explanations seem to outperform global ones, and the performance improves as we reduce the size of the local neighborhood considered. However, we note that this comes at some cost to consistency of explanations. For example, two local explanations for test points distant in the feature embedding may contain some contradictory rules. This is usually not an issue in typical applications of LIME which simply highlight part of the input as an explanation to provide some model justification. However, inconsistent rules can be of consequence in some applications considered here, for instance language learning where these contradictions are undesirable. Also, while per example explanations are larger on average for the global approach, we have the same rule for entire clusters, giving fewer rules overall.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 116, |
|
"end": 124, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 169, |
|
"end": 178, |
|
"text": "(Figure 2", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 374, |
|
"end": 389, |
|
"text": "Figures 1 and 2", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 642, |
|
"end": 650, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 1125, |
|
"end": 1133, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results and discussion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Orthography predicts the grammatical gender in French with high accuracy, and adding semantic features can improve this prediction. The blackbox embedding can be explained by simpler decision tree models over a given auxiliary explanation space, both locally and globally. Global explanations lead to fewer rules across examples but are more complex on individual instances. Explainable gender prediction can be useful to language learners and gender bias researchers. A cross-linguistic extension of our study is deferred to future work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "azure.microsoft.com/en-us/services/cognitiveservices/translator/. The authors manually verified the accuracy of translations, the word error rate was less than 2% on a sample of 250 words.2 Not to be confused with 'interpretable' and 'uninterpretable' features from formal linguistics(Svenonius, 2006).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Feature descriptions may be found at the following link: https://drive.google.com/file/d/ 1SUfSYNyuaWT2i4tQkiyr2rxVeqnh3cQe/view", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We thank the reviewers for their useful feedback.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We examine in detail the errors of all our models. Some salient observations are noted below. The errors of our baselines indicate their insufficiency but are easier to understand in isolation. For our models, it is perhaps best to look at interesting pairs of models and compare their errors.ORTH+SEM vs. ORTH: Adding phonology did not seem to help much in predicting gender beyond orthography itself. Even though phonology alone (PHON) is more accurate than the best semantics (SEM) model in predicting gender (81% vs. 77%), semantics provide more useful additions over what orthography already encodes. For example, poix (meaning \"pitch\" or \"tar\"), polio (\"polio\") and ardeur (\"ardor\") are recognized as feminine with help from semantics (ORTH+SEM) but are classified incorrectly by the ORTH model. Similarly the meaning helps identify that brais (\"crushed barley\"), polyane (\"plastic film\") and jurisconsulte (\"law expert\") should be classified as masculine. ORTH vs. PHON: Some examples which are correctly classified by the ORTH model but misclassified by the PHON model include meringue (\"meringue\", f.), boulaie (\"birch grove\", f.), coccyx (\"coccyx\", m.) and explicit (\"end of a chapter or book\", m.).ORTH+SEM: Finally we look at errors of our best model (we consider ORTH+SEM as better than ORTH+SEM+PHON as it gets the same accuracy with fewer features). The list seems to include relatively rarer words, where it often seems hard to explain the gender assignment. Some examples are -myrsite (\"Old medical wine\", m.), fomite (\"inanimate disease vector\", m.) cholestrophane (\"a chemical derived from caffeine\", f.), interpolateur (\"interpolator\", f.).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Error analysis", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For the 10 clusters described for global explainability in section 4.2, we show the top-10 important features in Table 2 . These features are generated by training a decision tree classifier that could have at most 500 leaf nodes. The importance of a feature in each cluster was defined by the number of times it appeared on the decision path of the samples. The features are a mix of orthographic features (generated from word endings) and semantic features (generated from SEANCE) 3 . We emphasize that the features noted here are determined as the most common features for examples in the cluster, and are therefore more likely to appear in explanations of examples from that cluster -the exact explanation for an example is determined by the appropriate decision tree path. The Table 2 also shows the error rates per clusters, which are fraction of misclassified labels per cluster with respect of predictions from the combined black-box model.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 113, |
|
"end": 120, |
|
"text": "Table 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 782, |
|
"end": 789, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "B Auxiliary features for global explanations", |
|
"sec_num": null |
|
} |
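A sketch of this importance notion (how often each auxiliary feature occurs on the decision paths of a cluster's samples), computed from a fitted scikit-learn tree; here X would be the auxiliary features of one cluster's samples:

```python
import numpy as np
from collections import Counter

def path_feature_counts(tree, X, feature_names, top=10):
    """Count feature occurrences on the decision paths of the samples in X."""
    node_indicator = tree.decision_path(X)   # sparse (n_samples, n_nodes)
    feature_of_node = tree.tree_.feature      # -2 marks leaf nodes
    counts = Counter()
    for i in range(X.shape[0]):
        node_ids = node_indicator.indices[
            node_indicator.indptr[i]:node_indicator.indptr[i + 1]]
        for node in node_ids:
            if feature_of_node[node] >= 0:    # skip leaves
                counts[feature_names[feature_of_node[node]]] += 1
    return counts.most_common(top)
```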
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Explaining and interpreting lstms", |
|
"authors": [ |
|
{ |
|
"first": "Leila", |
|
"middle": [], |
|
"last": "Arras", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jos\u00e9", |
|
"middle": [], |
|
"last": "Arjona-Medina", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Widrich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gr\u00e9goire", |
|
"middle": [], |
|
"last": "Montavon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Gillhofer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Klaus-Robert", |
|
"middle": [], |
|
"last": "M\u00fcller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sepp", |
|
"middle": [], |
|
"last": "Hochreiter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wojciech", |
|
"middle": [], |
|
"last": "Samek", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Explainable ai: Interpreting, explaining and visualizing deep learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "211--238", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Leila Arras, Jos\u00e9 Arjona-Medina, Michael Widrich, Gr\u00e9goire Montavon, Michael Gillhofer, Klaus- Robert M\u00fcller, Sepp Hochreiter, and Wojciech Samek. 2019. Explaining and interpreting lstms. In Explainable ai: Interpreting, explaining and visual- izing deep learning, pages 211-238. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation", |
|
"authors": [ |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Bach", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Binder", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gr\u00e9goire", |
|
"middle": [], |
|
"last": "Montavon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Frederick", |
|
"middle": [], |
|
"last": "Klauschen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Klaus-Robert", |
|
"middle": [], |
|
"last": "M\u00fcller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wojciech", |
|
"middle": [], |
|
"last": "Samek", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "PloS one", |
|
"volume": "10", |
|
"issue": "7", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sebastian Bach, Alexander Binder, Gr\u00e9goire Mon- tavon, Frederick Klauschen, Klaus-Robert M\u00fcller, and Wojciech Samek. 2015. On pixel-wise explana- tions for non-linear classifier decisions by layer-wise relevance propagation. PloS one, 10(7):e0130140.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "1897. The nature and origin of the noun genders in the Indo-European languages: A lecture delivered on the occasion of the sesquicentennial celebration of Princeton University. C. Scribner's sons", |
|
"authors": [ |
|
{ |
|
"first": "Karl", |
|
"middle": [], |
|
"last": "Brugmann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Karl Brugmann. 1897. The nature and origin of the noun genders in the Indo-European languages: A lecture delivered on the occasion of the sesquicenten- nial celebration of Princeton University. C. Scrib- ner's sons.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Unsupervised cross-lingual representation learning at scale", |
|
"authors": [ |
|
{ |
|
"first": "Alexis", |
|
"middle": [], |
|
"last": "Conneau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kartikay", |
|
"middle": [], |
|
"last": "Khandelwal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Naman", |
|
"middle": [], |
|
"last": "Goyal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vishrav", |
|
"middle": [], |
|
"last": "Chaudhary", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Wenzek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Francisco", |
|
"middle": [], |
|
"last": "Guzm\u00e1n", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edouard", |
|
"middle": [], |
|
"last": "Grave", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Myle", |
|
"middle": [], |
|
"last": "Ott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Veselin", |
|
"middle": [], |
|
"last": "Stoyanov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "2020", |
|
"issue": "", |
|
"pages": "8440--8451", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the As- sociation for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 8440-8451. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Sentiment analysis and social cognition engine (seance): An automatic tool for sentiment, social cognition, and social-order analysis", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Scott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristopher", |
|
"middle": [], |
|
"last": "Crossley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Danielle", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Kyle", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Mc-Namara", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Behavior research methods", |
|
"volume": "49", |
|
"issue": "3", |
|
"pages": "803--821", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Scott A Crossley, Kristopher Kyle, and Danielle S Mc- Namara. 2017. Sentiment analysis and social cog- nition engine (seance): An automatic tool for sen- timent, social cognition, and social-order analysis. Behavior research methods, 49(3):803-821.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Minimally supervised induction of grammatical gender", |
|
"authors": [ |
|
{ |
|
"first": "Silviu", |
|
"middle": [], |
|
"last": "Cucerzan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Yarowsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the 2003 Human Language Technology Conference of the North American Chapter", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Silviu Cucerzan and David Yarowsky. 2003. Mini- mally supervised induction of grammatical gender. In Proceedings of the 2003 Human Language Tech- nology Conference of the North American Chapter of the Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Data mining techniques for konkani grammatical gender identification. Fr. Agnel College of Arts & Commerce Re-accredited by NAAC with \"A\" Grade Pilar-Goa", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Ms Shilpa Desai", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ms Shilpa Desai. 2017. Data mining techniques for konkani grammatical gender identification. Fr. Ag- nel College of Arts & Commerce Re-accredited by NAAC with \"A\" Grade Pilar-Goa, page 38.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "The acquisition of grammatical gender in l2 german by learners with afrikaans, english or italian as their l1", |
|
"authors": [ |
|
{ |
|
"first": "Carla", |
|
"middle": [], |
|
"last": "Ellis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Simone", |
|
"middle": [], |
|
"last": "Conradie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kate", |
|
"middle": [], |
|
"last": "Huddlestone", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Stellenbosch Papers in Linguistics", |
|
"volume": "41", |
|
"issue": "", |
|
"pages": "17--27", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Carla Ellis, Simone Conradie, and Kate Huddlestone. 2012. The acquisition of grammatical gender in l2 german by learners with afrikaans, english or ital- ian as their l1. Stellenbosch Papers in Linguistics, 41:17-27.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Explanations for attributing deep neural network predictions", |
|
"authors": [ |
|
{ |
|
"first": "Ruth", |
|
"middle": [], |
|
"last": "Fong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrea", |
|
"middle": [], |
|
"last": "Vedaldi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Explainable AI: Interpreting, Explaining and Visualizing Deep Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "149--167", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ruth Fong and Andrea Vedaldi. 2019. Explanations for attributing deep neural network predictions. In Explainable AI: Interpreting, Explaining and Visual- izing Deep Learning, pages 149-167. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Greedy function approximation: a gradient boosting machine", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Jerome", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Friedman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Annals of statistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1189--1232", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jerome H Friedman. 2001. Greedy function approx- imation: a gradient boosting machine. Annals of statistics, pages 1189-1232.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "A survey of methods for explaining black box models", |
|
"authors": [ |
|
{ |
|
"first": "Riccardo", |
|
"middle": [], |
|
"last": "Guidotti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anna", |
|
"middle": [], |
|
"last": "Monreale", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Salvatore", |
|
"middle": [], |
|
"last": "Ruggieri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Franco", |
|
"middle": [], |
|
"last": "Turini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fosca", |
|
"middle": [], |
|
"last": "Giannotti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dino", |
|
"middle": [], |
|
"last": "Pedreschi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "ACM computing surveys (CSUR)", |
|
"volume": "51", |
|
"issue": "5", |
|
"pages": "1--42", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Franco Turini, Fosca Giannotti, and Dino Pedreschi. 2018. A survey of methods for explaining black box models. ACM computing surveys (CSUR), 51(5):1- 42.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Grammatical gender: its origin and development", |
|
"authors": [ |
|
{ |
|
"first": "Muhammad", |
|
"middle": [], |
|
"last": "Hasan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ibrahim", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "166", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Muhammad Hasan Ibrahim. 2014. Grammatical gen- der: its origin and development, volume 166. Wal- ter de Gruyter.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "The semantics of grammatical gender: A cross-cultural study", |
|
"authors": [ |
|
{ |
|
"first": "Toshi", |
|
"middle": [], |
|
"last": "Konishi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Journal of psycholinguistic research", |
|
"volume": "22", |
|
"issue": "5", |
|
"pages": "519--534", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Toshi Konishi. 1993. The semantics of grammatical gender: A cross-cultural study. Journal of psycholin- guistic research, 22(5):519-534.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "A unified approach to interpreting model predictions", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Scott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Su-In", |
|
"middle": [], |
|
"last": "Lundberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "30", |
|
"issue": "", |
|
"pages": "4765--4774", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Scott M Lundberg and Su-In Lee. 2017. A unified ap- proach to interpreting model predictions. Advances in Neural Information Processing Systems, 30:4765- 4774.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Predictability in french gender attribution: A corpus analysis", |
|
"authors": [ |
|
{ |
|
"first": "Roy", |
|
"middle": [], |
|
"last": "Lyster", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Journal of French Language Studies", |
|
"volume": "16", |
|
"issue": "1", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Roy Lyster. 2006. Predictability in french gender attri- bution: A corpus analysis. Journal of French Lan- guage Studies, 16(1):69.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Interpretable machine learning", |
|
"authors": [ |
|
{ |
|
"first": "Christoph", |
|
"middle": [], |
|
"last": "Molnar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christoph Molnar. 2019. Interpretable machine learn- ing.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "What's in a name? In some languages, grammatical gender", |
|
"authors": [ |
|
{ |
|
"first": "Vivi", |
|
"middle": [], |
|
"last": "Nastase", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marius", |
|
"middle": [], |
|
"last": "Popescu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1368--1377", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vivi Nastase and Marius Popescu. 2009. What's in a name? In some languages, grammatical gender. In Proceedings of the 2009 Conference on Empiri- cal Methods in Natural Language Processing, pages 1368-1377.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "French gender assignment revisited", |
|
"authors": [ |
|
{ |
|
"first": "Don", |
|
"middle": [], |
|
"last": "Nelson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "", |
|
"volume": "56", |
|
"issue": "", |
|
"pages": "19--38", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Don Nelson. 2005. French gender assignment revisited. Word, 56(1):19-38.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "why should i trust you?\" explaining the predictions of any classifier", |
|
"authors": [ |
|
{ |
|
"first": "Sameer", |
|
"middle": [], |
|
"last": "Marco Tulio Ribeiro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carlos", |
|
"middle": [], |
|
"last": "Singh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Guestrin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1135--1144", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. \" why should i trust you?\" explain- ing the predictions of any classifier. In Proceed- ings of the 22nd ACM SIGKDD international con- ference on knowledge discovery and data mining, pages 1135-1144.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Transfer effects in learning a second language grammatical gender system", |
|
"authors": [ |
|
{ |
|
"first": "Laura", |
|
"middle": [], |
|
"last": "Sabourin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Laurie", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Stowe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ger J De", |
|
"middle": [], |
|
"last": "Haan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Second Language Research", |
|
"volume": "22", |
|
"issue": "", |
|
"pages": "1--29", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Laura Sabourin, Laurie A Stowe, and Ger J De Haan. 2006. Transfer effects in learning a second language grammatical gender system. Second Language Re- search, 22(1):1-29.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Explainable AI: interpreting, explaining and visualizing deep learning", |
|
"authors": [ |
|
{ |
|
"first": "Wojciech", |
|
"middle": [], |
|
"last": "Samek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gr\u00e9goire", |
|
"middle": [], |
|
"last": "Montavon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrea", |
|
"middle": [], |
|
"last": "Vedaldi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lars", |
|
"middle": [ |
|
"Kai" |
|
], |
|
"last": "Hansen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Klaus-Robert", |
|
"middle": [], |
|
"last": "M\u00fcller", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "11700", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wojciech Samek, Gr\u00e9goire Montavon, Andrea Vedaldi, Lars Kai Hansen, and Klaus-Robert M\u00fcller. 2019. Explainable AI: interpreting, explaining and visual- izing deep learning, volume 11700. Springer Na- ture.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Grammatical gender and linguistic relativity: A systematic review", |
|
"authors": [ |
|
{ |
|
"first": "Steven", |
|
"middle": [], |
|
"last": "Samuel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Geoff", |
|
"middle": [], |
|
"last": "Cole", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Madeline", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Eacott", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Psychonomic bulletin & review", |
|
"volume": "26", |
|
"issue": "6", |
|
"pages": "1767--1786", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Steven Samuel, Geoff Cole, and Madeline J Eacott. 2019. Grammatical gender and linguistic relativ- ity: A systematic review. Psychonomic bulletin & review, 26(6):1767-1786.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Dbnary: Wiktionary as a lemonbased multilingual lexical resource in rdf", |
|
"authors": [ |
|
{ |
|
"first": "Gilles", |
|
"middle": [], |
|
"last": "S\u00e9rasset", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Semantic Web", |
|
"volume": "6", |
|
"issue": "4", |
|
"pages": "355--361", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gilles S\u00e9rasset. 2015. Dbnary: Wiktionary as a lemon- based multilingual lexical resource in rdf. Semantic Web, 6(4):355-361.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Cross linguistic analysis of grammatical gender: Implications for critical language pedagogy. Thesis, Linguistics and Education departments of the", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Samantha", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Sexton", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Samantha R. Sexton. 2020. Cross linguistic analysis of grammatical gender: Implications for critical lan- guage pedagogy. Thesis, Linguistics and Educa- tion departments of the University of Massachusetts Amherst.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Learning important features through propagating activation differences", |
|
"authors": [ |
|
{ |
|
"first": "Avanti", |
|
"middle": [], |
|
"last": "Shrikumar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peyton", |
|
"middle": [], |
|
"last": "Greenside", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anshul", |
|
"middle": [], |
|
"last": "Kundaje", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "International Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3145--3153", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Avanti Shrikumar, Peyton Greenside, and Anshul Kun- daje. 2017. Learning important features through propagating activation differences. In International Conference on Machine Learning, pages 3145-3153. PMLR.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Phonological facilitation of grammatical gender retrieval. Language and Cognitive Processes", |
|
"authors": [ |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Starreveld", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wido La", |
|
"middle": [], |
|
"last": "Heij", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "", |
|
"volume": "19", |
|
"issue": "", |
|
"pages": "677--711", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peter Starreveld and Wido La Heij. 2004. Phonologi- cal facilitation of grammatical gender retrieval. Lan- guage and Cognitive Processes, 19(6):677-711.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Axiomatic attribution for deep networks", |
|
"authors": [ |
|
{ |
|
"first": "Mukund", |
|
"middle": [], |
|
"last": "Sundararajan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ankur", |
|
"middle": [], |
|
"last": "Taly", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qiqi", |
|
"middle": [], |
|
"last": "Yan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "International Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3319--3328", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mukund Sundararajan, Ankur Taly, and Qiqi Yan. 2017. Axiomatic attribution for deep networks. In Inter- national Conference on Machine Learning, pages 3319-3328. PMLR.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Interpreting uninterpretable features", |
|
"authors": [ |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Svenonius", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Linguistic Analysis", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peter Svenonius. 2006. Interpreting uninterpretable features. Linguistic Analysis.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "The psychological meaning of words: Liwc and computerized text analysis methods", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Yla", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Tausczik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Pennebaker", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Journal of language and social psychology", |
|
"volume": "29", |
|
"issue": "1", |
|
"pages": "24--54", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yla R Tausczik and James W Pennebaker. 2010. The psychological meaning of words: Liwc and comput- erized text analysis methods. Journal of language and social psychology, 29(1):24-54.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "The role of morphology in grammatical gender assignment. Morphology and its interfaces", |
|
"authors": [ |
|
{ |
|
"first": "Spyridoula", |
|
"middle": [], |
|
"last": "Varlokosta", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Spyridoula Varlokosta. 2011. The role of morphology in grammatical gender assignment. Morphology and its interfaces, 178.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "An exploration of the encoding of grammatical gender in word embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Hartger", |
|
"middle": [], |
|
"last": "Veeman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ali", |
|
"middle": [], |
|
"last": "Basirat", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2008.01946" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hartger Veeman and Ali Basirat. 2020. An exploration of the encoding of grammatical gender in word em- beddings. arXiv preprint arXiv:2008.01946.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Quantifying the semantic core of gender systems", |
|
"authors": [ |
|
{ |
|
"first": "Adina", |
|
"middle": [], |
|
"last": "Williams", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Damian", |
|
"middle": [], |
|
"last": "Blasi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lawrence", |
|
"middle": [], |
|
"last": "Wolf-Sonkin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hanna", |
|
"middle": [], |
|
"last": "Wallach", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Cotterell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5738--5743", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Adina Williams, Damian Blasi, Lawrence Wolf- Sonkin, Hanna Wallach, and Ryan Cotterell. 2019. Quantifying the semantic core of gender systems. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 5738- 5743.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Predicting declension class from form and meaning", |
|
"authors": [ |
|
{ |
|
"first": "Adina", |
|
"middle": [], |
|
"last": "Williams", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tiago", |
|
"middle": [], |
|
"last": "Pimentel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hagen", |
|
"middle": [], |
|
"last": "Blix", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Arya", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eleanor", |
|
"middle": [], |
|
"last": "Mccarthy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Chodroff", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Cotterell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "6682--6695", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Adina Williams, Tiago Pimentel, Hagen Blix, Arya D McCarthy, Eleanor Chodroff, and Ryan Cotterell. 2020. Predicting declension class from form and meaning. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 6682-6695.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Birch: an efficient data clustering method for very large databases", |
|
"authors": [ |
|
{ |
|
"first": "Tian", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raghu", |
|
"middle": [], |
|
"last": "Ramakrishnan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Miron", |
|
"middle": [], |
|
"last": "Livny", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "ACM sigmod record", |
|
"volume": "25", |
|
"issue": "2", |
|
"pages": "103--114", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tian Zhang, Raghu Ramakrishnan, and Miron Livny. 1996. Birch: an efficient data clustering method for very large databases. ACM sigmod record, 25(2):103-114.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"num": null, |
|
"text": "Class-specific/overall explainability (interpretability vs. fidelity) trade-off.", |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"uris": null, |
|
"num": null, |
|
"text": "Cluster-specific explainability trade-off.", |
|
"type_str": "figure" |
|
}, |
|
"FIGREF2": { |
|
"uris": null, |
|
"num": null, |
|
"text": "Explainability trade-off for local explanations for various neighborhood sizes.", |
|
"type_str": "figure" |
|
}, |
|
"TABREF1": { |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"text": "Accuracy results of various models on test and validation sets." |
|
} |
|
} |
|
} |
|
} |