text
stringlengths
0
316k
year
stringclasses
50 values
No
stringclasses
911 values
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 327–336 August 11-16, 2024 ©2024 Association for Computational Linguistics What does Parameter-free Probing Really Uncover? Tommi Buder-Gröndahl University of Helsinki / Yliopistonkatu 3, 00014 Helsinki, Finland [email protected] Abstract Supervised approaches to probing large language models (LLMs) have been criticized of using pre-defined theory-laden target labels. As an alternative, parameter-free probing constructs structural representations bottom-up via information derived from the LLM alone. This has been suggested to capture a genuine “LLMinternal grammar”. However, its relation to familiar linguistic formalisms remains unclear. I extend prior work on a parameter-free probing technique called perturbed masking applied to BERT, by comparing its results to the Universal Dependencies (UD) formalism for English. The results highlight several major discrepancies between BERT and UD, which lack correlates in linguistic theory. This raises the question of whether human grammar is the correct analogy to interpret BERT in the first place. 1 Introduction Probing large language models (LLMs) consists in mapping their internal states to linguistic classes or relations (Rogers et al., 2020; Belinkov, 2022). Most methods use supervised learning for training a probe to predict pre-determined labels (Hewitt and Manning, 2019; Tenney et al., 2019; Kuznetsov and Gurevych, 2020; Manning et al., 2020; Lasri et al., 2022). However, critics have deemed this insufficient for determining whether LLMs actually represent linguistic structures (Kulmizev and Nivre, 2022; Buder-Gröndahl, 2023). For representation proper, the labels should not only be predictable from the LLM; they should somehow capture its internal architecture on a high level of abstraction. A possible way forward is to use parameter-free probing, which shuns separate probing classifiers by extracting structural information directly from the LLM (Clark et al., 2019; Mareˇcek and Rosa, 2019; Wu et al., 2020). As a bottom-up approach, this has been interpreted as uncovering the grammar intrinsic to the LLM without relying on a priori presumptions derived from linguistic theory. In this paper, I focus on a parameter-free probe called perturbed masking, originally presented and applied to BERT by Wu et al. (2020). While it has received criticism for underwhelming results compared to gold-standard parses (Niu et al., 2022), this overlooks its main goal of uncovering BERT’s inherent syntax – which may well deviate from linguistic theory (Wu et al., 2020, 4173). Such deviations do not call for discarding it; instead, they provide insight into how BERT’s architecture can differ from common linguistic assumptions. I compare dependency graphs derived from BERT to the Universal Dependencies (UD) annotation for English, and uncover major discrepancies related to verbal argument structure, noun phrase structure, modifiers, and prepositions. In particular, BERT treats the root (in UD’s annotation) as a head far more often than UD. This effect of being “attracted by the root” is especially strong in recursive embeddings, but also extends beyond these. Moreover, BERT’s behavior tends to resist linguistic explanation. For example, despite major disagreements within linguistic theory, argument structure is ubiquitously treated as clause-bound: no feasible analysis assimilates embedded clause arguments to main clause arguments. Yet, the BERT-parse regularly does exactly this. Indeed, the only cases where BERT’s deviations from UD have a salient linguistic interpretation concern prepositions and some possessive constructions, where dependent-head relations are flipped. The results thus point to the same direction as critiques of supervised probing: the assumption that BERT represents grammar in line with familiar linguistic formalisms lacks proper support. When this is not built directly into the experiment design (via pre-determined target labels), probing reveals fundamental disparities between BERT and commonly accepted syntactic principles. We are thus prompted to question whether human grammar is an appropriate analogy for BERT after all. 327 2 Methodology I describe the parameter-free probing technique investigated (Section 2.1), the dataset (Section 2.2), and the experiment pipeline (Section 2.3). 2.1 Perturbed masking Parameter-free probing aims to construct linguistic information directly from the LLM without separate training. Wu et al. (2020) present a prominent techique called perturbed masking, with which they aim to find “the ‘natural’ syntax inherent in BERT” (p. 4173) by utilizing an independently motivated relation of impact between tokens. I replicated their original setup,1 which uses the bert-base-uncased model presented in Wolf et al. (2020). As input, BERT takes a sequence of tokens x = [x1, ..., xn]. It maps each token xi to a contextual representation Hθ(x)i, where the influence of each token xj ∈x arises via Transformer attention (Vaswani et al., 2017) based on model parameters θ. For perturbed masking, Wu et al. (2020) first mask token xi, giving x\{xi}. They then also mask token xj, giving x\{xi, xj}. The impact of xj to the representation of xi is now measured as follows, where d is Euclidean distance:2 f(xi, xj) = d(Hθ(x\{xi})i, Hθ(x\{xi, xj})i) Impacts between all token pairs are collected into an impact matrix, which is given as input to an algorithm that constructs a directed dependency graph using the Eisner algorithm (Eisner, 1996).3 The intuitive idea is that heads have the highest impact on their dependents in the matrix. 2.2 Data Following Wu et al. (2020), I used the English Parallel Universal Dependencies (PUD) dataset (Zeman et al., 2017). Consisting of 1000 sentences of which I discarded seven (see Appendix A), it covers 21047 UD-annotated tokens. 2.3 Experiments UD assigns each word a head and a dependency relation type (deprel), as exemplified below:4 1https://github.com/LividWo/ Perturbed-Masking#dependency 2Wu et al. (2020) report superior performance to Euclidean distance compared to the difference between probability distributions across targets. 3Wu et al. (2020) also experimented with phrase-structures, but the present setup requires dependency graphs to obtain deprel labels (Section 2.3). See Niu et al. (2022) on phrasestructures generated via perturbed masking. 4All examples are taken from the PUD dataset (shortened). (1) then the commercial ends advmod det nsubj The arrow is read as marking a head-dependent relation (in this direction). The root is its own head, and is typically the main verb. The BERT-parse of the same sentence maps all tokens to the root ends: (2) then the commercial ends Here, UD and BERT differ in which head they assign to the determiner the. I denote this by marking the UD-assigned head-dependent relation above and the BERT-assigned relation below: (3) then the commercial ends BERT UD The challenge in interpreting BERT-parses is that they only give head-dependent relations, not deprels. We thus need external deprels as the theoretical basis of comparing BERT and UD. For this, I use UD-annotations as follows: Dep(x): deprel assigned to x by UD HeadUD(x): head assigned to x by UD HeadBERT (x): head assigned to x by BERT HU(x) = Dep(HeadUD(x)) HB(x) = Dep(HeadBERT (x)) That is, I compare UD- and BERT-assigned heads in terms of their UD-deprels. These values for the determiner in the example above are: Dep(the) = det HeadUD(the) = commercial HeadBERT (the) = ends HU(the) = Dep(commercial) = nsubj HB(the) = Dep(ends) = root Note that, since Dep is derived from UD, HB should not be read as directly describing how BERT treats the head. Instead, it describes how UD would treat the head assigned by BERT. By classifying discrepancies between BERT and UD, I assess their prevalence and nature in the PUD data. I focus on four phenomena: argument structure, noun phrase (NP) structure, adjective/adverb modifiers, and prepositional phrases (PPs). Sourcecode for the experiments is openly available.5 5https://github.com/tombgro/ parameter-free-probing 328 3 Results I replicated the original results of Wu et al. (2020) with their best setup on the PUD data,6 and investigated shifts between BERT and UD in terms of Dep, HU, and HB. Section 3.1 presents general findings, Sections 3.2–3.5 cover linguistic details, and Appendix B provides the raw data. 3.1 General findings Of all 21047 tokens, 58% were subject to a headdependent shift between UD and BERT. Nearly all Dep-types were involved here, and a clear majority (74%) had a shift ratio over 50%. Clearly the most common HB was root; i.e. shifts typically involved BERT assigning a head which was the root in the UD-parse. This covered 35% of all shifts. Wu et al. (2020, 4169) suggest that BERT mostly learns local dependencies. To assess this, we calculated dependent-head distances from both parses, and obtained contrasting results: the average is higher in BERT (3.66) than in UD (3.52). Locality thus does not explain the discrepancies. A likely explanation for the increased average dependent-head distance in BERT is its tendency to over-assign the root as a head. As covered in upcoming sections, this can lead to longer dependent-head distances in cases like embedded clauses, where the original UD-head is closer to its dependent than the root. 3.2 Argument structure Table 1 collects shifts per Dep–HU pair for active and passive clause subjects (nsubj, nsubj:pass) and direct objects (obj).7 In arguments of the root, BERT and UD mostly overlap with shift ratios of 15% −29%. However, with embedded clauses (ccomp, xcomp, conj, acl:relcl), BERT regularly continues to assign arguments to the root, with far higher shift ratios (64% −94%) and root as the most common HB. An example is shown below, where BERT assigns the main verb as the head of an embedded subject: (4) that ’s not what we need BERT UD The BERT-parse thus seems to shun recursion, preferring the root even for embedded arguments. 6 This gives the Unlabeled Attachment Score (UAS) of 41.7, the Undirected UAS (UUAS) of 52.1, and the Neutral Edge Direction (NED) score of 69.6. 7Tables 1–4 contain shifts with the minimum count of 20. “Ratio” denotes the frequency of shifts for each Dep–HU pair. Dep HU Ratio Count nsubj root 0.24 198 acl:relcl 0.81 140 ccomp 0.92 101 advcl 0.79 80 conj 0.83 68 parataxis 0.64 46 nsubj:pass root 0.29 38 acl:relcl 0.94 32 advcl 0.91 21 obj advcl 0.66 86 xcomp 0.75 82 acl:relcl 0.78 58 conj 0.66 58 acl 0.73 52 root 0.15 47 ccomp 0.73 29 Table 1: Verbal argument structure: subjects and objects. The same pattern also repeats for objects: (5) projects include extending the district BERT UD While the explanation of this behavior is not fully clear, in general it shows that the root has an especially high impact for determining the contextual embeddings of other words. One salient possibility is that this arises because the root is usually a main clause verb, which has central influence on both grammatical matters (such as inflection or valency) and semantic matters (such as the possible semantic classes of arguments). Hence, when BERT is pre-trained via masked-token prediction (Devlin et al., 2019), attending to the main clause verb is likely to give useful information pertaining to many masked tokens. A general high impact for the root would follow, in line with these findings. 3.3 Noun phrase structure Table 2 lists NP-related shifts for three variants of Dep: determiners (det), possessors (nmod:poss), and numerals (nummod). Some of these shifts are grammatically salient: for instance, UD treats the possessor as headed by the possessed noun, but BERT often takes it to be headed by the clitic ’s: (6) Clinton ’s large bank account UD BERT 329 Dep HU Ratio Count det obl 0.52 261 obj 0.67 253 nsubj 0.54 208 nmod 0.49 191 conj 0.57 44 nsubj:pass 0.54 43 nmod:poss 0.64 23 appos 0.68 21 nmod:poss obj 0.70 56 nmod 0.72 55 obl 0.58 54 nsubj 0.70 53 nummod obl 0.69 55 nmod 0.71 25 Table 2: Determiners, possessors, and numerals. However, many cases are linguistically incoherent. For example, BERT sometimes takes possessors to modify a preposition rather than a noun: (7) for his review BERT UD As usual, BERT also regularly assigns the root as the head, as for the determiner (a) shown here: (8) there was a time BERT UD In principle, the DP-analysis in formal linguistics treats determiners as noun phrase heads (Abney, 1987), and might initially justify taking the determiner to head the object (a time). However, this would require the noun (time) to be headed by the determiner, but instead it is headed by the root as well. BERT thus does not implement the DP-analysis; the determiner is simply attracted by the root. The same occurs for numeral modifiers: (9) 300,000 people (...) were involved BERT UD Since possessors, determiners, and numerals are the sine qua non of NP-arguments/modifiers, these results illustrate a drastic shift between BERT and widely shared syntactic assumptions about NPs. 3.4 Adjective and adverb modifiers Table 3 shows shifts related to adjectives (amod), adverbs (advmod), and nominal modifiers (nmod). Dep HU Ratio Count amod obj 0.62 151 obl 0.52 151 nmod 0.53 132 nsubj 0.53 118 conj 0.63 56 nsubj:pass 0.52 29 compound 0.57 21 advmod root 0.18 57 conj 0.62 53 advcl 0.72 51 acl:relcl 0.73 40 amod 0.73 36 advmod 0.71 32 nummod 0.75 27 ccomp 0.68 27 obl 0.72 21 xcomp 0.72 21 nmod obl 0.88 243 obj 0.89 202 nsubj 0.87 163 nmod 0.84 127 conj 0.88 59 nsubj:pass 0.83 34 appos 0.85 23 root 0.38 20 Table 3: Adjectival, adverbial, and nominal modifiers. The root is a prominent HB in embedded clauses as well as nested modifiers, indicating that BERT does not reliably treat modifiers recursively. For example, embedded wh-adverbs such as why are often assigned as dependents of the main verb: (10) I don’t know why I chose her BERT UD However, the lack of recursion is insufficient to explain all modifier-related shifts. In particular, adjectives of even non-embedded noun phrases are regularly treated as dependents of the root: (11) this will put new limits on (...) BERT UD This behavior resists interpretation in all prominent syntactic frameworks on adjectives, which ubiquitously treat them as modifiers of nouns or NPs (c.f. Baker 2003; Dixon 2004; Hofherr and Matushansky (ed.) 2010). 330 Dep HU Ratio Count case obl 0.72 877 nmod 0.73 783 nmod:poss 0.83 85 obl root 0.47 283 acl:relcl 0.97 117 advcl 0.95 92 conj 0.91 90 xcomp 0.95 89 acl 0.93 88 ccomp 0.96 50 parataxis 0.96 25 Table 4: Prepositional phrases. 3.5 Prepositional phrases Table 4 collects shifts related to prepositions or clitics (case) and their complements (obl). BERT regularly treats prepositions as dependents of the token modified by the PP, while UD takes them to be headed by the complement noun: (12) plenty of other military hardware BERT UD BERT also regularly treats the complement as the preposition’s dependent, in contrast to UD linking it directly to the token modified by the PP: (13) browse at the morning market BERT UD This is especially interesting since here UD prefers the root as opposed to BERT, unlike in our other findings. It thus looks like a genuine syntactic difference. However, the pattern is no longer reliable when the PP modifies a non-root, as shown by the high shift ratios with embedded clauses as HU. The most prominent HB here was again root. 3.6 Summary I draw four take-home messages: 1. The root is treated as a head far more by BERT than by UD, even across phrase boundaries. 2. BERT’s overlap with UD drastically decreases in embeddings, displaying a lack of recursion. 3. Headedness in PPs is systematically flipped between UD and BERT. 4. Overall, BERT-parses commonly lack a coherent linguistic interpretation. 4 Discussion The results are not easily explained by some trivial non-linguistic property. Locality does not account for BERT’s deviations from UD, since the average head-dependent distance is actually higher in BERT-parses (Section 3.1). Another initial possibility could be that BERT mimics naive right-chain performance.8 However, most examples in Sections 3.2–3.5 involve BERT assigning the head leftward (i.e. the dependent rightward). Sometimes this even goes directly against right-chain-like annotation in UD, as in example (11) (Section 3.4). It is also worth raising the controversial status of the UD format itself (c.f. Rehbein et al. 2017; Osborne and Gerdes 2019). The central issue here concerns function words, which UD treats as dependents of content words – going against alternative formats such as Surface-syntactic Universal Dependencies (SUD) (Gerdes et al., 2018) where these relations are reversed. The corresponding distinction appears in our results as well, with respect to prepositions and NPs (Section 3.5). BERT’s performance might thus accord better alternative formats to UD, such as SUD. That said, most discrepancies discussed in Section 3 are not specific only to UD. All mainstream syntactic frameworks distinguish between arguments/modifiers of main and embedded clauses (Sections 3.2, 3.4), and treat possessors, determiners, numerals, or adjectives as modifying nouns rather than verbs (Sections 3.3, 3.4). With the possible exception of (root-modifying) PPs (Section 3.5), the shifts are not made linguistically coherent by minor changes to the syntactic formalism. 5 Conclusions and future work This study uncovered several discrepancies between BERT and UD. While some were syntactically interpretable, BERT’s prevailing tendency to treat the root as a head across phrase boundaries lacks a clear linguistic analogy. This puts to question the idea that BERT should be interpreted in line with traditional grammatical formalisms. Instead, it highlights the need to explain LLMs in their own terms – avoiding reliance on a priori linguistic assumptions not motivated by LLMs themselves. 8Wu et al. (2020) report a 35.0 UAS for the naive rightchain baseline in comparison to the 41.7 UAS for BERT. A related issue concerns the comparison between BERT-derived phrase-structures and a naive right-branching baseline, the similarity between which is covered by Niu et al. (2022). 331 Limitations This short paper focused on one model architecture (BERT), one parameter-free probing technique (perturbed masking), and one English dataset (PUD). Extending the work to cover multiple variants of each is an important future prospect. I would especially highlight the importance of inter-lingual comparison, as well as more careful attention to assumptions behind the linguistic formalism. Methodologically, this study combined quantitative and qualitative analysis, both of which have limitations. Numerical information alone (in Tables 1–4) is insufficient for yielding thorough syntactic details on dependent-head shifts. For obtaining such further analyses, specific parse-pairs between BERT and UD need to be assessed, which is how the example cases were attained. But – as manual work – this is bound to have a smaller coverage. Without seeing any easy way out of this trade-off, I emphasize the need for further work extending both quantitative and qualitative coverage of related phenomena. I hope to have provided a fruitful starting-point for this line of research. Ethics Statement Prior source code and data used in the experiments is available as open-source, and the link is given in the paper (Section 2.1). No privacy-sensitive or otherwise harmful data was used, and no experiments on humans or non-human animals were conducted. The source code of the experiments is made available as open-source (Section 2.3). Acknowledgements I thank Jörg Tiedemann and Timothee Mickus for helpful discussions related to the paper. This project was funded by the Academy of Finland (decision number 350775). References Steven Abney. 1987. The English Noun Phrase in its Sentential Aspect. PhD thesis, Massachusets Institute of Technology. Mark Baker. 2003. Lexical Categories. Cambridge University Press, Cambridge. Yonatan Belinkov. 2022. Probing classifiers: Promises, shortcomings, and advances. Computational Linguistics, 48(1):207–219. Tommi Buder-Gröndahl. 2023. The ambiguity of BERTology: what do large language models represent? Synthese, 203:15. Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning. 2019. What does BERT look at? An analysis of BERT’s attention. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 276–286. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics, pages 4171–4186. Robert M. W. Dixon. 2004. Adjective classes in typological perspective. In Robert M. W. Dixon and Alexandra Y. Aikhenvald, editors, Explorations in Linguistic Typology 1, pages 1–49. Oxford University Press, New York. Jason M. Eisner. 1996. Three new probabilistic models for dependency parsing: An exploration. In Proceedings of the 16th conference on Computational linguistics: Volume 1, pages 340–345. Kim Gerdes, Bruno Guillaume, Sylvain Kahane, and Guy Perrier. 2018. SUD or surface-syntactic Universal Dependencies: An annotation scheme nearisomorphic to UD. In Proceedings of the Second Workshop on Universal Dependencies (UDW 2018), pages 66–74. John Hewitt and Christopher D. Manning. 2019. A structural probe for finding syntax in word representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4129–4138. Patricia Cabredo Hofherr and Ora Matushansky (ed.). 2010. Adjectives: Formal Analyses in Syntax and Semantics. John Benjamins, Amsterdam. Artur Kulmizev and Joakim Nivre. 2022. Schrödinger’s tree—on syntax and neural language models. Frontiers in Artificial Intelligence, 5. 332 Ilia Kuznetsov and Iryna Gurevych. 2020. A matter of framing: The impact of linguistic formalism on probing results. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, pages 171–182. Karim Lasri, Tiago Pimentel, Alessandro Lenci, Thierry Poibeau, and Ryan Cotterell. 2022. Probing for the usage of grammatical number. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics Volume 1: Long Papers, pages 8818–8831. Christopher D. Manning, Kevin Clark, and John Hewitt. 2020. Emergent linguistic structure in artificial neural networks trained by self-supervision. PNAS, 117(48):30046–30054. David Mareˇcek and Rudolf Rosa. 2019. From balustrades to Pierre Vinken: Looking for syntax in transformer self-attentions. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 263– 275. Jingcheng Niu, Wenjie Lu, Eric Corlett, and Gerald Penn. 2022. Using Roark-Hollingshead distance to probe BERT’s syntactic competence. In Proceedings of the Fifth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 325–334. T. Osborne and K. Gerdes. 2019. The status of function words in dependency grammar: A critique of universal dependencies (UD). Glossa, 4(1):17. I. Rehbein, J. Steen, B. Do, and Anette Frank. 2017. Universal dependencies are hard to parse – or are they? In Proceedings of the Fourth International Conference on Dependency Linguistics, pages 218– 228. Anna Rogers, Olga Kovaleva, and Anna Rumshisky. 2020. A primer in BERTology: What we know about how BERT works. Transactions of the Association for Computational Linguistics, 8:842–866. Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019. BERT rediscovers the classical NLP pipeline. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4593– 4601. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhins. 2017. Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing, pages 6000–6010. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45. Zhiyong Wu, Yun Chen, Ben Kao, and Qun Liu. 2020. Perturbed masking: Parameter-free probing for analyzing and interpreting BERT. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4166–4176. Daniel Zeman, Martin Popel, Milan Straka, Jan Hajiˇc, Joakim Nivre, Filip Ginter, Juhani Luotolahti, Sampo Pyysalo, Slav Petrov, Martin Potthast, Francis Tyers, Elena Badmaeva, Memduh Gokirmak, Anna Nedoluzhko, Silvie Cinková, Jan Hajiˇc jr., Jaroslava Hlaváˇcová, Václava Kettnerová, Zdeˇnka Urešová, Jenna Kanerva, Stina Ojala, Anna Missilä, Christopher D. Manning, Sebastian Schuster, Siva Reddy, Dima Taji, Nizar Habash, Herman Leung, MarieCatherine de Marneffe, Manuela Sanguinetti, Maria Simi, Hiroshi Kanayama, Valeria de Paiva, Kira Droganova, Héctor Martínez Alonso, Çaˇgrı Çöltekin, Umut Sulubacak, Hans Uszkoreit, Vivien Macketanz, Aljoscha Burchardt, Kim Harris, Katrin Marheinecke, Georg Rehm, Tolga Kayadelen, Mohammed Attia, Ali Elkahky, Zhuoran Yu, Emily Pitler, Saran Lertpradit, Michael Mandl, Jesse Kirchner, Hector Fernandez Alcalde, Jana Strnadová, Esha Banerjee, Ruli Manurung, Antonio Stella, Atsuko Shimada, Sookyoung Kwak, Gustavo Mendonça, Tatiana Lando, Rattima Nitisaroj, and Josie Li. 2017. Conll 2017 shared task: Multilingual parsing from raw text to universal dependencies. In CoNLL 2017 Shared Task, pages 1–19. 333 A Appendix: Discarded data The algorithm for generating a dependency graph – obtained from Wu et al. (2020) – assumes that token IDs are unique and match positions in the sentence. However, in some coordinated sentences, the UD parse has the same ID appearing in two consecutive tokens. The BERT-parse, in turn, treats the repeated tokens as having separate IDs, which creates a disparity. Table 5 shows an example: Token Dep ID (UD) ID (BERT) Durán nsubj 1 1 acts root 2 2 acts conj 2 3 as case 3 4 spokesman obl 4 5 and cc 5 6 Ángel conj 6 7 Pintado flat 7 8 as case 8 9 treasurer obl 9 10 Table 5: Mismatch between UD and BERT in token IDs. Here, the verb (acts) is repeated since it serves a double role as the root and a conjunct. UD assigns the same ID (2) to both instances, but BERT uses an increasing counter of IDs. Hence, after the repetition, the respective token IDs between UD and BERT no longer match. Since dependent-head pairs are encoded in terms of IDs, this results in artificial disparities between the parses. Because the number of such sentences in the PUD data was marginal (7), I discarded them in the experiments to avoid this problem. However, the original UAS, UUAS, and NED scores – obtained via replicating Wu et al. (2020) – are calculated from the full PUD data containing these sentences (see Footnote 6). B Appendix: complete results Table 6 displays each Dep that was subject to a dependent-head shift between BERT and UD. Tables 7–8 show the same per HU and HB, respectively. Table 9 lists all shifts that appeared at least 20 times in the format Dep–HU–HB. This comprises the data discussed in the main paper, from which Tables 1–4 are derived. Dep Ratio Count case 0.7251 1799 punct 0.5135 1252 det 0.5433 1105 nmod 0.8500 912 obl 0.7082 869 amod 0.5402 719 nsubj 0.4683 650 compound 0.6675 538 conj 0.8176 511 mark 0.7964 442 obj 0.5011 438 cc 0.7615 431 advmod 0.5035 426 nmod:poss 0.6703 244 advcl 0.7158 209 aux 0.4474 183 acl:relcl 0.8483 179 xcomp 0.5815 157 nummod 0.6071 153 nsubj:pass 0.5720 135 acl 0.6895 131 appos 0.8310 118 flat 0.4978 114 cop 0.3270 103 ccomp 0.7259 98 aux:pass 0.2915 79 parataxis 0.5979 58 fixed 0.5243 54 root 0.0363 36 compound:prt 0.4714 33 nmod:tmod 0.6667 26 csubj 0.5926 16 expl 0.2459 15 obl:npmod 0.7000 14 obl:tmod 0.6111 11 nmod:npmod 0.5263 10 det:predet 0.8889 8 cc:preconj 0.5455 6 csubj:pass 1.0000 3 dislocated 1.0000 2 reparandum 1.0000 1 discourse 1.0000 1 iobj 0.1000 1 Table 6: All dependency-head shifts ordered by Dep (“Ratio”: ratio of shifts from all tokens with the Dep). 334 HU Ratio Count obl 0.6802 2048 root 0.2664 1694 nmod 0.6788 1655 conj 0.7654 1292 obj 0.7283 946 nsubj 0.6651 872 advcl 0.7791 663 acl:relcl 0.8109 579 xcomp 0.8168 495 ccomp 0.8327 458 acl 0.7762 281 appos 0.7301 238 parataxis 0.7409 223 nsubj:pass 0.6494 176 amod 0.7368 140 nmod:poss 0.7707 121 compound 0.6289 100 advmod 0.7810 82 csubj 0.7703 57 nummod 0.8036 45 flat 0.8276 24 cc 0.8750 14 obl:npmod 0.6667 14 obl:tmod 0.5833 14 csubj:pass 0.8667 13 mark 0.6000 9 nmod:tmod 0.2857 8 case 0.1591 7 dislocated 1.0000 6 nmod:npmod 0.8571 6 iobj 0.8333 5 dep 1.0000 2 det 0.6667 2 cc:preconj 1.0000 1 Table 7: All dependency-head shifts ordered by HU (“Ratio”: ratio of shifts from all tokens with the HU). HD Ratio Count root 0.4763 4244 case 0.9684 1135 amod 0.9386 764 compound 0.9107 602 nsubj 0.5525 542 obl 0.3431 503 nmod 0.3771 474 det 0.9978 453 punct 1.0000 404 obj 0.5306 399 advmod 0.9425 377 cc 0.9936 310 conj 0.4107 276 mark 0.9636 159 nummod 0.9341 156 advcl 0.4519 155 cop 1.0000 122 nsubj:pass 0.5622 122 nmod:poss 0.7707 121 aux 1.0000 119 xcomp 0.5174 119 acl 0.5622 104 flat 0.9533 102 aux:pass 1.0000 92 acl:relcl 0.3571 75 parataxis 0.4621 67 ccomp 0.3907 59 appos 0.3931 57 fixed 1.0000 55 compound:prt 1.0000 33 nmod:tmod 0.5455 24 expl 1.0000 14 obl:npmod 0.6316 12 det:predet 1.0000 9 nmod:npmod 0.9000 9 csubj 0.3462 9 cc:preconj 1.0000 4 obl:tmod 0.2308 3 reparandum 0.6667 2 dislocated 1.0000 1 discourse 1.0000 1 vocative 1.0000 1 csubj:pass 0.3333 1 Table 8: All dependency-head shifts ordered by HB (“Ratio”: ratio of shifts from all tokens with the HB). 335 Dep–HU–HB shift (count) case-obl-root (521) case-nmod-root (231) cc-conj-root (191) det-obj-root (141) det-nsubj-root (134) case-nmod-obl (122) punct-root-obl (117) nmod-obl-root (107) det-obl-case (101) det-nmod-case (100) case-nmod-obj (99) obl-root-case (97) mark-xcomp-root (87) nmod-nsubj-root (85) mark-advcl-root (84) nmod-obj-root (83) punct-root-nsubj (79) case-nmod-nsubj (79) case-nmod-nmod (73) det-obl-amod (66) nsubj-ccomp-root (66) amod-obj-root (64) det-obl-root (62) amod-obl-root (61) case-nmod:poss-root (56) nmod-nmod-root (54) punct-root-advmod (53) case-obl-acl (52) nsubj-acl:relcl-root (52) amod-nsubj-root (49) punct-root-punct (45) compound-nsubj-root (45) mark-ccomp-root (44) compound-obl-root (44) compound-nmod-root (43) obl-xcomp-root (43) obl-acl-root (43) obl-acl:relcl-root (43) punct-conj-cc (41) obl-conj-root (41) amod-obj-det (40) obl-root-amod (40) punct-root-nmod (38) amod-nmod-root (38) obl-advcl-root (38) obl-root-compound (38) nsubj-advcl-root (37) obj-advcl-root (36) nummod-obl-root (36) punct-root-parataxis (35) nsubj-root-amod (35) obj-xcomp-root (35) punct-conj-conj (35) nmod-obl-case (34) case-obl-advcl (33) case-obl-conj (33) punct-conj-root (32) nmod-obj-case (32) det-nmod-amod (31) amod-nmod-case (31) nmod-nmod-case (31) nsubj-root-compound (31) nmod:poss-obl-case (31) punct-appos-root (30) case-obl-acl:relcl (30) conj-nmod-root (30) case-nmod-det (29) det-nsubj-amod (28) nmod-obj-amod (28) cc-conj-obl (27) punct-conj-nmod (26) case-nmod-conj (26) det-nmod-root (26) det-obj-advcl (26) nmod-obl-compound (26) det-nmod-compound (25) nmod-conj-root (25) compound-obj-root (25) nsubj-conj-root (25) obj-acl-root (25) det-nsubj:pass-root (24) obl-root-nmod (24) conj-nsubj-root (24) amod-obl-det (23) nmod:poss-nmod-case (23) nmod:poss-nsubj-root (23) punct-conj-obl (22) det-obj-amod (22) obl-acl:relcl-case (22) nsubj-root-case (22) cc-conj-nmod (22) advmod-advcl-root (22) conj-nmod-cc (22) nmod-nsubj-case (21) obl-root-nummod (21) flat-nsubj-root (21) obj-acl:relcl-root (21) acl-obj-root (21) punct-root-det (20) case-obl-xcomp (20) nmod-obl-amod (20) compound-obl-det (20) compound-nmod-case (20) obl-ccomp-root (20) Table 9: Dep–HU–HB shifts and their counts (minimum count: 20). 336
2024_short
31
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 337–345 August 11-16, 2024 ©2024 Association for Computational Linguistics ATLAS: Improving Lay Summarisation with Attribute-based Control Zhihao Zhang1, Tomas Goldsack2, Carolina Scarton2, Chenghua Lin3∗ 1College of Economics and Management, Beijing University of Technology, China, 2Department of Computer Science, University of Sheffield, UK 3Department of Computer Science, The University of Manchester, UK [email protected] {tgoldsack1, c.scarton}@sheffield.ac.uk [email protected] Abstract Automatic scientific lay summarisation aims to produce summaries of scientific articles that are comprehensible to non-expert audiences. However, previous work assumes a one-sizefits-all approach, where the content and style of the produced summary are entirely dependent on the data used to train the model. In practice, audiences with different goals and levels of expertise will have specific needs, impacting what content should appear in a lay summary and how it should be presented. Aiming to address this disparity, we propose ATLAS, a novel abstractive summarisation approach that can control various properties that contribute to the overall “layness" of the generated summary using targeted control attributes. We evaluate ATLAS on a combination of biomedical lay summarisation datasets, where it outperforms state-of-the-art baselines using both automatic and human evaluations. Additional analyses provided on the discriminatory power and emergent influence of our selected controllable attributes further attest to the effectiveness of our approach. 1 Introduction Lay summarisation is defined as producing a summary of a scientific article that is comprehensible to non-experts (King et al., 2017). Recent work has shown that, when compared to technical abstracts, lay summaries typically are more readable (lexically and syntactically), more abstractive, and contain more background information, enabling a non-technical reader to better understand their contents (Luo et al., 2022; Cohen et al., 2021; Goldsack et al., 2023b). However, the extent to which these attributes are required within a lay summary depends largely on the specific needs of the reader. For example, a scientist from a related field will require less background information to understand an article’s contents than an entirely non-technical ∗Corresponding author reader, but they might still require domain-specific jargon to be simplified or explained. Despite its obvious benefits, to our knowledge, no work has yet explored how we can enable such fine-grained control over comprehensibility-related aspects for lay summary generation. In this paper, we propose ATLAS (ATtributecontrolled LAy Summarization), a novel scientific summarisation approach that aims to control four attributes targeting distinct properties contributing to the overall “layness" of the generated summary, thus allowing it to cater to the specific needs of different audiences. Although recent attempts at text simplification and story generation have had success influencing the style (Martin et al., 2020; Kong et al., 2021; Sheang and Saggion, 2021) and content (Kong et al., 2021; Tang et al., 2024) of generated text using fine-grained controllable attributes, no work to our knowledge has explored this for scientific summarisation. Luo et al. (2022) recently addressed the task of readability-controlled scientific summarisation, however, this is only done at a binary level, training a model to produce either a technical or non-technical summary based on a single control token. Our approach innovates by enabling a greater degree of controllability through the flexible handling of multiple attributes, allowing it to produce more diverse summaries and better address the specific needs of different audiences. Our results show that ATLAS outperforms state-of-the-art baselines in both automatic and human evaluations across three summary types with varying levels of technicality. Additional analyses confirm that attribute control positively influences performance, and suggest the selected control attributes are able to effectively capture the difference between technical and nontechnical summaries. 337 2 Methodology As discussed in §1, ATLAS aims to control four targeted attributes. We use BART-base as the base model for ATLAS as it represents the state-of-theart benchmark in previous lay summarisation works (Guo et al., 2021; Goldsack et al., 2022). Formally, each document x = (x1, x2, ..., xn) of length n, where xi is the i-th token, is prepended with a control token sequence l such that x = (l, x1, x2s, ..., xn). l consists of our four selected control tokens, each of which targets distinct characteristics of the output summary that contributes to its overall comprehensibility. We describe each aspect below: Length (L) The length of the output summary in characters. A more lay audience may require a longer summary to aid comprehension. Readability (R) How easy it is to read the text. This is measured using the Flesh-Kincaid Grade Level (FKGL) metric, which estimates the reading grade level (US) required to understand the generated text based on the total number of sentences, words, and syllables present within it. Background information (BG) The percentage of sentences classified as containing primarily background information. Intuitively, a more lay audience will require greater levels of background information to contextualise an article. Content word entropy (CWE) The average entropy of content words. We hypothesise that jargon terms are likely to possess higher entropy values, thus lower average CWE is likely to be a property of more lay text. Since jargon terms are predominately nouns, we extract noun phrases as content words using CoreNLP library (Manning et al., 2014).We then follow Xiao et al. (2020) to calculate I(xi) entropy of a given token xi as the negative logarithm of its generation probability P(xi), which is directly extracted from a pre-trained language model. I(xi) = −logP(xi) (1) During model training, true attribute values (as calculated on reference summaries) are used, allowing the model to learn to associate attribute values with summary properties. For all attributes, values are discretized into 10 fixed-width bins depending on their respective range in the train split (from minimum to maximum observed value), resulting in 10 unique control tokens for each attribute which are added to the vocabulary. For each attribute at test time, we use the most common bin value observed for reference summaries of the training set as attribute values. 3 Experimental Setup Data. We experiment on the biomedical lay summarisation datasets introduced in Goldsack et al. (2022), eLife (4.8k articles) and PLOS (27.5k articles), for which target lay summaries have been shown to contain different levels of “layness". Specifically, eLife’s lay summaries have been characterized as longer, more readable, and more abstractive than those of PLOS, as well as being empirically observed to be suitable for a more lay audience. We, therefore, combine both of these datasets, allowing us to expose ATLAS to a greater variety of attribute values during training.1 For each article in the combined dataset, we train our ATLAS to produce both the technical abstract and lay summary, using our control attributes to differentiate between them. Evaluation. We employ several automatic metrics to evaluate the performance of ATLAS. In line with common summarisation practice, we calculate ROUGE-1,2, and L variants (Lin, 2004) and BERTScore (Zhang et al., 2019). We also measure Dale-Chall Readability Score, a metric that estimates US grade level based on the frequency of common words. Baselines. To enable fair comparison, we rerun many of the baseline approaches used by Goldsack et al. (2022) (which have the abstract included in the input) on the combined datasets. Specifically, we rerun the Lead-3, Lead-K, and oracle heuristic baselines; TextRank (Mihalcea and Tarau, 2004), LexRank (Erkan and Radev, 2004), and HipoRank (Dong et al., 2021) unsupervised models; and BART and BARTScaffold supervised models. Here, we use the transformer-based BART base model (Lewis et al., 2020), which we fine-tune on our own datasets. BARTScaffold is the recreation of a model from Goldsack et al. (2022) which is trained using a binary control token (<abs> or <lay>) to produce either an abstract or lay summary for an article. This model is equivalent to that pro1To combine the datasets, we merge the training and validation sets. We evaluate on the test sets separately. 338 Figure 1: Visualisation of the density distributions of controllable attribute values for each summary type in the combined train split. posed by Luo et al. (2022), the only previous work on controllable lay summarisation.2 Finally, we include two baselines based on ChatGPT (3.5-turbo), so as to compare against an accessible and widely used method of controlling text generation (i.e., prompt engineering). Our first GPT baseline (GPT3.5-zs) uses the following zeroshot prompts: (i) “Summarize the following article for an expert audience that is familiar with the technical aspects of the content” to generate technical abstracts; (ii) “Summarize the following article for a non-expert audience that has some familiarity with the technical aspects of the content” to generate PLOS lay summaries, and (iii) “Summarize the following article for a non-expert audience that has no familiarity with the technical aspects of the content” to generate eLife lay summaries. Our second GPT baseline (GPT3.5-mdc) replicates the method of Turbitt et al. (2023), the best-performing team of the recent BioLaySumm shared task (Goldsack et al., 2023a). Based on in-context learning, this method dynamically selects the maximum number of input-output examples that fit in the context window (separated by the simple prompt “Explanation:”) to generate lay summaries based on only the article abstract. Implementation Details. As mentioned in §2, we employ BART-base as our base model. We train our ATLAS for a maximum of 5 epochs on a GeForce GTX-1080Ti GPU, retaining the checkpoint with the best average ROUGE-1/2/L score on the validation set. We set the batch size to 1 and keep the α scale factor (§2) at the default value of 0.2 from Kong et al. (2021). For calculating control attributes, we use SciBERT (Beltagy et al., 2019) for entropy calculation, and we employ a BERT-based sequential classi2The original code for Luo et al. (2022) is not yet available at the time of writing and their results are reported on a different dataset and thus are not comparable. Summary type Precision Recall F1 Abstract 0.69 0.75 0.72 eLife-Lay 0.71 0.71 0.71 PLOS-Lay 0.73 0.66 0.71 Table 1: Classifier performance for 3-way classification between summary types on the combined test set. fier (Cohan et al., 2019) trained on the PubMedRTC dataset (Dernoncourt and Lee, 2017) for background sentence classification (as described in Goldsack et al. (2022)). We compute the FKGL readability score using the textstat package. 4 Experimental Results Discriminatory ability of control attributes. To validate the ability of our controllable attributes to distinguish between different summary types, we plot the distribution of attribute values for each type in Figure 1. The figure suggests that, in combination, the attributes are able to capture characteristic differences between summary types, as instances in which two summary types share a similar distribution for one attribute can typically be separated by other attributes.3 To further evidence this, we use the training set to train a simple logistic regression classifier, using only the attribute values of the reference summaries as features, to discriminate between reference summary types. The test set results in Table 1 show that all summary types are classified with an F1-score above 0.7, attesting to the discriminatory power of our control attributes. Summarisation performance. Table 2 presents the performance of ATLAS and baseline models using automatic metrics on the test sets of PLOS 3E.g., PLOS lay summaries and abstracts have similar readability distributions but differ in their comprehensibility, length, and entropy distributions. Similarly, PLOS and eLife lay summaries have similar comprehensibility distributions but differ in their readability and length. 339 Model Abstract Lay summary - PLOS Lay summary - eLife R-1 R-2 R-L BS DCRS FKGL R-1 R-2 R-L BS DCRS FKGL R-1 R-2 R-L BS DCRS FKGL Heuristic Lead-3 23.86 5.66 21.48 81.17 12.66 14.82 27.41 6.87 24.61 83.36 12.66 15.08 19.41 4.06 18.02 81.65 12.65 13.30 Lead-K 35.69 9.07 32.70 82.86 11.69 14.49 38.28 9.45 34.8 83.72 11.88 14.95 37.27 7.53 35.18 82.05 10.58 11.89 Oracle 60.08 27.48 55.95 87.35 11.12 15.15 57.82 23.92 53.37 87.13 11.20 15.28 48.92 13.42 46.30 82.94 10.51 13.18 Unsup. TextRank 40.26 11.53 36.02 83.83 11.78 20.08 37.55 8.50 33.28 83.43 11.87 20.27 33.88 5.79 31.55 81.16 11.30 18.98 LexRank 38.22 13.06 35.42 83.85 9.70 14.23 31.20 9.09 28.72 82.97 9.70 14.59 32.25 5.73 30.45 80.67 9.68 13.32 HipoRank 36.95 10.19 33.89 83.22 12.15 14.46 37.67 9.22 34.28 83.68 12.15 14.69 31.50 5.17 29.68 80.88 12.13 12.13 Supervised BART 43.34 13.14 39.80 85.48 11.33 14.40 43.52 12.09 39.67 85.70 11.29 14.54 31.17 6.74 29.20 83.55 11.15 13.87 BARTScaffold 43.13 12.87 39.66 85.33 11.10 14.14 43.73 12.22 39.92 85.67 11.30 14.58 43.01 10.82 40.54 84.88 9.68 11.85 GPT3.5-zs 28.69 6.52 15.04 82.76 11.70 14.32 42.74 12.70 22.28 86.32 10.40 13.19 33.72 8.45 16.95 84.36 10.36 13.03 GPT3.5-mdc 44.41 14.16 41.12 86.55 10.36 13.32 37.97 9.39 35.57 84.22 10.78 13.70 ATLAS 45.87 14.08 42.32 85.54 10.96 14.21 44.44 12.33 40.60 85.70 11.22 14.58 46.80 12.57 44.14 85.20 8.95 10.87 ATLASOracle 46.11 14.07 42.51 85.69 10.99 14.13 44.97 12.49 41.02 85.82 11.21 14.48 46.61 12.29 43.95 85.11 9.18 11.39 Table 2: Summarization performance on the PLOS and eLife test sets (abstracts combined). R = ROUGE F1 (↑), BS = BERTScore (↑), DCRS = Dale-Chall Readability Score (↓), FKGL = Flesh-Kincaid Grade Level (↓). For supervised models, we highlight the best score obtained for each metric in bold and underline second best. and eLife. We include the results for ATLAS under two conditions: 1) one utilizing the average value for each attribute observed in the training data for each summary type (ATLAS); and 2) one using true attribute values obtained from gold standard summaries (ATLASOracle), where ATLASOracle is intended to provide an upper bound of the obtainable performance using our control attributes. For all metrics, it is evident from Table 2 that ATLAS exceeds the performance of all baseline approaches for both eLife lay summaries and abstracts, demonstrating a strong ability to control the technicality of generated text whilst producing high-quality summaries. Interestingly, although the GPT3.5-mdc baseline achieves a slightly stronger all-round performance for PLOS lay summaries, it fails to maintain this for the more “lay" summaries of eLife where ATLAS achieves significantly better performance, indicating that our control attributes can effectively capture these differences. In all cases, ATLAS also achieves scores that are comparable to (and sometimes exceeding) that of ATLASOracle, suggesting that the use of the most frequently observed bin value for control attributes is effective for producing the appropriate characteristics for each summary type. Ablation study. To assess the contribution of each attribute to model performance, we conduct an ablation study, evaluating ATLASOracle under different configurations.4 Table 3 reports the results of this study for abstracts and lay summaries on the combined test sets of PLOS and eLife. The table shows that the removal of control attributes has a significant detrimental effect on performance. Additionally, when only a single attribute is included, the length-based control has 4We use ATLASOracle as the subject of this experiment rather than ATLAS to get a true reflection of each attribute’s influence, rather than an approximation. Model Lay summary Abstract R-1 R-2 R-L DCRS R-1 R-2 R-L DCRS BART 41.68 11.29 38.12 11.27 43.34 13.14 39.80 11.33 +R 43.34 12.03 39.75 10.91 43.49 13.23 39.95 11.12 +BG 42.52 11.71 39.01 11.01 43.74 13.65 40.35 10.98 +CWE 41.58 11.21 38.04 11.28 44.23 13.48 40.56 11.35 +L 44.22 12.21 40.55 10.81 44.83 13.75 41.31 11.03 +L+BG 44.66 12.36 40.96 10.99 45.67 13.78 42.02 11.17 +L+R 44.52 12.10 40.73 10.92 45.54 13.64 41.78 11.21 +L+CWE 44.72 12.41 41.04 10.88 45.87 13.99 42.32 10.10 +L+R+BG 44.82 12.41 41.10 10.97 45.94 14.07 42.32 11.10 +L+R+CWE 44.83 12.39 41.05 10.90 45.60 13.63 41.84 11.21 +L+BG+CWE 45.01 12.56 41.38 10.88 46.04 14.16 42.44 11.06 ATLASOracle 45.22 12.47 41.45 10.91 46.11 14.07 42.51 10.99 Table 3: Ablation study on the ROUGE-based performance of ATLAS under different configurations using true attribute values. “+" denotes aspect addition. L = Length, R = Readability, CWE = Content Word Entropy, BG = Background information. the highest ROUGE scores, particularly for lay summaries. This is to be expected, as lay summaries are known to differ significantly in length between PLOS (avg. 175.6 words) and eLife (avg. 347.6 words). When employing attributes in combination, we can see that the addition of content word entropy control and the subsequent addition of background information control have the greatest benefit to performance for ATLAS with 2 and 3 attributes, respectively. Interestingly, no attribute emerges clearly as the least effective as, although readability score control is the only one not included in the 3 attribute model, its inclusion in the single attribute model has clear benefits for lay summary performance. This provides further evidence that, in combination, our control attributes are able to capture the differences between summary types and effectuate them during generation. Human evaluation. To provide a comprehensive assessment of the summaries generated, we conducted a human evaluation involving our proposed model ATLAS and the strongest baseline model 340 Criteria eLife PLOS BART ATLAS BART ATLAS Comprehensiveness 2.30 2.65 2.00 2.55 Layness 2.60 3.05 2.10 2.45 Factuality 2.20 2.85 2.05 2.40 Table 4: Human evaluation on eLife and PLOS. Mean evaluator ratings (1-5) obtained by BART and ATLAS outputs for each metric. (BART) using two experts.5 Specifically, adopting a similar setting to the original that of Goldsack et al. (2022), we take a random sample of 10 articles from the test split of each dataset. Alongside each model-generated lay summary, judges are presented with both the abstract and reference lay summary of the given article. We choose not to provide judges with the full article text in an effort to minimise the complexity of the evaluation and the cognitive burden placed upon them. Using 1-5 Likert scale, the judges are asked to rate the model output based on three criteria: (1) Comprehensiveness: to what extent does the model output contain the information that might be necessary for a non-expert to understand the high-level topic of the article and the significance of the research; (2) Layness: to what extent is the content of the model output comprehensible (or readable) to a non-expert, in terms of both structure and language; (3) Factuality: to what extent is the model generated lay summary factually consistent with the two other provided summaries (i.e. abstract and reference lay summary).6 Table 4 presents the average ratings from our manual evaluation. We calculate the Cohan Kappa scores to measure inter-rater reliability, where we obtain values of 0.50 and 0.57 for eLife and PLOS, attesting to the reliability of our evaluation. The overall results suggest that our proposed method performs better than the BART baseline in terms of all three criteria on both datasets, attesting to their quality. In terms of layness, the higher layness scores observed in the eLife dataset compared to the PLOS dataset align with the previous analysis for the two datasets from (Goldsack et al., 2022). Moreover, compared to baseline, it is worth noting that our model outputs are judged to produce much more factually correct outputs on both datasets, suggesting our method generates fewer hallucinations. 5Both judges have experience in scientific research and hold at least a bachelor’s degree. 6For example, for the “Layness“ criteria, a score of 5 is equal to “highly lay" and a score of 1, “highly technical". Model FKGL CLI DCRS PLOS ATLAStechnical 15.11 14.21 11.64 ATLASlay 13.22 13.97 11.22 eLife ATLAStechnical 14.77 14.02 11.32 ATLASlay 10.89 11.45 9.17 Table 5: Readability metrics for two versions of ATLAS with highly lay and technical attribute values. Controllability analysis. To assess the extent to which our control attributes enable controllability over the overall layness of the text, we conduct a further analysis using two additional versions of ATLAS with highly lay or technical values. Specifically, we create ATLASlay and ATLAStechnical by selecting the lowest and highest attribute bins, respectively, for which there are at least 100 observations in the training data (for all attributes other than length which is kept constant). We examine how these extreme attributes manifest themselves in generated summaries by calculating the average readability values obtained by the generated summaries for both datasets. We present the results of the analysis in Table 5, which show a significant divergence in the readability values obtained by each model on both datasets. Interestingly, this divergence is substantially wider for summaries generated on eLife, the dataset which is identified by Goldsack et al. (2022) as containing lay summaries that are more “lay” than those of PLOS, suggesting that exposure to more extreme values whilst training on this dataset may enable even greater controllability at inference time.7 5 Conclusion In this paper, we introduce ATLAS, a model for controllable lay summarisation that employs controllable attribute tokens to influence various properties of the generated summary, enabling it to cater to users of different levels of expertise. Using combined datasets for biomedical lay summarisation we perform multiple experiments whereby we confirm the ability of our selected control attributes to discriminate between summary types, demonstrate their effectiveness for controllable lay summarisation, and further investigate their ability to effectuate desired differences during generation. 7Examples of summaries generated by these models are included in the Appendices. 341 Limitations Although our results demonstrate that our selected control attributes are able to effectively capture the characteristics between summary types, it is highly likely that there are additional attributes that we have not explored that could benefit performance for controllable lay summarisation. We plan to explore this in future work, in addition to experimenting with more complex methods for enabling controllability. References Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciBERT: A pretrained language model for scientific text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3615– 3620, Hong Kong, China. Association for Computational Linguistics. Arman Cohan, Iz Beltagy, Daniel King, Bhavana Dalvi, and Dan Weld. 2019. Pretrained language models for sequential sentence classification. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3693–3699, Hong Kong, China. Association for Computational Linguistics. Nachshon Cohen, Oren Kalinsky, Yftah Ziser, and Alessandro Moschitti. 2021. Wikisum: Coherent summarization dataset for efficient human-evaluation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 2: Short Papers), Virtual Event, August 1-6, 2021, pages 212– 219. Association for Computational Linguistics. Franck Dernoncourt and Ji Young Lee. 2017. PubMed 200k RCT: a dataset for sequential sentence classification in medical abstracts. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 308–313, Taipei, Taiwan. Asian Federation of Natural Language Processing. Yue Dong, Andrei Mircea, and Jackie Chi Kit Cheung. 2021. Discourse-Aware unsupervised summarization for long scientific documents. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1089–1102, Online. Association for Computational Linguistics. G. Erkan and D. R. Radev. 2004. LexRank: Graphbased lexical centrality as salience in text summarization. Journal of Artificial Intelligence Research, 22:457–479. Tomas Goldsack, Zheheng Luo, Qianqian Xie, Carolina Scarton, Matthew Shardlow, Sophia Ananiadou, and Chenghua Lin. 2023a. Overview of the biolaysumm 2023 shared task on lay summarization of biomedical research articles. In The 22nd Workshop on Biomedical Natural Language Processing and BioNLP Shared Tasks, pages 468–477, Toronto, Canada. Association for Computational Linguistics. Tomas Goldsack, Zhihao Zhang, Chenghua Lin, and Carolina Scarton. 2022. Making science simple: Corpora for the lay summarisation of scientific literature. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 10589–10604, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Tomas Goldsack, Zhihao Zhang, Chen Tang, Carolina Scarton, and Chenghua Lin. 2023b. Enhancing biomedical lay summarisation with external knowledge graphs. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 8016–8032, Singapore. Association for Computational Linguistics. Yue Guo, Wei Qiu, Yizhong Wang, and Trevor Cohen. 2021. Automated Lay Language Summarization of Biomedical Scientific Reviews. Proceedings of the AAAI Conference on Artificial Intelligence, 35(1):160–168. Stuart R F King, Emma Pewsey, and Sarah Shailes. 2017. Plain-language Summaries of Research: An inside guide to eLife digests. eLife, 6:e25410. Xiangzhe Kong, Jialiang Huang, Ziquan Tung, Jian Guan, and Minlie Huang. 2021. Stylized story generation with style-guided planning. In Findings of the Association for Computational Linguistics: ACLIJCNLP 2021, pages 2430–2436, Online. Association for Computational Linguistics. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 7871–7880. Association for Computational Linguistics. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81, Barcelona, Spain. Association for Computational Linguistics. Zheheng Luo, Qianqian Xie, and Sophia Ananiadou. 2022. Readability controllable biomedical document summarization. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 4667–4680, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. 342 Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 55–60, Baltimore, Maryland. Association for Computational Linguistics. Louis Martin, Éric de la Clergerie, Benoît Sagot, and Antoine Bordes. 2020. Controllable sentence simplification. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 4689– 4698, Marseille, France. European Language Resources Association. Rada Mihalcea and Paul Tarau. 2004. TextRank: Bringing order into text. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 404–411, Barcelona, Spain. Association for Computational Linguistics. Kim Cheng Sheang and Horacio Saggion. 2021. Controllable sentence simplification with a unified textto-text transfer transformer. In Proceedings of the 14th International Conference on Natural Language Generation, pages 341–352, Aberdeen, Scotland, UK. Association for Computational Linguistics. Chen Tang, Tyler Loakman, and Chenghua Lin. 2024. A cross-attention augmented model for event-triggered context-aware story generation. Computer Speech & Language, page 101662. Oisín Turbitt, Robert Bevan, and Mouhamad Aboshokor. 2023. MDC at BioLaySumm task 1: Evaluating GPT models for biomedical lay summarization. In The 22nd Workshop on Biomedical Natural Language Processing and BioNLP Shared Tasks, pages 611– 619, Toronto, Canada. Association for Computational Linguistics. Liqiang Xiao, Lu Wang, Hao He, and Yaohui Jin. 2020. Modeling content importance for summarization with pre-trained language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3606–3611, Online. Association for Computational Linguistics. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019. BERTScore: Evaluating text generation with BERT. A Appendix ChatGPT Baseline Prompts The prompts provided to ChatGPT for each summary type are given in Table 6. To ensure a fair comparison, we control the length of the GPT baselines using the generation arguments, (e.g., max_new_tokens). Note that we differentiate between the lay summary types (namely, PLOS and eLife) based on distinctions made by Goldsack et al. (2022), who recognise PLOS’ summaries as the less “lay" of the two, making them better suited to an audience with some technical knowledge. 343 Summary Type Prompt Abstract Summarize the following article for an expert audience that is familiar with the technical aspects of the content PLOS lay summary Summarize the following article for a non-expert audience that has some familiarity with the technical aspects of the content eLife lay summary Summarize the following article for a non-expert audience that has no familiarity with the technical aspects of the content Table 6: Prompts used for the GPT3.5-zs baseline for each summary type. eLife ATLAStechnical The effects of muscle fatigue on motor learning under fatigue are poorly understood. Here, we investigated the effect of fatigue on learning under a sequential pinch force task. Irrespective of whether the observed fatigue effects are domain-specific or present in another task that is cognitive demanding but requires minimal force control, we found that participants had impaired skill learning in both the fatigued and unfatigued effector. We replicated the findings of experiment 1 and found that disruption of rTMS to the motor cortex (Cantarero et al ., 2013a) alleviated the adverse effects of fatigue. Cortical excitability was similar to that observed in the untrained effector, but not in the unfatigued. Altogether, our findings suggest that motor fatigue has a domain-dependent lasting effect on skill learning. Future studies should focus on understanding the role of motor cortex excitability in the acquisition of motor skills under fatigue, as well as the potential role for maladaptive memory formation under fatigued conditions. Cortical and motor cortices should be included in training and rehabilitation regimens geared to improve motor skill acquisition. ATLASlay Muscle fatigue is a neuromuscular phenomenon that can impair performance over time. People who experience fatigue tend to be less able to learn a new motor skill than people who experience no fatigue. However, it is not clear how fatigue affects the ability of people to learn new motor skills . One way to study the effects of fatigue is to study how people learn a motor skill under fatigue conditions. One of the main challenges in studying motor learning under fatigue is the so-termed “performance-learning" distinction In this study, participants were asked to practice a motor task over two days and then had to catch up to the skill performance level of the non-fatigued group. Unexpectedly , participants who were only fatigued at the end of the training were less likely to learn the motor skill. This suggests that fatigue has a domain-specific lasting effect on the learning of a skill. ernas et al. now show that people who are unable to recover the motor task under fatigue are more likely to be unable to learn their motor skill when they are not fatigued. The experiments show that when people are trained to perform the task, their ability to recover from fatigue is severely impaired. This effect is due to a change in the strength of the motor cortex, a region of the brain that is involved in learning and memory. Figure 2: An case study from the eLife test set comparing summaries generated under highly lay and technical attribute values (with the length attribute being kept constant). 344 PLOS ATLAStechnical In this paper, we explore the conditions under which associations between antigenic, metabolic and virulence properties of strains within pneumococcal populations and predict how these may shift under vaccination. In this work , we use a conceptual framework to investigate the dynamics of associations between serotype, serotype and serotype-specific immunity in pneumococcus populations. We find that antigenic type (AT) is the principal determinant of non-capsular virulence factors (VF) , whereas MT is the major determinant. AT and MT are highly non-random; MT and AT are co-evolved and co-expressed. ET and CT are also found to be highly correlated, suggesting that they have synergistically adapted to a particular metabolic niche. IT and LD are found to have similar patterns of linkage disequilibrium (LD) than randomly selected genes not associated with metabolic/transport processes; AT is associated with a higher frequency of LD LD than MT LD; CT LD=0.013). CT is the first mathematical model to explain the non-overlapping association between serotypic and serotypes. TCT BC LD is a useful tool for predicting the potential impact of vaccination on the prevalence of serotypes associated with non-vaccine serotypes and for predicting how they may change under vaccination and vaccine serotype replacement. ATLASlay Pneumococcal populations are highly diverse in non-antigenic genes and are commonly classified into sequence types (ST) by Multi Locus Sequence Typing (MLST) of seven metabolic housekeeping genes. STs have been documented to occur regularly throughout the past 7 decades, yet many studies (eg) show an intriguing pattern of largely non-overlapping associations between serotype and ST. It has been noted that many STs that were previously associated with vaccine serotypes now occur in association with non-vaccine serotypes. It has been proposed that a combination of immune-mediated interference between identical antigenic types and direct competition between identical metabolic types can generate non-overlapping association between antigenic and STs in populations of the bacterial pathogen Neisseria meningitidis . In this paper, we explore whether pneumococcal population structure, can be explained within a similar conceptual framework. in which pathogen strains are profiled by antigenic type, AT, metabolic type (MT) and additional non-capsular virulence factors (VF). Figure 3: An case study from the eLife test set comparing summaries generated under highly lay and technical attribute values (with the length attribute being kept constant). 345
2024_short
32
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 346–355 August 11-16, 2024 ©2024 Association for Computational Linguistics EmbSpatial-Bench: Benchmarking Spatial Understanding for Embodied Tasks with Large Vision-Language Models Mengfei Du1*, Binhao Wu1*, Zejun Li1, Xuanjing Huang2, Zhongyu Wei1† 1School of Data Science, Fudan University, China 2School of Computer Science, Fudan University, China {mfdu22, bhwu22}@m.fudan.edu.cn {zejunli20, xjhuang, zywei}@fudan.edu.cn Abstract The recent rapid development of Large VisionLanguage Models (LVLMs) has indicated their potential for embodied tasks. However, the critical skill of spatial understanding in embodied environments has not been thoroughly evaluated, leaving the gap between current LVLMs and qualified embodied intelligence unknown. Therefore, we construct EmbSpatial-Bench, a benchmark for evaluating embodied spatial understanding of LVLMs. The benchmark is automatically derived from embodied scenes and covers 6 spatial relationships from an egocentric perspective. Experiments expose the insufficient capacity of current LVLMs (even GPT4V). We further present EmbSpatial-SFT, an instruction-tuning dataset designed to improve LVLMs’ embodied spatial understanding. 1 Introduction Embodied AI is the frontier direction of generalpurpose AI systems, requiring intelligent agents to understand instructions, perceive physical environments, plan and execute actions to accomplish corresponding tasks (Anderson et al., 2018). Recently, LLM-based large vision-language models (LVLMs) have demonstrated powerful capabilities in following instructions and performing planning based on the visual contexts (Li et al., 2023b; Zhu et al., 2023; OpenAI, 2023), paving a promising path for the development of embodied AI systems. However, recent studies have revealed significant deficiencies of LVLMs in understanding visual contents (Li et al., 2023c). In terms of embodied scenarios, the ability to understand spatial relationships between objects is particularly vital for agents to effectively interact with the environment (Anderson et al., 2018; Padmakumar et al., 2022). Evaluating and enhancing such capabilities of LVLMs is essential for constructing LVLM-driven embodied *Equal contribution †Corresponding author [False] The zebra is right of the person. [True] The zebra is left of the person. Existing Benchmarks Question: How are curtain and shelves positioned in relation to each other in the image? Options: A. The curtain is left of the shelves. B. The curtain is under the shelves. C. The curtain is right of the shelves. D. The curtain is out of the shelves. EmbSpatial-Bench Universal Image Source: VG, COCO [True] A dog to the right of a bench. [False] A dog to the left of a bench. Embodied 3D Scene Figure 1: Comparison between EmbSpatial-Bench and existing benchmarks for spatial understanding. Existing benchmarks may determine spatial relationships based on a coordinate system centered on the subject in the image (upper right), whereas EmbSpatial-Bench consistently determines them from an egocentric perspective. agents. Yet, existing benchmarks are not suitable for accurately assessing such capabilities. In this paper, we argue that two important features should be considered for excellent evaluation of spatial understanding abilities in embodied tasks. First, the spatial relationships should be described from the egocentric perspective, for the reason that agents take themselves as the center of coordinates to follow instructions and infer decisions in embodied tasks. However, previous benchmarks for spatial understanding (Liu et al., 2023a) tend to depict spatial relationships from the perspective of subject within images, as illustrated in Figure 1. Second, the visual scenes for evaluation should be consistent with that in embodied tasks. Nevertheless, existing benchmarks (Liu et al., 2023a; Kamath et al., 2023) are mainly constructed from universal image-text datasets like MSCOCO (Lin et al., 2014) and VG (Krishna et al., 2017) which are weakly related to embodied scenarios. To meet aforementioned requirements, we establish EmbSpatial-Bench, a benchmark for evaluating spatial understanding abilities of LVLMs in embodied environments. As shown in Figure 1, we focus on six spatial relationships described from the ego346 Spatial Relation Extraction Horizon: x • (obj1, on left of, obj2) • (obj1, on right of, obj2) • (obj1, above, obj2) • (obj1, below, obj2) • (obj1, closer, obj2) • (obj1, farther, obj2) EmbSpatial Bench Human Verification QA Generation & Filter Spatial Images Collection Depth: z Vertical: y Figure 2: Overview of the construction pipeline for EmbSpatial-Bench based on existing annotated 3D environments. centric perspective, including above, below, left, right, close and far, which completely covers three dimensions of the coordinates. The benchmark is organized into the format of multiple-choice questions. The images used for evaluation are directly collected from embodied 3D scenes, namely MP3D (Chang et al., 2017), AI2-THOR (Kolve et al., 2017) and ScanNet (Dai et al., 2017). Based on EmbSpatial-Bench, various LVLMs have been assessed. Experimental results indicate the poor embodied spatial understanding of current LVLMs, including GPT-4V (OpenAI, 2023) and Qwen-VL-Max (Bai et al., 2023). To address the issue, we further construct an instruction-tuning dataset, EmbSpatial-SFT, to empower LVLMs with embodied spatial understanding ability. LVLMs fine-tuned on EmbSpatial-SFT consistently demonstrate improved spatial perception abilities across different scenarios. 1 2 EmbSpatial-Bench Unlike existing benchmarks built on 2D images (Liu et al., 2023a), EmbSpatial-Bench is constructed from 3D scenes. Figure 2 illustrates the construction pipeline. We first generate target images from 3D scenes and extract spatial relations among objects. Then, we generate QA pairs and conduct filtering. Section 2.1 provides detailed explanations of each part, while Section 2.2 offers statistics of the benchmark. 2.1 Dataset Construction Spatial Image Sources. Current embodied 3D simulators offer comprehensive annotations for tasks such as visual navigation (Chang et al., 2017) and room rearrangement (Weihs et al., 2021), mak1https://github.com/mengfeidu/ EmbSpatial-Bench ing them ideal for constructing a challenging benchmark to evaluate embodied spatial understanding. Therefore, we choose MP3D (Chang et al., 2017), ScanNet (Dai et al., 2017) and AI2-THOR (Kolve et al., 2017). Specifically, we utilize the test scenes from MP3D and validation scenes from ScanNet and A. Within each 3D scene, we randomly select viewpoints and capture the corresponding RGB-D images accordingly. In AI2-THOR, we select 7 types of household tasks from ALFRED (Shridhar et al., 2020), spanning 93 different scenes. During task execution, we identify key RGB-D images based on the dataset’s PDDL (Aeronautiques et al., 1998) annotations.(See Appendix A). Spatial Relation Extraction. Instead of relying on object detectors (Tejas et al., 2023), we extract spatial relations directly from well-annotated 3D datasets. For each object in each image, we can utilize the camera parameters along with the corresponding 3D coordinates to obtain its 2D coordinates in the image (in the form of bounding boxes). With the 2D annotations, we extract the spatial relation triples with non-overlapping bounding boxes. We consider six spatial relationships from the viewer’s perspective: above, below, left, right, close and far. For the first four types, we determine the spatial relation based on position of the entire bounding boxes. For instance, if the entire bounding box of object A is located to the left of object B, we consider the relationship between A and B as A is left of B. For the other two types, we use the average depth within the bounding box to determine which object is farther or closer. QA Generation. The format of our benchmark is multiple-choice questions, a widely adopted approach in various LVLM benchmarks (Liu et al., 2023c; Li et al., 2023d). For the relations above, 347 chair bowl window table cabinet knife picture desk television sink shelves tomato curtain sofa lamp pan lettuce mug potato bed lettuce slice mirror pillow box bag door toilet tomato slice cup counter 100 150 200 250 300 350 400 450 500 Object Count Figure 3: Distribution of top 30 object categories. Data Source #QA Pairs #Image #Object #Scene Matterport3D 1,201 928 133 26 AI2-THOR 1,239 683 95 93 ScanNet 1,200 570 35 175 Overall 3,640 2,181 294 277 Table 1: Dataset Statistics of EmbSpatial-Bench below, left and right, we design 5 templates to generate questions asking spatial relations between objects, with unrelated relations provided as false options. For the relations far and close, we aggregate the relation triples for each image and generate questions for identifying the farthest or closest one among the given objects in the image. Filtering and Human Verification. To ensure the reliability of our benchmark, we initially filter out QA pairs with overly large or small bounding boxes, while maintaining a balanced distribution of spatial relations. Subsequently, we check each sample and remove the inappropriate questions that referring unclear objects or wrong spatial relationships. See appendix A.4 for more filtering and human verification details. 2.2 Dataset Statistics As shown in Table 1, the constructed benchmark comprises a total of 3,640 QA pairs, covering 294 object categories and 6 spatial relationships. The distribution of top 30 object categories can be observed in Figure 3. The set of objects is the collection among samples from three embodied datasets. Indoor objects such as "chair", "bowl" and "window" are the most frequent across different scenes. The distribution of most common spatial relation triples are depicted in Figure 4, highlighting the diversity of the combination of object spatial relations present in our benchmark. We also maintain a balanced distribution of spatial relations (details in Appendix A). The diversity and balance of the data enhance the reliability of our benchmark. Figure 4: The top 10 most common triples from each spatial relation in EmbSpatial-Bench. 3 EmbSpatial-SFT To further improve LVLMs’ capacity in embodied spatial understanding, we construct an instructiontuning dataset, EmbSpatial-SFT, which provides QA data for two tasks: spatial relationship identification and object localization. The former task setting is consistent with EmbSpatial-Bench, while the latter serves as an auxiliary task to enhance the model’s ability to ground target objects. The auxiliary task can be considered as the foundational skill for relationship identification. EmbSpatial-SFT is solely built on the training split of MP3D. In this way, we can still conduct zero-shot evaluations of the instruction-tuned models using data from the other two scenes in EmbSpatial-Bench. Spatial Relation Identification. Following the automatic pipeline in Section 2, We construct 25K training samples for spatial relation identification. Object Localization. Based on the coordinates of objects in 2D images, we construct object localization data in the form of the object grounding task (Kazemzadeh et al., 2014). The model is supposed to answer the location of inquired objects. The location is represented in the textual format of bounding boxes, following Chen et al. (2023a). 348 Model Generation Likelihood BLIP2 (2023b) 37.99 35.71 InstructBLIP (2023) 38.85 33.41 Cheetor (2023a) 24.56 32.80 Lynx (2023) 29.09 41.62 mPlugOwl (2023) 24.12 27.42 ImagebindLLM (2023) 26.46 33.46 Shikra (2023b) 28.38 34.75 MiniGPT4 (2023) 23.54 31.70 MiniGPT-v2 (2023a) 23.93 43.85 LLaVA-1.6 (2023b) 35.19 38.84 GPT-4V (2023) 36.07 Qwen-VL-Max (Bai et al., 2023) 49.11 Human 90.33 Table 2: Zero-shot performance (Acc%) of LVLMs in EmbSpatial-Bench. Bold indicates the best results. 4 Experiments 4.1 Experimental Setup Based on EmbSpatial-Bench, we conduct zeroshot evaluation of current LVLMs, using accuracy as the metric. Two evaluation strategies are employed. The first one is the generation-based strategy, which directly uses predicted options from the textual outputs of models. Considering the insufficient instruction-following ability of some LVLMs, we also employed a likelihood strategy, using the option with the highest probability generated by the model (Li et al., 2023d). Please refer to Appendix B for more evaluation details. 4.2 Zero-shot Performance Table 2 presents the zero-shot performance of 10 open-source LVLMs and 2 closed-source models. The results indicate that current LVLMs, including powerful closed-source models like GPT-4V and Qwen-VL-Max, have not demonstrated satisfactory spatial understanding abilities in embodied scenes. The best performance among all LVLMs merely reaches an accuracy of 49.11% (Generation) or 43.85% (Likelihood) which is significantly lower than human performance (90.33%). We present failure cases of GPT-4V in Appendix C, revealing its poor abilities of both object localization and spatial relation identification. The versions of these models can be found in Appendix B.3. 4.3 Instruction Tuning on EmbSpatial-SFT Furthermore, we fine-tune MiniGPT-v2 on EmbSpatial-SFT, to explore whether the data could further enhance the model’s spatial understanding capabilities. The trainable parameters include the visual connection module and LoRA (Hu et al., 2021) modules in the LLM backbone. Model In-Domain Out-Domain All MP3D AI2-THOR ScanNet Generation MiniGPT-v2 (2023a) 23.31 20.58 28.00 23.93 Finetuned MiniGPT-v2 31.64 34.06 33.17 32.97 w/o LoRA 26.81 25.26 23.25 25.11 w/o OL 34.22 31.40 31.92 32.50 Likelihood MiniGPT-v2 (2023a) 46.71 41.97 42.92 43.85 Finetuned MiniGPT-v2 80.52 73.69 80.25 78.10 w/o LoRA 48.38 38.90 44.17 43.76 w/o OL 80.35 72.15 79.67 77.34 Table 3: Performance (Acc%) of MiniGPT-v2 tuned on EmbSpatial-SFT. OL stands for object localization while w/o LoRA indicates that only the connection module is fine-tuned. Bold indicates the best results. Main Results. According to Table 3, under the likelihood evaluation strategy, learning from EmbSpatial-SFT consistently improves the performance across both in-domain and out-domain environments, with an increase of 34.25% in the overall accuracy. Though not as significant as that under likelihood strategy, the evaluated results under generation strategy still demonstrate an adequate performance improvement (+9.04% overall) after instruction-tuning. The improvement in AI2THOR is less than in ScanNet, which we attribute to AI2-THOR primarily consisting of simulated scenes, unlike the real-world scenarios in MP3D and ScanNet. Ablations. We further validate the effectiveness of finetuning LLM backbone with LoRA and the auxiliary object localization data. As shown in Table 3, tuning the LLM backbone with LoRA significantly contributes to the performance across all scenarios compared to the variant with a frozen LLM backbone. This phenomenon implies the necessity for the LLM backbone to learn corresponding reasoning abilities for spatial understanding, rather than solely adjusting the input visual representations. The auxiliary data also contribute to the performance across different embodied environments, leading to an overall improvement of 0.47% and 0.76% under generation strategy and likelihood strategy, respectively. 5 Related Works Large Vision-Language Models The prevalent LVLMs (Dai et al., 2023; Zeng et al., 2023) learn visual representations from abundant image-text interleaved datasets with a lightweight connection module. Further works (Tsai et al., 2023; Zheng et al., 2023) fine-tunes LVLMs-based architecture 349 and obtain acceptable performance on embodied tasks, which preliminarily reveal the potential of LVLMs as embodied intelligence. However, these works neither evaluate nor empower LVLMs with spatial understanding ability, which is essential for various embodied tasks. Benchmarks for Spatial Understanding. While there are numerous universal benchmarks available for LVLMs (Xu et al., 2023; Fu et al., 2023; Li et al., 2023d), dedicated benchmarks for evaluating spatial understanding remain scarce. VSR (Liu et al., 2023a) typically examines spatial relationships from the perspective of the subject within the image. What’sUp (Kamath et al., 2023) addresses data bias and generates uncluttered images to eliminate interference from unrelated objects. SR2D (Tejas et al., 2023) focuses on evaluating textto-image generative model. However, all of them are built on COCO (Veit et al., 2016) or VG (Krishna et al., 2017) which are not consistent with the embodied scenarios. This lack of specialized benchmarks leaves the spatial understanding capabilities of LVLMs in embodied tasks unexplored. 6 Conclusion In this work, we propose EmbSpatial-Bench, a benchmark to evaluate embodied spatial understanding of LVLMs. The evaluation results reveal the weak spatial understanding ability of current popular LVLMs. We further propose EmbSpatialSFT, an instruction tuning dataset to enhance the capacity of LVLMs. Extensive experiments valid the effectiveness of each data component in our EmbSpatial-SFT, with the goal of empowering the spatial understanding ability of LVLMs. Limitations Spatial understanding in embodied environments is a crucial aspect of LVLMs’ capabilities for embodied tasks. In this study, we advance towards this goal by constructing benchmark and instructiontuning datasets from well-annotated 3D embodied datasets. These datasets are derived from three widely used indoor embodied datasets, which may restrict their suitability for outdoor environments. Additionally, our study only investigates the English language, thus limiting the generalizability of the benchmark and findings to other languages. Ethical Considerations The benchmark and instruction-tuning data are built from publicly available embodied datasets, which include either photorealistic scenes or generated rendered scenes without any copyright issues. Besides, our data source does not contain any personal data, uniquely identifiable individuals, or offensive content. Acknowledgements This work is supported by National Natural Science Foundation of China (No. 62176058) and National Key R&D Program of China (2023YFF1204800). The project’s computational resources are supported by CFFF platform of Fudan University. References Constructions Aeronautiques, Adele Howe, Craig Knoblock, ISI Drew McDermott, Ashwin Ram, Manuela Veloso, Daniel Weld, David Wilkins Sri, Anthony Barrett, Dave Christianson, et al. 1998. Pddl| the planning domain definition language. Technical Report, Tech. Rep. Peter Anderson, Qi Wu, Damien Teney, Jake Bruce, Mark Johnson, Niko Sünderhauf, Ian Reid, Stephen Gould, and Anton Van Den Hengel. 2018. Visionand-language navigation: Interpreting visuallygrounded navigation instructions in real environments. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3674– 3683. Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, and Jingren Zhou. 2023. Qwen-vl: A frontier large vision-language model with versatile abilities. arXiv:2308.12966. Angel Chang, Angela Dai, Thomas Funkhouser, Maciej Halber, Matthias Niessner, Manolis Savva, Shuran Song, Andy Zeng, and Yinda Zhang. 2017. Matterport3d: Learning from rgb-d data in indoor environments. arXiv preprint arXiv:1709.06158. Jun Chen, Deyao Zhu, Xiaoqian Shen, Xiang Li, Zechun Liu, Pengchuan Zhang, Raghuraman Krishnamoorthi, Vikas Chandra, Yunyang Xiong, and Mohamed Elhoseiny. 2023a. Minigpt-v2: large language model as a unified interface for vision-language multi-task learning. arXiv preprint arXiv:2310.09478. Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, and Rui Zhao. 2023b. Shikra: Unleashing multimodal llm’s referential dialogue magic. arXiv preprint arXiv:2306.15195. Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan 350 Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. 2023. Vicuna: An opensource chatbot impressing gpt-4 with 90%* chatgpt quality. Angela Dai, Angel X. Chang, Manolis Savva, Maciej Halber, Thomas Funkhouser, and Matthias Nießner. 2017. Scannet: Richly-annotated 3d reconstructions of indoor scenes. In Proc. Computer Vision and Pattern Recognition (CVPR), IEEE. Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, and Steven Hoi. 2023. Instructblip: Towards general-purpose vision-language models with instruction tuning. Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Zhenyu Qiu, Wei Lin, Jinrui Yang, Xiawu Zheng, et al. 2023. Mme: A comprehensive evaluation benchmark for multimodal large language models. arXiv preprint arXiv:2306.13394. Peng Gao, Jiaming Han, Renrui Zhang, Ziyi Lin, Shijie Geng, Aojun Zhou, Wei Zhang, Pan Lu, Conghui He, Xiangyu Yue, et al. 2023. Llama-adapter v2: Parameter-efficient visual instruction model. arXiv preprint arXiv:2304.15010. Jiaming Han, Renrui Zhang, Wenqi Shao, Peng Gao, Peng Xu, Han Xiao, Kaipeng Zhang, Chris Liu, Song Wen, Ziyu Guo, et al. 2023. Imagebind-llm: Multi-modality instruction tuning. arXiv preprint arXiv:2309.03905. Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685. Amita Kamath, Jack Hessel, and Kai-Wei Chang. 2023. What’s" up" with vision-language models? investigating their struggle with spatial reasoning. arXiv preprint arXiv:2310.19785. Sahar Kazemzadeh, Vicente Ordonez, Mark Matten, and Tamara Berg. 2014. Referitgame: Referring to objects in photographs of natural scenes. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 787– 798. Eric Kolve, Roozbeh Mottaghi, Winson Han, Eli VanderBilt, Luca Weihs, Alvaro Herrasti, Matt Deitke, Kiana Ehsani, Daniel Gordon, Yuke Zhu, et al. 2017. Ai2-thor: An interactive 3d environment for visual ai. arXiv preprint arXiv:1712.05474. Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. 2017. Visual genome: Connecting language and vision using crowdsourced dense image annotations. International journal of computer vision, 123:32–73. Juncheng Li, Kaihang Pan, Zhiqi Ge, Minghe Gao, Hanwang Zhang, Wei Ji, Wenqiao Zhang, TatSeng Chua, Siliang Tang, and Yueting Zhuang. 2023a. Fine-tuning multimodal llms to follow zeroshot demonstrative instructions. arXiv preprint arXiv:2308.04152. Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. 2023b. Blip-2: Bootstrapping language-image pretraining with frozen image encoders and large language models. arXiv preprint arXiv:2301.12597. Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. 2022. Blip: Bootstrapping language-image pretraining for unified vision-language understanding and generation. In International Conference on Machine Learning, pages 12888–12900. PMLR. Yifan Li, Yifan Du, Kun Zhou, Jinpeng Wang, Wayne Xin Zhao, and Ji-Rong Wen. 2023c. Evaluating object hallucination in large vision-language models. arXiv preprint arXiv:2305.10355. Zejun Li, Ye Wang, Mengfei Du, Qingwen Liu, Binhao Wu, Jiwen Zhang, Chengxing Zhou, Zhihao Fan, Jie Fu, Jingjing Chen, et al. 2023d. Reform-eval: Evaluating large vision language models via unified re-formulation of task-oriented benchmarks. arXiv preprint arXiv:2310.02569. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740–755. Springer. Fangyu Liu, Guy Emerson, and Nigel Collier. 2023a. Visual spatial reasoning. Transactions of the Association for Computational Linguistics, 11:635–651. Haotian Liu, Chunyuan Li, Yuheng Li, and Yong Jae Lee. 2023b. Improved baselines with visual instruction tuning. arXiv preprint arXiv:2310.03744. Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, et al. 2023c. Mmbench: Is your multi-modal model an all-around player? arXiv preprint arXiv:2307.06281. OpenAI. 2023. Gpt-4 technical report. arXiv preprint arXiv:2303.08774. Aishwarya Padmakumar, Jesse Thomason, Ayush Shrivastava, Patrick Lange, Anjali Narayan-Chen, Spandana Gella, Robinson Piramuthu, Gokhan Tur, and Dilek Hakkani-Tur. 2022. Teach: Task-driven embodied agents that chat. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 2017–2025. Mohit Shridhar, Jesse Thomason, Daniel Gordon, Yonatan Bisk, Winson Han, Roozbeh Mottaghi, Luke Zettlemoyer, and Dieter Fox. 2020. Alfred: A benchmark for interpreting grounded instructions for everyday tasks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10740–10749. 351 Tejas, Hamid Gokhale, Besmira Palangi, Vibhav Nushi, Eric Vineet, Ece Horvitz, Chitta Kamar, Yezhou Baral, and Yang. 2023. Benchmarking spatial relationships in text-to-image generation. arXiv preprint arXiv:2212.10015. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288. Yao-Hung Hubert Tsai, Vansh Dhar, Jialu Li, Bowen Zhang, and Jian Zhang. 2023. Multimodal large language model for visual navigation. arXiv preprint arXiv:2310.08669. Andreas Veit, Tomas Matera, Lukas Neumann, Jiri Matas, and Serge Belongie. 2016. Coco-text: Dataset and benchmark for text detection and recognition in natural images. arXiv preprint arXiv:1601.07140. Luca Weihs, Matt Deitke, Aniruddha Kembhavi, and Roozbeh Mottaghi. 2021. Visual room rearrangement. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Peng Xu, Wenqi Shao, Kaipeng Zhang, Peng Gao, Shuo Liu, Meng Lei, Fanqing Meng, Siyuan Huang, Yu Qiao, and Ping Luo. 2023. Lvlm-ehub: A comprehensive evaluation benchmark for large visionlanguage models. arXiv preprint arXiv:2306.09265. Qinghao Ye, Haiyang Xu, Guohai Xu, Jiabo Ye, Ming Yan, Yiyang Zhou, Junyang Wang, Anwen Hu, Pengcheng Shi, Yaya Shi, et al. 2023. mplug-owl: Modularization empowers large language models with multimodality. arXiv preprint arXiv:2304.14178. Yan Zeng, Hanbo Zhang, Jiani Zheng, Jiangnan Xia, Guoqiang Wei, Yang Wei, Yuchen Zhang, and Tao Kong. 2023. What matters in training a gpt4-style language model with multimodal inputs? arXiv preprint arXiv:2307.02469. Duo Zheng, Shijia Huang, Lin Zhao, Yiwu Zhong, and Liwei Wang. 2023. Towards learning a generalist model for embodied navigation. arXiv preprint arXiv:2312.02010. Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny. 2023. Minigpt-4: Enhancing vision-language understanding with advanced large language models. arXiv preprint arXiv:2304.10592. Appendix A Dataset Details A.1 AI2-THOR Image Selection Due to the significant similarity between many images in the observation sequences for each task in AI2-THOR, filtering is necessary. Based on the detailed PDDL annotations from ALFRED (Shridhar et al., 2020), we select key images that show significant content changes after each sub-goal is reached as our benchmark image resources. A.2 Dataset Statistics The wordcloud of object categories can be observed in Figure 5. The distribution of questions for each spatial relation is illustrated in Figure 6. The diversity and balance of the data enhance to the reliability of our benchmark. A.3 Data Cases Three samples of EmbSpatial-Bench constructed from MP3D (Chang et al., 2017), AI2-THOR (Kolve et al., 2017) and ScanNet (Dai et al., 2017) are shown in Fig. 7, Fig. 8 and Fig. 9. A.4 Filtering and Verification Initially, we will implement two primary filtering processes to enhance the robustness and quality of our benchmark. First, we filter out objects with excessively large or small bounding boxes. To exclude improperly displayed objects, we filter out spatial relationship triplets where the length or width of the bounding box is less than 50 or greater than half the length of the corresponding dimension of the image. After automated construction and filtering processes, the human verification is implemented to further ensure the correctness of our benchmark. Specifically, the correctness of each sample is examined by human from several aspects: 1) the objects involved in the question can be identified in the image uniquely and clearly; 2) the target object conforms to the described spatial relationship; 3) the negative options are indeed incorrect objects or relationships. Any sample that does not meet either of these conditions is discarded. Appendix B Experiments B.1 Experimental Details Implementation details. We use MiniGPTv2 (Chen et al., 2023a) as a baseline LVLM for investigation. The architecture of MiniGPT-v2 comprises three components, including a vision encoder 352 Figure 5: Wordcloud of object categories. Figure 6: Distribution of spatial relationships in EmbSpatial-Bench. , a linear connection layer and a large language model. We initialize the model parameters with the official checkpoint after its instruction-tuning. We finetune the connection layer and the large language model of MiniGPT-v2 with LoRA (Hu et al., 2021). In our implementation, we set the LoRA rank, Rr = 64 and scaling factor, Rα = 16. Training and hyper-parameters. We adopt AdamW optimizer with a cosine learning rate scheduler during the finetune process. The model is finetuned for 25,000 steps on 4xV100 GPUs with a initial learning rate of 1e-5, a minimum learning rate of 1e-6, a warmup learning rate of 1e-6 and a global batch size of 16. The finetuning stage lasts around 10 hours. B.2 Evaluation Strategy Following the evaluation approach (Li et al., 2023d), we evaluate LVLMs with generation and likelihood strategy. The likelihood strategy relies on LVLMs’ intrinsic nature as generative models and separates their instruction-following capacity from the capacity being evaluated. Given the image v, the question q, and N options C = {ci}N i=1, the prediction can be determined by the generation likelihood of LVLM: ˆc = arg max ci∈C Pθ(ci|v, q) (1) where Pθ(ci|v, q) is parameterized by the causalLLM-based LVLMs. The generation strategy extracts the option mark from generated textual output as predicted option. B.3 Models We select 10 open-source and 2 closed-source LVLMs for a comprehensive evaluation, including BLIP2 (Li et al., 2022), InstructBLIP (Dai et al., 2023), Cheetor (Li et al., 2023a), Lynx (Zeng et al., 2023), mPlugOwl (Ye et al., 2023), ImagebindLLM (Han et al., 2023), Shikra (Chen et al., 2023b), MiniGPT4 (Zhu et al., 2023), MiniGPT-v2 (Chen et al., 2023a), LLaVA-1.6 (Liu et al., 2023b), GPT-4V (OpenAI, 2023), Qwen-VLMax (Bai et al., 2023). Among the open-source models, BLIP2 and InstructBLIP have the Flant5 LLM backbones. The LLM backbone of Cheetor, Lynx, MiniGPT4 and LLaVA1.6 is Vicuna (Chiang et al., 2023). mPlugOwl chooses LLaMA (Gao et al., 2023) as backbone and MiniGPTv2 choooses LLaMA2 (Touvron et al., 2023) as backbone. All experimental open-source models have a parameter size of approximately 7B. We select version of “gpt-4-1106-vision-preview” for GPT-4V. B.4 Main Results of Each Spatial Relation We have analyse the models’ performance before and after instruct-tuning on different spatial relations, as shown in the table 4. After instruct-tuning on EmbSpatial-SFT, MiniGPT-v2 significantly improved or maintained comparable accuracy on various spatial relationship categories across different environments. In the likelihood evaluation, compared to the horizontal and vertical dimensions, performance in the depth dimension is significantly lower. We attribute this to the training data of LVLMs lacking depth estimation and the need to identify four objects in complex scenes, instead of just two objects in the other two dimensions. In the generation evaluation, both MiniGPT-v2 and the fine-tuned model perform poorly. Improving generation performance of open-source models remains an open question for further exploration. Appendix C GPT-4V Cases Utilizing the strong instruction following ability of GPT-4V, we delved deeper into the possible reasons for the poor performance of current LVLMs. Inspired by the two processes decoupling from spatial understanding, we prompt GPT-4V to inspect 353 Model In-Domain Out-Domain MP3D AI2THOR ScanNet above below left right close far above below left right close far above below left right close far Generation MiniGPT-v2 31.22 26.90 24.76 20.48 21.29 15.54 25.60 23.41 18.93 16.19 25.24 13.93 31.00 33.00 30.00 22.50 29.50 22.00 Finetuned MiniGPT-v2 38.62 46.19 22.86 23.33 34.65 25.39 36.23 49.76 28.16 26.67 34.76 28.86 39.00 48.00 27.50 23.00 28.50 33.00 Likelihood MiniGPT-v2 91.01 76.65 30.95 30.48 25.74 29.53 79.71 62.93 30.58 25.24 32.38 20.9 78.50 73.50 28.00 32.50 27.00 18.00 Finetuned MiniGPT-v2 92.59 91.88 84.29 82.38 71.78 60.10 93.72 88.78 83.50 80.95 50.00 44.77 90.50 89.00 89.50 90.50 56.50 65.50 Table 4: Performance (Acc%) of MiniGPT-v2 and fine-tuned MiniGPT-v2 across different spatial relations. Question: How are curtain and shelves positioned in relation to each other in the image? Options: A. The curtain is left of the shelves. B. The curtain is under the shelves. C. The curtain is right of the shelves. D. The curtain is out of the shelves. Question: From your perspective, which object in the image is at the shortest distance? Options: A. table. B. chair. C. sculpture. D. fireplace. Figure 7: Data samples from Matterport3D. Question: What is the spatial arrangement of pan and pepper shaker in the image concerning each other? Options: A. The pan is left of the pepper shaker. B. The pan is blocking the pepper shaker. C. The pan is inside the pepper shaker. D. The pan is right of the pepper shaker. Question: Which object from the list is situated at the largest distance from your point of view within the image? Options: A. potato. B. bowl. C. pot. D. bin. Figure 8: Data samples from AI2-THOR. whether object localization or spatial relationships determination becomes a bottleneck. As shown in Figure 10, the GPT-4V not only makes mistakes in object positioning, but also misjudge their spatial relationship when successfully localizing the objects involved. In the first case (left part), GPT-4V mistakenly positions the clock in top left corner to the top right corner, further leading to the incorrect selection of option with the word "right". In the second case (right part), GPT-4V successfully locates the positions of all object referred in the question, but incorrectly choose the pillow as the nearest object rather than the bed. The case study demonstrate the potential room for improvement in Question: How are television and shelf positioned in relation to each other in the image? Options: A. The television is right of the shelf. B. The television is under the shelf. C. The television is above the shelf. D. The television is left of the shelf. Question: Which object from the list is situated at the shortest distance from your point of view within the image? Options: A. window. B. lamp. C. curtain. D. clothes. Figure 9: Data samples from ScanNet. both two processes. 354 System: Assume you are a viewer seeing current observation. You are supposed to understand the spatial relationships among several objects. The spatial relationships should be described in the viewer's perspective. You need to select the option to answer the question below: Question: In the image, how do the positions of television and clock interact with each other? Options: (A) The television is under the clock. (B) The television is outside the clock. (C) The television is blocking the clock. (D) The television is at the right side of the clock. 1. Please first describe the position of television and clock respectively in the image. 2. Please choose the option to answer the question above with the objects' positions determined in the previous step. 1. In the image, the television is situated near the bottom left corner. The clock is visible on the wall closer to the top right corner, above and to the right of the television. 2. Based on the positions determined in the previous step, the correct option is: (D) The television is at the right side of the clock. Question: Which object, in relation to your current position, holds the nearest placement in the image? Options: (A) Pillow. (B) Headboard. (C) Door. (D) Bed. 1. Please first describe the position of pillow, headboard, door and bed respectively in the image. 2. Please choose the option to answer the question above with the objects' positions determined in the previous step. 1. In the image, the pillow is positioned on the bed, closest to the bottom left of the frame. The headboard is directly behind the pillow, attached to the bed. The door is located across the room, near the center of the image in the background. The bed is situated in the foreground of the image, occupying most of the bottom half of the visual field. 2. To answer the question, the pillow (A) is the nearest object in relation to the current position, as it is the closest to the camera's perspective at the bottom left of the image. Figure 10: Case study of GPT-4V on our benchmark. It not only makes mistakes in object positioning, but also misjudge their spatial relationship when successfully localizing the objects involved. The text in red means the wrong answers generated by GPT-4V. 355
2024_short
33
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 356–369 August 11-16, 2024 ©2024 Association for Computational Linguistics Understanding the Effects of Noise in Text-to-SQL: An Examination of the BIRD-Bench Benchmark Niklas Wretblad1,* Fredrik Gordh Riseby1,* Rahul Biswas2 Amin Ahmadi2 Oskar Holmström1 1Linköping University, 2Silo AI [email protected] Abstract Text-to-SQL, which involves translating natural language into Structured Query Language (SQL), is crucial for enabling broad access to structured databases without expert knowledge. However, designing models for such tasks is challenging due to numerous factors, including the presence of ‘noise,’ such as ambiguous questions and syntactical errors. This study provides an in-depth analysis of the distribution and types of noise in the widely used BIRDBench benchmark and the impact of noise on models. While BIRD-Bench was created to model dirty and noisy database values, it was not created to contain noise and errors in the questions and gold SQL queries. We found that noise in questions and gold queries are prevalent in the dataset, with varying amounts across domains, and with an uneven distribution between noise types. The presence of incorrect gold SQL queries, which then generate incorrect gold answers, has a significant impact on the benchmark’s reliability. Surprisingly, when evaluating models on corrected SQL queries, zero-shot baselines surpassed the performance of state-of-the-art prompting methods. We conclude that informative noise labels and reliable benchmarks are crucial to developing new Textto-SQL methods that can handle varying types of noise. All datasets, annotations, and code are available at this URL. 1 Introduction Text-to-SQL with large language models facilitates broader access to structured databases without requiring expert knowledge. To develop such models, high-quality open datasets and benchmarks are essential resources, and over the years, several benchmarks and datasets have been created. Early benchmarks, such as WikiSQL (Zhong et al., 2017), modeled simple scenarios, often with single-table queries, and following datasets attempts to closer *Equal Contribution - What is the average loan amount by male borrowers? SELECT AVG(T3.amount) FROM client AS T1 INNER JOIN account AS T2 ON T1.district_id = T2.district_id INNER JOIN loan AS T3 ON T2.account_id = T3.account_id WHERE T1.gender = 'M' SELECT AVG(T1.amount) FROM loan AS T1 INNER JOIN account AS T2 ON T1.account_id = T2.account_id INNER JOIN disp AS T3 ON T2.account_id = T3.account_id INNER JOIN client AS T4 ON T3.client_id = T4.client_id WHERE T4.gender = 'M' Incorrect Gold Query Corrected Query Question Figure 1: Example of an incorrect SQL query that generates the wrong gold reference answer for the given question. The JOIN operation incorrectly matches clients and accounts by district_id. Due to the possibility of multiple clients and accounts in the same district, accounts are incorrectly associated with the wrong users. approximate real-world scenarios: complex queries with join-statements over several tables (Yu et al., 2018), unseen domain-specific datasets (Gan et al., 2021b; Lee et al., 2021), and noisy questions (Gan et al., 2021a). BIRD-Bench, a recent and challenging benchmark, aims to further close the gap between Text-to-SQL research and real-world applications by for example containing large and dirty database values and requiring external knowledge (Li et al., 2023). While BIRD-Bench does not explicitly introduce noise to the questions in the data, it could be that it is added inadvertently due to human error during dataset creation. For the same reason, noise is an essential aspect of real-world use cases, as human inputs often are ambiguous and contain syntactical errors. However, for the benchmark to be a helpful tool for judging model properties, such as noise handling, the data must be valid and inform us in what areas a model can be improved. This paper continues the tradition of examining the suitability and limitations of open datasets and benchmarks. We specifically focus on how noise is represented in questions and queries in BIRD356 Figure 2: Distribution of question difficulties and execution accuracy of the DIN-SQL model on the different domains of the BIRD-Bench development set. Bench. We perform a qualitative analysis of what types of noise exist in the data and the noise distribution in specific domains. We then study the effects of noise on different models and prompting techniques, using both strong baselines and stateof-the-art methods. We find that noise in questions and gold SQL queries is prevalent, that noise is unevenly distributed across domains, and that categories of noise types are represented unequally in the data. Errors in gold SQL queries are also common and decrease the reliability of BIRD-Bench. When evaluating models on a dataset with corrected gold queries, the performance gap between zero-shot baselines and state-of-the-art prompting techniques is closed, questioning how we should interpret model performance on BIRD-Bench. 2 Related Work Datasets WikiSQL is a large Text-to-SQL dataset containing only simple SELECT and WHERE operations without nested queries or JOIN operations (Zhong et al., 2017). SPIDER (Yu et al., 2018) was later developed to approximate real-life scenarios more closely, requiring models to construct complex queries and understand the database schema. While complexity is a critical aspect of real use cases, variations of SPIDER have been created to contain noisy questions (Gan et al., 2021a) and domain-specific questions (Gan et al., 2021b). BIRD-Bench was created to close the gap between academic research and real-world applications by introducing large and dirty database values, questions requiring external knowledge and optimizing SQL execution efficiency (Li et al., 2023). Text-to-SQL Methods The notable gap in accuracy between automated systems (65.45%) and human experts (92.96%)1, highlights the need for ongoing developments in Text-to-SQL models. Different approaches have been taken to create models capable of Text-to-SQL generation. A more traditional approach is to finetune LLMs on Text-to-SQL examples. While these models offer promising results, there is a performance gap to instruction-tuned LLMs, in particular GPT-4, that is adapted to the Text-to-SQL task through prompt engineering (Li et al., 2023). Prompts are often chained, where each prompt is applied to the task sub-problems, such as schema linking, decomposition of queries, and refinement of model generations (Pourreza and Rafiei, 2023a; Wang et al., 2023). Noise in Datasets The contemporaneous works of Wang et al. (2023) and Sun et al. (2024) shows that ambiguous questions and incorrect SQL queries exist in BIRD-Bench. However, unlike our work, they do not study how noise varies across domains or how the identified noise and errors affect 1BIRD-Bench benchmark as of 2024-02-16 (https://birdbench.github.io) 357 Statistic Financial California Schools Superhero Toxicology Thrombosis Prediction Question & SQL query pairs with noise 52/106 (49%) 9/20 (45%) 3/20 (15%) 7/20 (35%) 8/20 (40%) Noisy questions 44/106 (41.5%) 5/20 (25%) 2/20 (10%) 6/20 (30%) 3/20 (15%) Erroneous gold queries 22/106 (20.7%) 8/20 (40%) 1/20 (5%) 2/20 (10%) 6/20 (30%) Table 1: Statistics of the total amount of pairs of questions and SQL queries that contain errors and the amount of errors for questions and gold SQL queries separately across five domains. Noise Type Financial California Schools Superhero Toxicology Thrombosis Prediction Spelling/Syntactical Errors 23 2 1 4 2 Vague/Ambiguous Questions 17 1 1 1 1 Incorrect SQL query 22 8 1 2 6 Synonyms 2 0 0 0 0 String Capitalization 7 0 0 0 0 Question does not map to DB 1 4 1 0 0 Total number of errors 72 15 4 7 9 Table 2: Distribution of different types of noise encountered in the domains. model performance. Pourreza and Rafiei (2023b) perform a more fine-grained analysis of incorrect SQL queries but also mention categories of noise that we cover in our work (e.g., natural language question does not match database schema). In contrast to their work, we perform a more fine-grained analysis of noise in the natural language questions, for example the effects of syntactical errors, synonyms, and ambiguous questions. Katsogiannis-Meimarakis and Koutrika (2023) points out that database schemas often misalign with data entities, which may cause lexical or syntactic ambiguities affecting Text-to-SQL models. 3 Method 3.1 Data The BIRD-Bench dataset (Li et al., 2023) is studied in this paper as it is a recent and widely used dataset that is the most similar to real world scenarios among current benchmarks. BIRD contains 12,751 samples across many domains. Because of the time-consuming human annotation performed in this work, the main focus of the analysis is on the financial domain2, which includes queries related to banking operations. The development set of the financial domain contains 106 question and SQL query pairs, which represent approximately 7.5% of the data points in the development set, and are structured around eight distinct tables presented in full in Appendix 2This was also motivated by the fact this paper was a collaborative endeavor with the Swedish bank SEB. A.1. Each question is annotated with a difficulty level (simple, moderate, and challenging). The specific distribution is found in Figure 2. We selected four additional domains to validate our noise analysis of the financial domain and performed the same analysis on 20 randomly sampled questions from each domain. The domain selection was based on question difficulties and model accuracy of DIN-SQL3, as presented in Figure 2. We selected California Schools with low accuracy and simple questions, Superhero with high accuracy and simple questions, Toxicology with similar accuracy to the financial domain but more complex questions, and Thrombosis Prediction with low accuracy and moderately difficult questions. 3.2 Annotation of Noise All questions and SQL queries in the selected domains were annotated to determine whether they contained errors. The annotations were performed independently by two authors of this paper, fluent in English and experts in SQL. In the first phase, annotators independently identified questions and SQL queries with errors. The Cohen’s Kappa coefficient was 0.73, demonstrating a substantial level of agreement between annotators. The annotators then independently named the types of errors. In the second phase, the annotators resolved disagreements by observing the other annotator’s reasoning and the remaining disagreements were 3Results of DIN-SQL across domains were provided by the creators of DIN-SQL. 358 Figure 3: Accuracy of various models on Bird-Bench’s financial domain. Models are evaluated on the original data (left), corrected SQL queries (middle), and corrected SQL queries and corrected noisy questions. resolved through discussion. The identified errors were grouped based on similarity and named after the errors’ common properties, as shown in Table 2. The annotations were then used to generate two distinct datasets: one where SQL was corrected, and one where both SQL queries and noisy questions were corrected. 3.3 Models and Prompt Techniques Two models, GPT-3.5 and GPT-4, were used with three different prompting methods: zero-shot prompting as a baseline and the more advanced DIN-SQL (Pourreza and Rafiei, 2023a) and MACSQL (Wang et al., 2023). We used GPT-3.5 and GPT-4 for zero-shot prompting, but for the advanced prompting techniques, we only used GPT3.5 since chaining prompts with GPT-4 was beyond the resources for this project. We chose the models and prompting methods because they were the highest-performing publicly available models on BIRD-Bench at the time of writing. Information about the database schema is crucial to generating correct queries for BIRD-Bench questions. DIN-SQL and MAC-SQL has a predefined format for adding the database schema. For the zero-shot model, we provide the database schema in-context in the form of SQL table creation statements, as this has been shown to improve accuracy compared to other formats (Nan et al., 2023). The prompt template for the zero-shot model is found in Appendix A.2. The code base is published after the anonymity period. 4 Qualitative Analysis of Noise Even though BIRD-Bench was not intentionally created to contain noise in questions and SQL queries, our analysis reveals that noise exists in all studied domains to different extents. The financial domain exhibits the highest levels of noise at 49% closely followed by the California Schools domain at 45%, as shown in Table 1. In contrast, the Superhero domain demonstrated the lowest noise levels, with only 15% of data points containing errors. As presented in Section 3.1 and Figure 2, the Superhero domain had the highest accuracy while having a similar distribution of question difficulties. This could indicate that model accuracy across tasks correlates with noise, which implies that noise in questions and SQL queries need to be carefully considered during dataset design. The categories and absolute frequency of noise per dataset are presented in Table 2, and both examples and descriptions of the noise types are presented in Appendix A.3. Our analysis shows that spelling/syntactic errors and incorrect SQL queries were most prevalent in the financial domain. The presence of noise in questions is not necessarily undesirable, as it more closely mimics real-life scenarios. However, noise distribution across the categories is unequal. While this could approximate a real-world distribution, it might unfairly bias the benchmark towards models better at handling syntactical errors. Given the uneven distribution of errors and the lack of noise labels, the benchmark does not inform which noise types are challenging for current models and in which areas they should improve. A more severe issue is that all domains contained incorrect SQL queries, which are used for generating gold reference answers. An example of an erroneous SQL query is shown in Figure 1. These types of errors question the reliability of the benchmark to accurately determine model performance, which is explored in the next section. 359 Error Category Total DIN-SQL (3.5) Zero-shot (3.5) Zero-shot (4) MAC-SQL (3.5) Spelling/Syntactical Errors 23 2 6 4 6 Vague/Ambiguous Questions 17 1 2 3 4 Incorrect SQL 22 0 2 2 4 Synonyms 2 0 0 0 0 String Capitalization 7 2 1 1 0 Question does not map to DB 1 0 0 0 0 Table 3: Model performance on the financial domain for various error categories and overall correct predictions on non-erroneous questions. 5 Impact of Noise on Model Performance We apply models to the original dataset, a dataset where SQL has been corrected, and a dataset where both SQL queries and noisy questions have been corrected. Figure 3 presents the results of a single evaluation for all models on all datasets. MAC-SQL slightly outperforms DIN-SQL and the zero-shot baselines on the original dataset, where noise exists in both questions and queries. However, correcting SQL queries decreases MACSQL’s performance, tying it with DIN-SQL as the poorest performers. Surprisingly, even the zeroshot GPT-3.5 baseline outperforms the more advanced DIN-SQL and MAC-SQL. The dataset with corrected SQL queries could also be considered optimal since gold labels are correct and noise in questions is represented. Given the drastic re-ranking of models, it is relevant to question if BIRD-Bench is a reliable assessor of models and a useful tool to assist researchers in developing new methods for Text-to-SQL. When evaluating models on the dataset with both questions and SQL queries corrected, the accuracy of all models increases significantly. While zeroshot GPT-4 performs the best, the remaining models perform similarly with DIN-SQL slightly ahead. Compared to the ideal scenario where only SQL queries are corrected, the presence of noise noticeably impacts all models’ accuracy. However, models are not equally affected by noise as some models have a more pronounced increase in accuracy. Table 3 presents each model’s performance for the error categories. MAC-SQL outperforms the other models slightly on errors related to Spelling and Syntactical Errors, Ambiguous Questions, and Incorrect SQL. The main difference between MACSQL and the other methods is an extensive filtering process of tables and columns and the increase of relevant information in the context could make the model more robust to noise. However, such a hypothesis must be confirmed or rejected by studying what the model has seen during the generation phase, which we leave to future studies. 6 Conclusions and Future Work This paper analyzed the quality and distribution of noise in the BIRD-Bench benchmark for Text-toSQL. We show that noise in both questions and SQL queries are prevalent, and noise is unevenly distributed across noise types and domains. Errors in gold SQL queries were common, decreasing the reliability of BIRD-Bench. Surprisingly, when evaluating models on corrected gold queries, zero-shot baselines surpassed more advanced prompting techniques. These findings highlight the necessity for developing benchmarks that can guide researchers in designing models that are more resistant to noise. Therefore, a significant improvement would be to label noise types across the dataset. In future work, we plan to study how large language models can be applied to noise classification, a new task that could also be critical in systems where Text-to-SQL is employed. Overall, this study provides a deeper understanding of how noise is expressed in Text-to-SQL tasks and how noise and models interact, pinpointing areas for improvement in the BIRD-Bench dataset. Limitations While our study provides valuable insights regarding the influence of dataset noise in Text-to-SQL translation tasks, it has several limitations. As the analysis was performed mainly on the BIRD-Bench dataset’s financial domain, our findings’ generalizability may be limited. We only examined a small subset of other domains to validate our findings, which may represent only some of the noise distribution across domains. Additionally, annotators may have introduced subjective bias during noise annotation, even 360 though we attempt to minimize this by having two independent annotators. Further, our decision to categorize noise into six specific classes might have oversimplified the complexity and diversity of noise types in these benchmarks. Our choice of models and prompting techniques could also be a potential limitation. We only employed two models, GPT-3.5 and GPT-4, and three different prompting methods. Evaluating a more comprehensive array of models and prompting techniques might have given a more comprehensive understanding of their performance under the influence of noise. Lastly, the substantial effort required to correct SQL queries and noisy questions in the dataset may have introduced errors despite the review process. This might influence the model performances we report when evaluating models on the corrected datasets. Acknowledgments We extend our gratitude to Mohammadreza Pourreza for the results from the DIN-SQL model. We are also grateful to SEBx for their generous support and the provision of resources. Additionally, this research was partial funded by the National Graduate School of Computer Science in Sweden (CUGS). References Yujian Gan, Xinyun Chen, Qiuping Huang, Matthew Purver, John R. Woodward, Jinxia Xie, and Pengsheng Huang. 2021a. Towards robustness of textto-SQL models against synonym substitution. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2505– 2515, Online. Association for Computational Linguistics. Yujian Gan, Xinyun Chen, and Matthew Purver. 2021b. Exploring underexplored limitations of cross-domain text-to-SQL generalization. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 8926–8931, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. George Katsogiannis-Meimarakis and Georgia Koutrika. 2023. A survey on deep learning approaches for textto-sql. The VLDB Journal, 32(4):905–936. Chia-Hsuan Lee, Oleksandr Polozov, and Matthew Richardson. 2021. KaggleDBQA: Realistic evaluation of text-to-SQL parsers. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2261–2273, Online. Association for Computational Linguistics. Jinyang Li, Binyuan Hui, Ge Qu, Binhua Li, Jiaxi Yang, Bowen Li, Bailin Wang, Bowen Qin, Rongyu Cao, Ruiying Geng, Nan Huo, Xuanhe Zhou, Chenhao Ma, Guoliang Li, Kevin C. C. Chang, Fei Huang, Reynold Cheng, and Yongbin Li. 2023. Can llm already serve as a database interface? a big bench for large-scale database grounded text-to-sqls. In Advances in Neural Information Processing Systems. Spotlight Poster. Linyong Nan, Yilun Zhao, Weijin Zou, Narutatsu Ri, Jaesung Tae, Ellen Zhang, Arman Cohan, and Dragomir Radev. 2023. Enhancing text-to-SQL capabilities of large language models: A study on prompt design strategies. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 14935–14956, Singapore. Association for Computational Linguistics. Mohammadreza Pourreza and Davood Rafiei. 2023a. Din-sql: Decomposed in-context learning of textto-sql with self-correction. In Advances in Neural Information Processing Systems 36. Accepted for poster presentation, full citation details to be updated. Mohammadreza Pourreza and Davood Rafiei. 2023b. Evaluating cross-domain text-to-SQL models and benchmarks. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 1601–1611, Singapore. Association for Computational Linguistics. 361 Ruoxi Sun, Sercan Ö. Arik, Alex Muzio, Lesly Miculicich, Satya Gundabathula, Pengcheng Yin, Hanjun Dai, Hootan Nakhost, Rajarishi Sinha, Zifeng Wang, and Tomas Pfister. 2024. Sql-palm: Improved large language model adaptation for text-to-sql (extended). Bing Wang, Changyu Ren, Jian Yang, Xinnian Liang, Jiaqi Bai, Qian-Wen Zhang, Zhao Yan, and Zhoujun Li. 2023. Mac-sql: A multi-agent collaborative framework for text-to-sql. Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Li, Qingning Yao, Shanelle Roman, Zilin Zhang, and Dragomir Radev. 2018. Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-SQL task. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3911–3921, Brussels, Belgium. Association for Computational Linguistics. Victor Zhong, Caiming Xiong, and Richard Socher. 2017. Seq2SQL: Generating Structured Queries from Natural Language using Reinforcement Learning. arXiv e-prints, page arXiv:1709.00103. 362 A Appendix A.1 Database Schema of the Financial Domain Figure 4: Database schema of the database in the financial domain of BIRD-Bench. Figure 4 displays the database schema for the financial domain. This schema contains various tables, such as those for loans, transactions, accounts, cards and clients, all reflecting the financial orientation of the database. Descriptions of what information these tables contain are presented in Table 4. The database consists of 55 columns distributed across eight distinct tables. While the majority of the column names are intuitively understandable, some present interpretative challenges, as evident in the schema. An illustrative example is the district table, which incorporates 16 unique columns. This includes a column titled district_ID along with 15 other columns, ranging from A2 to A16. The latter columns’ names do not readily convey the nature of the data they hold, making them less intuitive to understand. In practice, a database schema will often be accompanied by a data dictionary or documentation that explains each table and column in detail. Such documentation would typically provide the context needed to fully understand the meaning of each element in the schema, the range of possible values for fields with unspecified types, and the business logic underlying the relationships. Without this additional documentation, fully 363 interpreting and effectively using the database can be challenging as illustrated by the column names in the districts table. The BIRD-Bench dataset includes a unique feature for each question termed hint. This feature is designed to offer insights or supplementary information corresponding to the specifics detailed in such database documentation. This feature is provided to all models described in 3.3 for each question during the experiments. Table 4: Table descriptions of the tables in the database of the financial domain of BIRD-Bench. Table Name Description loan Contains details of loans. order Holds information about monetary orders. trans Represents financial transactions. account Contains account information. disp Links clients to accounts (dispositions). card Contains details about cards issued. client Holds client information. district Contains details about districts or regions. Further, the lines in Figure 4 between the tables represent relationships, where the nature of the relationship is indicated by the shape of the tail end of the lines where they connect to each table. A one-to-many relationship is indicated by the line beginning with a single line and the one digit above it, and then ending in a crow’s foot (three lines) at the opposite end. For example, an account can have multiple orders, transactions, dispositions, and loans associated with it, but each of those entities is only linked to one account. An account can have many loans, but one loan is exclusively only linked to one account, which makes sense. Further, clients and accounts are related through the disposition table in a many-to-many relationship. An account can have many different clients associated with it, for example, one client listed as the owner of the account and multiple other clients listed as users for the account. This could for example be practical for sharing an account in a family, where one parent could be the owner of the account and then multiple other family members listed as users. A single client can also be related to many different accounts in the other way around. A.2 Prompt Templates 1 """Database schema in the form of CREATE_TABLE statements: 2 3 {database_schema} 4 5 Using valid SQL, answer the following question based on the 6 tables provided above. 7 8 Hint helps you to write the correct sqlite SQL query. 9 Question: {question} 10 Hint: {evidence} 11 DO NOT return anything else except the SQL query.""" Listing 1: Zero-Shot Prompting Template. The prompt template underlying the zero-shot models described in Section 3.3 can be found in Listing 1. The prompt integrates a given question, the associated database schema, an instruction directing the LLM to generate valid SQL, and a hint provided by the BIRD-Bench dataset. The hint is designed to offer 364 insights or supplementary information needed in order to accurately interpret the database schema and to correctly convert the question into a SQL query. Note that the other models implemented in this research is also provided with this feature. 365 A.3 Examples of Errors and Corrections This section provides examples of erroneous data points and their corrections from the different error categories found in Table 1. Example 1: Spelling/Syntactical Error In Figure 5, an example question with a syntactical error is provided, representing the question with ID 125 from the financial domain in the BIRD-Bench development set. The grammatical structure of the question complicates the interpretation of its meaning for a human reader and makes it difficult to understand which information it is asking for. Therefore, there is a chance that an LLM might also misinterpret the question. A corrected version of the question can be seen in the figure. For loans contracts which are still running where client are in debt, list the district of the and the state the percentage unemployment rate increment from year 1995 to 1996. For loan contracts that are still active and where clients are in debt, state the percentage increase in unemployment rate from 1995 to 1996. Corrected Question Original Question With Noise Figure 5: Question with ID 125 from the development set of BIRD-Bench which contains syntactical errors and a corrected version of the question. Example 2: Ambiguous/Vague Question Figure 6 displays the data point with ID 159 from the financial domain of the development set of BIRD-Bench. It contains an error which were grouped into the ambiguous/vague question category. The challenge lies in the natural language question’s ambiguity, specifically in the phrase “List all the withdrawals...” This ambiguity revolves around determining which columns to return when executing the SQL query. List all the withdrawals in cash transactions that the client with the id 3356 makes. SELECT T4.trans_id FROM client AS T1 INNER JOIN disp AS T2 ON T1.client_id = T2.client_id INNER JOIN account AS T3 ON T2.account_id = T3.account_id INNER JOIN trans AS T4 ON T3.account_id = T4.account_id WHERE T1.client_id = 3356 AND T4.operation = 'VYBER' Gold Query Question With Ambiguity List the transaction ID of all withdrawals in cash transactions that the client with the id 3356 makes. Corrected Question Figure 6: Question, gold SQL query and a corrected version of the question corresponding to the data point with ID 159 from the development set of BIRD-Bench, showcasing an error in the ambiguous/vague category. 366 Example 3: Incorrect Gold SQL Figure 7 showcases an incorrect golden SQL query found in the data point with ID 132 of the financial domain of the development set of BIRD-Bench. The JOIN operation incorrectly matches clients and accounts by district_id. Due to the possibility of multiple clients and accounts in the same district, accounts are incorrectly associated with the wrong users. - What is the average loan amount by male borrowers? SELECT AVG(T3.amount) FROM client AS T1 INNER JOIN account AS T2 ON T1.district_id = T2.district_id INNER JOIN loan AS T3 ON T2.account_id = T3.account_id WHERE T1.gender = 'M' SELECT AVG(T1.amount) FROM loan AS T1 INNER JOIN account AS T2 ON T1.account_id = T2.account_id INNER JOIN disp AS T3 ON T2.account_id = T3.account_id INNER JOIN client AS T4 ON T3.client_id = T4.client_id WHERE T4.gender = 'M' Incorrect Gold Query Corrected Query Question Figure 7: Example of an incorrect SQL query that generates the wrong gold reference answer for the given question. The JOIN operation incorrectly matches clients and accounts by district_id. Due to the possibility of multiple clients and accounts in the same district, accounts are incorrectly associated with the wrong users. 367 Example 4: Synonyms Figure 8 demonstrates how specific wordings can complicate interpretation for an LLM. The term sum, being both a SQL keyword and a descriptor, led to the LLM’s literal interpretation and the incorrect summation of a transaction. The actual intent was to inquire about the transaction’s balance or amount. A rephrased question resulted in the LLM generating the correct SQL query, fetching the intended information, as seen in the figure. What is the sum that client number 4's account has following transaction  851? Who owns this account, a man or a woman? SELECT   SUM(trans.amount)   client.gender FROM trans  JOIN account ON trans.account_id = account.account_id  JOIN disp ON account.account_id = disp.account_id JOIN client ON disp.client_id = client.client_id WHERE trans.trans_id = 851 AND client.client_id = 4; Predicted Query (Incorrect) Original Question With SQL Keyword/Synonym What is the balance of client number 4's account following transaction 851? Who owns this account, a man or a woman? Corrected Question SELECT   trans.balance   client.gender FROM   trans   JOIN disp ON trans.account_id = disp.account_id   JOIN client ON disp.client_id = client.client_id WHERE trans.trans_id = 851 AND client.client_id = 4;  Predicted Query (Correct) Figure 8: Question from data point with ID 177 from the development set of BIRD-Bench containing a difficult synonym, a corrected version of the question with the synonym replaced and corresponding predicted SQL queries by the DIN-SQL (GPT-3.5) model described in Section 3.3. Showcases the difficulty of synonyms on model predictions. 368 Example 5: String Capitalization As a consequence of SQL being a case-sensitive language when comparing string values in a query, the way a question is formulated regarding the use of uppercase or lowercase letters when asking for a specific value affects the result. This is because the LLM will most likely use the specific entry as given when generating the query, unless it has knowledge of the case used for different entries in the database. Therefore, in Figure 9, an example is provided where the terms "East" and "North" are mentioned with initial capital letters, as is commonly the case. However, the entries for these column values are in lowercase in the database, which means the question needs to account for this for the LLM to be able to generate a correct query. The corrected question and the SQL query generated from it can also be seen in Figure 9. What was the difference in the number of crimes committed in East and North Bohemia in 1996? SELECT  SUM(IIF(A3 = 'East Bohemia', A16, 0)) - SUM(IIF(A3 = 'North Bohemia', A16, 0))  FROM district Gold Query Original Question With Dirty Values What was the difference in the number of crimes committed in east and north Bohemia in 1996? Corrected Question SELECT      SUM(IIF(A3 = 'east Bohemia', A16, 0)) - SUM(IIF(A3 = 'north Bohemia', A16, 0))  FROM district Corrected Query Figure 9: Example Ambiguous. Example 6: Database Schema Non-Alignment Incorrect Question Description What is the disposition ID of the client who made $5100 USD transaction on 1998/9/2? The question asks for a single disposition ID, which does not reflect that there is a one-to-many relation between client and disposition, and most likely it won’t be possible to return a single ID. List out the account numbers of clients who are youngest and have highest average salary? There is no information about salaries of specific clients in the database. Table 5: Examples of questions that does not map to the database schema and accompanying descriptions of why they do not. 369
2024_short
34
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 370–377 August 11-16, 2024 ©2024 Association for Computational Linguistics Dwell in the Beginning: How Language Models Embed Long Documents for Dense Retrieval João Coelhoa,b, Bruno Martinsb, João Magalhãesc, Jamie Callana, Chenyan Xionga a Language Technologies Institute, Carnegie Mellon University, United States b Instituto Superior Técnico and INESC-ID, University of Lisbon, Portugal c NOVA LINCS, NOVA School of Science and Technology, Portugal [email protected] Abstract This study investigates the existence of positional biases in Transformer-based language models for text representation learning, particularly in the context of web document retrieval. We build on previous research that demonstrated loss of information in the middle of input sequences for causal language models, extending it to the domain of embedding learning. We examine positional biases at multiple stages of the training pipeline for an encoder-decoder neural retrieval model, namely language model pre-training, contrastive pre-training, and contrastive fine-tuning. Experiments with the MSMARCO document collection reveal that after contrastive pre-training the model already generates embeddings that better capture the beginning of the input content, with fine-tuning further aggravating this effect. 1 Introduction Recent advancements have allowed Transformerbased models to handle increasingly larger context lengths, resulting in the availability of Language Models (LMs) that can accommodate input lengths reaching tens of thousands of tokens (Xiong et al., 2023). However, studies assessing how well this context is captured by causal LMs (Liu et al., 2023) have shown that models are biased to information contained at the beginning or end of the input, losing information in the middle. Instead of further analysing text generation, we extend this type of study to text representation learning, which has been a fundamental task for dense retrieval (Xiong et al., 2021; Karpukhin et al., 2020), and is also gaining attention in the context of retrieval-augmented generation (Chevalier et al., 2023; Mu et al., 2023) and recommendation systems (Doddapaneni et al., 2024). Specifically, we focus on web document retrieval, examining how well a single embedding represents a complete web document, while assessing the emergence of eventual position biases. We start by continuously pre-training and finetuning an encoder-decoder model similar to T5base (Raffel et al., 2020) but with a context length of 2048 tokens, following standard techniques to achieve a model that is representative of the state-of-the-art among the low-parameter scale. We leverage the MS-MARCO (v1) document collection (Nguyen et al., 2016), as this dataset is commonly used in retrieval evaluation benchmarks (Thakur et al., 2021; Muennighoff et al., 2023), and it is one of the major sources of training data for the fine-tuning of neural retrieval models (Zhang et al., 2023; Wang et al., 2022). We found the existence of a dwell in the beginning effect, i.e. a positional bias displayed by the model where earlier parts of the input are dominant in the embedding. We track this behavior by evaluating the model on position-aware tasks during multiple stages of its training. From our experiments, we conclude that these positional biases start emerging during unsupervised contrastive pre-training, and that the heavy reliance on MSMARCO data for fine-tuning will exacerbate this behavior. Our models and code are available in a public GitHub repository1. 2 Related Work Bi-encoders are now the state of the art approach to dense retrieval (Xiong et al., 2021; Karpukhin et al., 2020). Current standard training setups leverage the usage of contrastive loss functions and methods such as ANCE (Xiong et al., 2021) to sample hard negative examples. Other techniques that are often employed include in-domain pre-training (Gao and Callan, 2022) and retrieval-aligned pre-training (Lu et al., 2021; Xiao et al., 2022; Lee et al., 2019; Ma et al., 2022, 2024), which allow for a better fine-tuning starting point, consequently achieving stronger retrieval results. 1https://github.com/cxcscmu/ LongEmbeddingAnalsys 370 For long document retrieval, early methods dealt with the increased input length through heuristic aggregation strategies, which rely on segmenting the document into passages that are scored independently, with max-pooling being particularly effective (Dai and Callan, 2019). Instead of aggregating scores, studies like PARADE (Li et al., 2020) considered the aggregation of passage-level representations. Other authors (Boytsov et al., 2022) used Transformer architectures with sparse attention patterns (Beltagy et al., 2020; Zaheer et al., 2020) to model the long inputs more efficiently, showing that, on MS-MARCO, the gains that arise from using such models are limited when compared to simple aggregation strategies. Currently, LLaRA (Li et al., 2023) achieves stateof-the-art performance in the MS-MARCO document retrieval task, by continually pre-training LLaMA-7B (Touvron et al., 2023) with a retrievalaligned task. Models like LLaRA leverage context windows of up to 4096 tokens, relying on FlashAttention (Dao et al., 2022; Dao, 2023) for fast and exact full attention computation, together with some variation of Rotary Position Embeddings (RoPE) (Su et al., 2024) or Attention with Linear Biases (ALiBi) (Press et al., 2022). This enables stronger modeling of longer sequences, without the need of additional training, while resorting to full-attention computations. 3 Methodology This section details the training of a T5-base retriever with 2048 input length (T5-2K), adapting the T5 architecture to follow recent advancements in long-context language modeling, and following a state-of-the-art dense retrieval training pipeline. 3.1 Model Architecture We use the T5-base architecture as a backbone, replacing the positional embeddings by RoPE (Su et al., 2024). This change was motivated by RoPE’s ability to extrapolate to larger contexts, and its compatibility with FlashAttention. Specifically, we use Dynamic NTK-RoPE (Peng et al., 2024), which in theory allows for extrapolation to longer input sequences without further training. The retriever follows a tied bi-encoder architecture, i.e., the same model encodes both queries and documents. The T5 decoder is used as a pooler (Ni et al., 2022), generating a single token and considering its representation as the document embedding. 3.2 Dense Retriever Training Pipeline Language Modelling Pre-training: Starting from T5-base available at HuggingFace2, we continuously pre-train the model on 8 billion tokens from the MS-MARCO document collection, for the model to adapt to the new maximum sequence length, new positional embeddings, and MS-MARCO’s document distribution. We follow the original T5 span-corruption task, masking 15% of the input sequence, with an average corrupted span length of 3 tokens. Unsupervised Contrastive Pre-training: In order to align the model with the fine-tuning task, we perform further pre-training following the cropping technique (Izacard et al., 2022). In this task, given a document, a positive pair (s, s+) is sampled by independently cropping two random spans comprising 10 to 50% of the input. The model is trained to minimize the following contrastive loss: L = −1 n X i log ecos(f(si),f(s+ i )) ecos(f(si),f(s+ i )) + P j e cos(f(si),f(s− ij )) , (1) where each si is associated with one positive example s+ i as per the sampling technique, and negatives {s− ij} are sampled in-batch. We use a batch size of 128 leveraging GradCache (Gao et al., 2021), and cross-device negatives across 4 GPUs. The representations f(.) generated by the model are compared using the cosine similarity function. Supervised Contrastive Fine-tuning: We finally fine-tune the model for retrieval in the MSMARCO dataset for eight epochs. Both the title and body of the documents are used, as this is the default setting for the document retrieval task. We start with ANCE-MaxP negatives (Xiong et al., 2021), refreshing them every two epochs with the model under training. We follow the loss introduced in Equation 1, leveraging labeled querydocument pairs. We sample 9 negatives per query, using a batch size of 128 and in-batch negatives. Moreover, cross-device negatives are considered across 4 GPUs, which totals 5120 documents for each query in the batch. 4 Experiments This section starts by addressing the overall retrieval performance of the T5-2K model. Then, we show the dwell in the beginning behavior that is present in the model, investigating each of the training steps to identify its emergence. 2https://huggingface.co/t5-base 371 Size MRR@100 R@100 ANCE-MaxP (Xiong et al., 2021) 125M 0.384 0.906 ADORE (Zhan et al., 2021) 110M 0.405 0.919 ICT (Lee et al., 2019) 110M 0.396 0.882 SEED (Lu et al., 2021) 110M 0.396 0.902 RepLLaMA (Ma et al., 2023) 7B 0.456 T5-2K (ours) 220M 0.414 0.915 Table 1: Retrieval results on MS-MARCO documents. 4.1 Retrieval Performance Before moving to the study of the positional biases, we look into the overall performance of our model to assess its soundness, considering the official MSMARCO evaluation metrics (mean reciprocal rank and recall). For reference, Table 1 contains retrieval results on the MS-MARCO document dataset (development splits), where our model achieves comparable performance to models trained following similar pipelines. The first group references models that do not leverage pre-training tasks, while the ones in the second group incorporate them. Finally, the third group contains a model that also underwent simple fine-tuning, but has 30 times more parameters. Note that other authors have proposed heavily engineered pre-training tasks that do improve results (e.g., COSTA (Ma et al., 2022), Longtriever (Yang et al., 2023), or LLaRA (Li et al., 2023)), but that is out of scope for this work. Appendix A provides additional training details. 4.2 Impact of Relevant Passage Position For a subset of the queries in the MS-MARCO dataset (i.e., 1130 queries), we can cross-reference their relevant documents with the MS-MARCO passage collection to identify the relevant information within the document through exact matching. In a first experiment assessing the impact of the position of the relevant passage, we retrieve from the collection 11 times: First, a default run with the documents unchanged, followed by 10 runs where the documents associated with the queries have the relevant passage moved to different positions. For each document, given its length ld and the length of the relevant passage lp (both in tokens), we compute 10 sequential and uniform insertion points (Ii) for the passage, according to Ii = (i −1) ld−lp 9 , i ∈{1, ..., 10}, moving the passage from its original position to each Ii. The performance of our model after one training episode (i.e., before the first ANCE negative 1 2 3 4 5 6 7 8 9 10 Relevant Passage Position 0.325 0.350 0.375 0.400 0.425 0.450 0.475 0.500 0.525 MRR T5-2K (1 Episode) T5-2K (4 Episodes) RepLLaMA Default T5-2K (1 Episode) Default T5-2K (4 Episodes) Default RepLLaMA Figure 1: Performance of T5-2K and RepLLaMA. Full lines represent the unchanged version of the documents. Dashed lines represent the variations obtained when the relevant passages are moved to a different position. 0 1000 2000 3000 4000 5000 Relevant passage starting position (characters) 0 2500 5000 7500 10000 12500 15000 Frequency Median Figure 2: Distribution for the starting position (characters) of relevant passages within 75,000 documents from the MS-MARCO training split. refreshing) is depicted in the blue lines of Figure 1. We see that when the relevant passage is moved to the beginning of the document, the performance increases when compared to the default setting (i.e., unchanged documents). Conversely, if the passage is moved anywhere else, the performance drops. The green lines show that the same pattern also holds for RepLLaMA-7B3 (Ma et al., 2023), i.e. a version of LLaMA-2 fine-tuned for dense retrieval on MS-MARCO for one epoch. In other words, a dwell in the beginning effect is observed, where the initial positions are heavily preferred to later ones. This differs from the lost in the middle (Liu et al., 2023) phenomena, where performance would drop significantly only in middle sections, rising in the end. We also note that further fine-tuning on MS-MARCO data will aggravate the behavior, as shown by the orange lines in Figure 1, given the larger performance mismatch between the default setting and insertion positions other than the first. 3https://huggingface.co/castorini/ repllama-v1-7b-lora-doc 372 1 2 3 4 5 6 7 8 9 10 Sub-string 0.0 0.2 0.4 0.6 0.8 1.0 Cosine Similarity Decoder Pooling Average Pooling Figure 3: Cosine similarity distribution for exact matching of sub-strings in different locations, using a sample of 24,000 MS-MARCO documents, for the T5-2K model after contrastive pre-training using both decoderpooling and average-pooling. To better understand this behavior, we can look at the distribution in Figure 2, which shows that MS-MARCO documents tend to contain the relevant passage earlier in the document, with the median starting position at 746 characters. This can be impactful for the biases in Figure 1, given the lack of examples with relevant information later in the document. To further investigate this phenomenon, the next sub-sections explore the locality of the pre-training tasks to address potential impacts on long-context modeling. 4.3 Contrastive Pre-training Location Bias To better estimate positional biases after the contrastive pre-training step, we evaluate the performance of the model on exactly matching substrings from different locations. For instance, given a document d, 10 sub-strings are sampled by segmenting d in 10 sequential groups with uniform token length. In other words, the first sub-string contains the first 10% tokens of d, while the last sub-string contains the last 10% tokens. Then, the embedding generated for d is compared with the embedding of each sub-string using the cosine similarity. Figure 3 shows that the similarity values tend to decrease when the position of the sub-string moves from the beginning, and that this behavior holds for strategies that either use decoder pooling or average pooling of token representations. This indicates that the representation generated for a document is better at capturing its earlier contents. While in the previous sub-section similar behavior could be justified by the data’s underlying distribution, the pseudo-queries and documents for this task were independently sampled 1 2 3 4 5 6 7 8 9 10 0.2 0.4 0.6 0.8 1 Token window Accuracy T5-rope T5-base Figure 4: Span prediction accuracy on different zones of the input, using 7000 random 3-token spans per window. 1 2 3 4 5 6 7 8 9 10 Token window 0.0 0.2 0.4 0.6 0.8 1.0 Cosine Similarity Figure 5: Cosine similarity distribution for exact matching of sub-strings in different locations, using a sample of 24,000 MS-MARCO documents, for the T5-2K model after language model pre-training. from the same uniform distribution over the input. This suggests that the bias is intrinsic to models trained on web documents, e.g. by fitting to information distributions commonly found in real web documents that follow the inverted pyramid writing style (Koupaee and Wang, 2018), where earlier paragraphs are often more representative. Since web documents are the most common source of contrastive pre-training data (Wang et al., 2022; Izacard et al., 2022), this is problematic for tasks where the whole input must be accurately captured, as is for instance the case of retrieval augmented generation (Chevalier et al., 2023; Mu et al., 2023). 4.4 Span Corruption Location Bias Finally, we look into the language model pretraining task. We evaluate on the original task, by independently corrupting spans of 3 tokens across multiple parts of the input, divided in ten windows as per the previous experiments. Through this, we can see if the accuracy of the model varies when 373 predicting the correct spans across the different parts of the input document. Figure 4 shows uniform performance, suggesting no inherent bias in this task using RoPE. We also evaluate the original T5-base, and see that although it shows a slightly higher performance on predicting later positions, it is still rather uniform. As none of the models display the dwell in the beginning effect, we conclude that the language modeling pretraining task did not induce any biases, and that this behavior emerged as soon as the embedding task was added to the training pipeline. To further solidify this result, Figure 5 shows the evaluation of the T5-2K model after language model pre-training (but before embedding-based learning) on the embedding task from Section 4.3, showing a similar pattern to Figure 4, without a noticeable dwell in the beginning effect. 5 Conclusions and Future Work This study investigated a dwell in the beginning effect on Transformer-based models for document retrieval. Through experiments with a T5 model and RepLLaMA, we observed that the embeddings tend to favor information located at the beginning of the input, leading to decreased performance when relevant information is elsewhere in the document. We investigate each step in the training pipeline, namely language model pre-training, contrastive pre-training, and contrastive fine-tuning, showing that biases emerge in the contrastive pretraining step, and that they persist throughout the fine-tuning process. Our findings emphasize the importance of considering the quality of embeddings for long inputs, particularly in contexts where effectively capturing the entire sequence is essential for the downstream task. Moreover, our results can further justify previous research which showed limited gains on long-sequence modeling for MS-MARCO, when compared to aggregation approaches (Boytsov et al., 2022). As for future work, we note that while our experiments focused on tied encoders, a similar study can be conducted using untied weights, given the size mismatch between queries and documents. Furthermore, addressing the identified biases may involve devising more robust pre-training tasks, or curating better-distributed datasets, all while considering evaluation on appropriate retrieval benchmarks that require long-context modeling (Wang et al., 2023; Saad-Falcon et al., 2024). Limitations and Ethical Considerations All the datasets and models used in our experiments are publicly available, and we provide the source code that allows for reproduction of the results, as well as model checkpoints. By using large pre-trained language models, we acknowledge the risks associated with the presence of inherent biases embedded within the models, which may inadvertently perpetuate or amplify societal biases present in the training data. One limitation in the work reported on this paper relates to the fact that our tests have only used English data. Other languages can expose different phenomena in terms of how document-context is handled, and future work can perhaps consider other datasets such as the one from the NeuCLIR competition (Lawrie et al., 2024). Doing a similar analysis on other domains besides web documents would also be interesting, and we encourage the research community to further study documentcontext modeling in connection to different types of information retrieval tasks. Acknowledgements We thank the anonymous reviewers for their valuable comments and suggestions. This research was supported by the Portuguese Recovery and Resilience Plan through project C64500888200000055 (i.e., the Center For Responsible AI), and also by the Fundação para a Ciência e Tecnologia (FCT), specifically through the project with reference UIDB/50021/2020 (DOI: 10.54499/UIDB/50021/2020), the project with reference UIDP/04516/2020 (DOI: 10.54499/UIDB/04516/2020), and also through the Ph.D. scholarship with reference PRT/BD/153683/2021 under the Carnegie Mellon Portugal Program. References Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2020. Longformer: The Long-Document Transformer. ArXiv, abs/2004.05150. Leonid Boytsov, Tianyi Lin, Fangwei Gao, Yutian Zhao, Jeffrey Huang, and Eric Nyberg. 2022. Understanding Performance of Long-Document Ranking Models through Comprehensive Evaluation and Leaderboarding. ArXiv, abs/2207.01262. Alexis Chevalier, Alexander Wettig, Anirudh Ajith, and Danqi Chen. 2023. Adapting Language Models to 374 Compress Contexts. In Conference on Empirical Methods in Natural Language Processing (EMNLP 2023). Zhuyun Dai and Jamie Callan. 2019. Deeper text understanding for IR with contextual neural language modeling. In International Conference on Research and Development in Information Retrieval (SIGIR 2019). Tri Dao. 2023. FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning. ArXiv, abs/2307.08691. Tri Dao, Daniel Y. Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. 2022. FlashAttention: Fast and Memory-Efficient Exact Attention with IOAwareness. In Annual Conference on Neural Information Processing Systems (NeurIPS 2022). Sumanth Doddapaneni, Krishna Sayana, Ambarish Jash, Sukhdeep S. Sodhi, and Dima Kuzmin. 2024. User Embedding Model for Personalized Language Prompting. ArXiv, abs/2401.04858. Luyu Gao and Jamie Callan. 2022. Unsupervised Corpus Aware Language Model Pre-training for Dense Passage Retrieval. In Annual Meeting of the Association for Computational Linguistics (ACL 2022). Luyu Gao, Yunyi Zhang, Jiawei Han, and Jamie Callan. 2021. Scaling Deep Contrastive Learning Batch Size under Memory Limited Setup. In Workshop on Representation Learning for NLP. Gautier Izacard, Mathilde Caron, Lucas Hosseini, Sebastian Riedel, Piotr Bojanowski, Armand Joulin, and Edouard Grave. 2022. Unsupervised Dense Information Retrieval with Contrastive Learning. Transactions on Machine Learning Research. Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick S. H. Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense Passage Retrieval for Open-Domain Question Answering. In Conference on Empirical Methods in Natural Language Processing (EMNLP 2020). Mahnaz Koupaee and William Yang Wang. 2018. WikiHow: A Large Scale Text Summarization Dataset. ArXiv, abs/1810.09305. Dawn J. Lawrie, Sean MacAvaney, James Mayfield, Paul McNamee, Douglas W. Oard, Luca Soldaini, and Eugene Yang. 2024. Overview of the TREC 2023 NeuCLIR Track. ArXiv, abs/2404.08071. Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent Retrieval for Weakly Supervised Open Domain Question Answering. In Annual Meeting of the Association for Computational Linguistics (ACL 2019). Canjia Li, Andrew Yates, Sean MacAvaney, Ben He, and Yingfei Sun. 2020. PARADE: passage representation aggregation for document reranking. ArXiv, abs/2008.09093. Chaofan Li, Zheng Liu, Shitao Xiao, and Yingxia Shao. 2023. Making Large Language Models A Better Foundation For Dense Retrieval. ArXiv, abs/2312.15503. Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, and Percy Liang. 2023. Lost in the Middle: How Language Models Use Long Contexts. ArXiv, abs/2307.03172. Shuqi Lu, Di He, Chenyan Xiong, Guolin Ke, Waleed Malik, Zhicheng Dou, Paul Bennett, Tie-Yan Liu, and Arnold Overwijk. 2021. Less is More: Pretrain a Strong Siamese Encoder for Dense Text Retrieval Using a Weak Decode. In Conference on Empirical Methods in Natural Language Processing (EMNLP 2021). Guangyuan Ma, Xing Wu, Zijia Lin, and Songlin Hu. 2024. Drop your Decoder: Pre-training with Bag-ofWord Prediction for Dense Passage Retrieval. ArXiv, abs/2401.11248. Xinyu Ma, Jiafeng Guo, Ruqing Zhang, Yixing Fan, and Xueqi Cheng. 2022. Pre-train a Discriminative Text Encoder for Dense Retrieval via Contrastive Span Prediction. In International Conference on Research and Development in Information Retrieval (SIGIR 2022). Xueguang Ma, Liang Wang, Nan Yang, Furu Wei, and Jimmy Lin. 2023. Fine-Tuning LLaMA for MultiStage Text Retrieval. ArXiv, abs/2310.08319. Jesse Mu, Xiang Lisa Li, and Noah D. Goodman. 2023. Learning to Compress Prompts with Gist Tokens. ArXiv, abs/2304.08467. Niklas Muennighoff, Nouamane Tazi, Loïc Magne, and Nils Reimers. 2023. MTEB: massive text embedding benchmark. In Conference of the European Chapter of the Association for Computational Linguistics (EACL 2023). Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. MS MARCO: A human generated machine reading comprehension dataset. In Workshop on Cognitive Computation: Integrating Neural and Symbolic Approaches. Jianmo Ni, Gustavo Hernández Ábrego, Noah Constant, Ji Ma, Keith B. Hall, Daniel Cer, and Yinfei Yang. 2022. Sentence-T5: Scalable Sentence Encoders from Pre-trained Text-to-Text Models. In Findings of the Association for Computational Linguistics (ACL 2022). Bowen Peng, Jeffrey Quesnelle, Honglu Fan, and Enrico Shippole. 2024. YaRN: Efficient Context Window Extension of Large Language Models. In International Conference on Learning Representations (ICLR 2024). 375 Ofir Press, Noah A. Smith, and Mike Lewis. 2022. Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation. In International Conference on Learning Representations (ICLR 2022). Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. Journal of Machine Learning Research. Jon Saad-Falcon, Daniel Y. Fu, Simran Arora, Neel Guha, and Christopher Ré. 2024. Benchmarking and Building Long-Context Retrieval Models with LoCo and M2-BERT. ArXiv, abs/2402.07440. Jianlin Su, Murtadha H. M. Ahmed, Yu Lu, Shengfeng Pan, Wen Bo, and Yunfeng Liu. 2024. RoFormer: Enhanced transformer with Rotary Position Embedding. Neurocomputing. Nandan Thakur, Nils Reimers, Andreas Rücklé, Abhishek Srivastava, and Iryna Gurevych. 2021. BEIR: A heterogeneous benchmark for zero-shot evaluation of information retrieval models. In Annual Conference on Neural Information Processing Systems (NeurIPS 2021). Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurélien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023. LLaMA: Open and Efficient Foundation Language Models. ArXiv, abs/2302.13971. Kexin Wang, Nils Reimers, and Iryna Gurevych. 2023. DAPR: A benchmark on document-aware passage retrieval. ArXiv, abs/2305.13915. Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, and Furu Wei. 2022. Text Embeddings by Weakly-Supervised Contrastive Pre-training. ArXiv, abs/2212.03533. Shitao Xiao, Zheng Liu, Yingxia Shao, and Zhao Cao. 2022. RetroMAE: Pre-Training Retrieval-oriented Language Models Via Masked Auto-Encoder. In Conference on Empirical Methods in Natural Language Processing (EMNLP 2022). Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul N. Bennett, Junaid Ahmed, and Arnold Overwijk. 2021. Approximate Nearest Neighbor Negative Contrastive Learning for Dense Text Retrieval. In International Conference on Learning Representations (ICLR 2021). Wenhan Xiong, Jingyu Liu, Igor Molybog, Hejia Zhang, Prajjwal Bhargava, Rui Hou, Louis Martin, Rashi Rungta, Karthik Abinav Sankararaman, Barlas Oguz, Madian Khabsa, Han Fang, Yashar Mehdad, Sharan Narang, Kshitiz Malik, Angela Fan, Shruti Bhosale, Sergey Edunov, Mike Lewis, Sinong Wang, and Hao Ma. 2023. Effective Long-Context Scaling of Foundation Models. ArXiv, abs/2309.16039. Junhan Yang, Zheng Liu, Chaozhuo Li, Guangzhong Sun, and Xing Xie. 2023. Longtriever: a Pre-trained Long Text Encoder for Dense Document Retrieval. In Conference on Empirical Methods in Natural Language Processing (EMNLP 2023). Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontañón, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, and Amr Ahmed. 2020. Big Bird: Transformers for Longer Sequences. In Annual Conference on Neural Information Processing Systems (NeurIPS 2020). Jingtao Zhan, Jiaxin Mao, Yiqun Liu, Jiafeng Guo, Min Zhang, and Shaoping Ma. 2021. Optimizing Dense Retrieval Model Training with Hard Negatives. In International Conference on Research and Development in Information Retrieval (SIGIR 2021). Xin Zhang, Zehan Li, Yanzhao Zhang, Dingkun Long, Pengjun Xie, Meishan Zhang, and Min Zhang. 2023. Language Models are Universal Embedders. ArXiv, abs/2310.08232. A Training Details This appendix starts by detailing the training setup used in our experiments, and it then presents experimental results that further assess the impact of the different training stages. A.1 Hyperparameters The following subsections detail the hyperparameters used for model training. If a certain element is not stated, the default value from the HuggingFace Trainer API was used. All models were trained in the same computational infrastructure with 4 NVIDIA A100 40GB GPUs. A.1.1 Span Corruption Pre-training Optimizer AdamW Initial learning rate 1e-5 Scheduler Cosine Batch size 80 Gradient accumulation 16 Gradient clipping 1 Weight decay 0 Total steps 49152 Warm-up steps 10% Table 2: Set of hyperparameters considered for spancorruption pre-training. 376 A.1.2 Contrastive Pre-training Optimizer AdamW Initial learning rate 5e-6 Scheduler Linear Batch size 128 Gradient accumulation 1 Gradient cache chunk size 24 Hard negatives per query 0 Epochs 1 Table 3: Set of hyperparameters considered for contrastive pre-training. A.1.3 Fine-tuning Optimizer AdamW Initial learning rate 5e-6 Scheduler Linear Batch size 128 Gradient accumulation 1 Gradient cache chunk size 24 Hard negatives per query 9 Epochs 8 Table 4: Set of hyperparameters considered for final model fine-tuning. A.2 Impact of Each Training Step Table 5 aligns our training pipeline with previous work, showing the importance of the pre-training tasks, and the benefits of multiple fine-tuning steps with negative refreshing. Note that the performance without any pre-training is particularly low since the model had no previous exposure to the new rotary embeddings. LM Pre-training Contrastive Pre-training Fine-tuning MRR R@100 ✗ ✗ 1 episode 0.177 0.632 ✓ ✗ 1 episode 0.350 0.872 ✓ ✓ 1 episode 0.372 0.889 ✓ ✓ 4 episodes 0.414 0.915 Table 5: Performance on MS-MARCO for different combinations of pre-training tasks, and after fine-tuning. 377
2024_short
35
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 378–385 August 11-16, 2024 ©2024 Association for Computational Linguistics That’s Optional: A Contemporary Exploration of "that" Omission in English Subordinate Clauses Ella Rabinovich The Academic College of Tel Aviv-Yaffo, Israel [email protected] Abstract The Uniform Information Density (UID) hypothesis posits that speakers optimize the communicative properties of their utterances by avoiding spikes in information, thereby maintaining a relatively uniform information profile over time. This paper investigates the impact of UID principles on syntactic reduction, specifically focusing on the optional omission of the connector "that" in English subordinate clauses. Building upon previous research, we extend our investigation to a larger corpus of written English, utilize contemporary large language models (LLMs) and extend the informationuniformity principles by the notion of entropy, to estimate the UID manifestations in the usecase of syntactic reduction choices. 1 Introduction Exploiting the expressive richness of languages, speakers often convey the same messages in multiple ways. A body of research on uniform information density (UID) puts forward the hypothesis that speakers tend to optimize the communicative effectiveness of their utterances when faced with multiple options for structuring a message. The UID hypothesis (Frank and Jaeger, 2008; Collins, 2014; Hahn et al., 2020) suggests that speakers tend to spread information evenly throughout an utterance, avoiding large fluctuations in the perunit information content of an utterance, thereby decreasing the processing load on the listener. The UID hypothesis has been used as an explanatory principle for phonetic duration (Bell et al., 2003; Aylett and Turk, 2006), the choice between short- and long-form of words that can be used interchangeably, such as "info" and "information" (Mahowald et al., 2013), and word order patterns (Genzel and Charniak, 2002; Maurits et al., 2010; Meister et al., 2021; Clark et al., 2023). Our work studies how UID principles affect the phenomenon of syntactic reduction – the situation where a speaker has the choice of whether marking a subordinate clause in sentence with an optional subordinate conjunction (SCONJ) "that" or leave it unmarked, as in "My daughter mentioned [that] he looked good". The only study that tested the UID hypothesis computationally in the context of syntactic reduction is Levy and Jaeger (2006), followed by Jaeger (2010), who studied the effect of multiple factors on the speaker choice of explicit or implicit "that" conjunction. Investigating sentences with main clause (MC, e.g., "My daughter mentioned") and subordinate clause (SC, e.g., "[that] he looked good"), connected by the optional SCONJ, the authors found that UID optimization was the most prominent factor affecting a speaker choice of "that" omission. Specifically, Jaeger (2010) investigated 6700 sentences extracted from the SwitchBoard spoken English dataset, and operationalized the UID principle by computing the surprisal (non-predictability) of the SC opening word (SC onset) using a statistical bigram language model computed from the corpus itself. Our work studies the role of UID principle in syntactic reduction in multiple differing ways. First, we extend the investigation to a much larger corpus of informal written English collected from social media. Second, we use contemporary large language models (LLMs) to estimate the operationalizations of information uniformity in syntactic reduction, suggesting the robustness of our findings. Finally, inspired by the information-theoretic nature of UID and prior art (Maurits et al., 2010; Meister et al., 2021), we extend the SC onset surprisal UID manifestation with the notion of SC onset entropy – the information entropy of LLM distribution over SC opening word, conditioned on the main clause – factor that turns out to have a complementary and significant effect. The contribution of this work is, therefore, twofold: First, we collect and release a large and diverse corpus of nearly 100K sentences, where 378 main and subordinate clauses are connected by the optional SCONJ "that".1 Second, we go above and beyond prior work by using transformer-based LLMs (Vaswani et al., 2017), thereby providing a sound empirical evidence for UID principles associated with syntactic reduction decision, shedding a new and interesting light on the manifestation of UID in spontaneous written language. 2 Dataset 2.1 Data Collection Our dataset in this work was collected from the Reddit discussions platform. Reddit is an online community-driven platform consisting of numerous forums for news aggregation, content rating, and discussions. Communication on discussion platforms often resembles a hybrid between speech and more formal writing, and findings from spoken language may extend to the spontaneous and informal style of social media. As such, Reddit data has been shown to exhibit code-switching patterns, similar to those found in spoken language (Rabinovich et al., 2019). We, therefore, believe that this data presents a good testbed for our analysis. Data Extraction We collected 2M posts and comments by over 20K distinct redditors spanning over 5K topical threads and years 2020–2022. We then split the data into sentences and filtered out sentences shorter than five or longer than 50 words. The remaining 487,614 sentences were parsed using the SOTA benepar syntactic parser, extracting two sentence types with main and subordinate clause, possibly connected by "that": (1) Explicit usage, as in "do you agree that his suggestion sounds better?" More specifically, we identified sentences where SCONJ "that" immediately follows the main verb, as with the main verb "agree" in the example above. A set of rules was devised for identifying relevant sentences, filtering out cases where "that" was used in roles other than SCONJ, such as demonstrative determiner ("I have never been to that part of the city"), demonstrative pronoun ("that is a beautiful view"), or relative pronoun, ("Ann is on the team that lost.").2 (2) Implicit usage, as in "my brother thinks [that] partners should always choose the former alternative", where SCONJ "that" could have been used 1All data and code are available at https://github.com/ ellarabi/uid-that-sc-omission. 2Due to its much lower frequency, we leave the investigation of "that" as a relative conjunction to future work. but was deliberately omitted. The set of rules used for identifying these sentences is identical to the rules used for detection of explicit usages, except that we required the absence of "that" in the appropriate syntactic role. Appendix A.1 provides details on syntactic analysis and rules used to extract relevant sentences. Table 1 reports the details of the collected dataset. type sentences mean sent. len explicit "that" SCONJ 40,786 21.85 implicit "that" SCONJ 57,845 18.07 other "that" usages 51,802 19.57 Table 1: Dataset details: out of over 487K sentences, almost 150K contain "that" in various syntactic roles. Note the slightly higher mean sentence length in sentences with explicit "that" SCONJ compared to implicit. We return to this observation in Section 3. Evaluation A random subset of 500 sentences split equally between explicit and implicit "that" usages was selected for manual evaluation by one of the authors of this paper. The evaluator was guided to check whether omitting "that" in explicit SCONJ cases would result in equally valid, meaning-preserving utterance, and vise versa – whether adding explicit "that" in places it was omitted, would not hurt the sentence fluency and semantics. 96.4% of the first sentence set were found valid, and 95.7% of the second sentence set. Invalid cases include mainly ungrammatical utterances and sentences in languages other than English. 2.2 Data Analysis We next tokenized and lemmatized the sentences using the the spacy python package. Table 2 presents example sentences, taken verbatim from our dataset, with explicit and implicit usages of "that" conjunction. Note that sentences with the same verb lemma (e.g., "forget") show syntactic reduction in some cases but not in others. Studying "that" omission in native and learner English, Olohan and Baker (2000) found that the optional usage of "that" conjunction typically follows reporting main verbs – such as "say", "think", "suggest". Our data largely supports this observation: while the total of 434 distinct main verb lemmas were found to precede the optional "that", roughly two thirds (64.7%) of all usages (or potential usages – omissions) are covered by the top-10 most frequent lemmas in the dataset. Additionally, different verbs exhibit different distribution of ex379 explicit sentence ✓ so the people of such places are easily fooled by the extremists and think that polio vaccine is dangerous ✗ Well, I initially thought [that] it seemed somewhat credible with a large volume of sources, and while ... ✓ Have you forgotten that republicans openly admitted that their #1 priority was giving him a fight ... ? ✗ Christ, I keep forgetting [that] you guys don’t have the right to speak broadly of revolution. Table 2: Example sentences from the dataset with two verb lemmas – "think" and "forget", with explicit and implicit (in square brackets) "that" usage. The main verb is in italic and (explicit or implicit) SCONJ appears in blue. plicit and implicit usages: while "that" is omitted in the majority of cases following lemmas "think" and "guess", other lemmas, like "say", "know", "believe", and "realize" show more balanced behavior. Figure 1 presents the relative frequency of the top-10 most common lemmas in the dataset (bar height), and the split between explicit and implicit "that" SCONJ usages immediately following those main verbs. In particular, the findings in Figure 1 imply that the lemma alone does not carry sufficient predictive power about the potential syntactic reduction in subsequent subordinate clause. Figure 1: Top-10 most frequent lemmas in the data; a bar height denotes the relative ratio out of the total, and each bar is split by the relative usage of explicit and implicit "that" SCONJ. Sentences with the top-10 lemmas account for 64.7% of all sentences in the dataset. 3 Methodology We define a set of factors that we were found to affect syntactic reduction choices (Levy and Jaeger, 2006; Jaeger, 2010), and further study the magnitude of their predictive power by casting the usecase as a classification scenario. We harness the power of contemporary LLMs for reliable computation of SC onset surprisal, as well as for computation of its complementary predictor: SC onset entropy. We define the following predictors: Main clause (MC) length Previous work suggested that the conjunction is likely to be spelled out explicitly in longer sentences; in particular after a longer main clause. This predictor is computed by the number of tokens preceding the (explicit or implicit) SCONJ. As an example, in the sentence "Do you realize [that] I’ve never actually seen him at the office?", MC length will be assigned 3. Subordinate clause (SC) length Similar intuition suggests that the length of a subordinate clause (and more generally, the rest of the sentence) can be used as another predictor. In the example sentence above, SC length will be assigned 9. Main verb frequency Jaeger (2010) found negative correlation between the main clause verb frequency and the tendency to spell out "that" SCONJ. We compute the frequency of main verbs in all sentences as their relative count in the entire corpus of over 480K sentences (see Section 2). SC subject distance This predictor is defined as the number of words at the SC onset up to and including the SC subject. Multiple studies found positive correlation of this factor with the tendency to spell out SCONJ (Hawkins, 2001, 2004; Jaeger, 2010). We extract the SC subject using the nsubj annotation assigned by spacy’s dependency parser to the subordinate clause subject. SC onset information density (ID) Levy and Jaeger (2006) and Jaeger (2010) computed this factor by using the simplest possible estimation, where the information of the SC onset is only conditioned on the main verb, and is operationalized by the notion of surprisal: –log p(SC onset | main verb). All counts (and probabilities) were calculated from the dataset at hand. Harnessing the power of modern pretrained LLMs, we define this predictor as the probability of SC onset, conditioned on entire main clause, namely –log p(SC onset | MC). Notably, Levy and Jaeger (2006) trained the bigram model in a controlled setting where all "that" conjunctions had been omitted. Without this control, results may be circular, e.g., in cases where "that" is explicitly spelled out, the computation –log p(SC onset | MC) could be 380 self-evident because "that" is normally inserted between MC and SC onset (recall that SC onset denotes the opening word of the subordinate clause, "that" excluded). Since training a language model from scratch on corpora with omitted SCs is often impractical, we marginalize out the presence of "that", re-defining the SC onset surprisal to be: –log  p(SC onset | MC) + p(SC onset | MC ◦"that")  This refined definition of SC onset surprisal eliminates the need to re-train a language model on a corpus where the SC "that" had been omitted. SC onset entropy We argue that the information density of the subordinate clause onset can be extended by the complementary notion of entropy – the expected value of the surprisal across all possible SC onsets: 𝐻(𝑝)=−Σ𝑖𝑝𝑖∗log(𝑝𝑖); for a given main clause MC, 𝑝𝑖=𝑝(𝑤𝑖|𝑀𝐶), where 𝑤𝑖is the 𝑖𝑡ℎword in the model’s vocabulary 𝒱. For a certain sentence prefix, entropy calculation involves the computation of the probability distribution over the model’s vocabulary 𝒱for next word prediction. While the computation is practically impossible with a small corpus and an N-gram LM, this information is easily obtainable from pretrained LLMs. Although conceptually related, SC onset entropy and SC onset surprisal were found to be uncorrelated in our dataset: Pearson’s 𝑟of -0.02 was found between these two predictors. Other predictors Among additional factors investigated in prior studies are (1) SC onset frequency, (2) SC subject frequency, (3) the distance of the main verb from the SC onset, and (4) SC ambiguity ("garden path"). The first two factors were found to moderately correlate with SC onset surprisal (Pearson’s 𝑟=-0.57) in our experiments, and hence omitted from the predictor set – not a surprising finding given that in 84.5% of cases SC onset is also the SC subject. The third predictor turns irrelevant in our experimental setup, where SC immediately follows the main verb. Finally, and most notably, Jaeger (2010) manually annotated their sentence set for SC ambiguity ("garden path"), and found this factor non-predictive of "that" omission; we, therefore, refrain from using this predictor here due to the manual effort required for "garden path" annotation in our ample data. 4 Experimental Results and Discussion Experimental Setup We use the OPT-125m autoregressive pretrained transformer model (Zhang et al., 2022), roughly matching the performance and sizes of the GPT-3 class of models, for computation of SC onset surprisal and entropy. Given a sentence prefix, we first extract next token logits and convert them to a probability distribution over the lexicon by applying the softmax function. SC onset surprisal was computed by applying the natural log on the SC onset token probability given the relevant sentence prefix. SC onset entropy was computed by applying the entropy equation (see Section 3) on the outcome probability distribution.3 Estimating the contextual surprisal (or entropy) per word with decoder LLMs operating at the subword level is hard; we, therefore, approximate these metrics by computing the surprisal (or entropy) over the subwords. Pimentel et al. (2023) show that this is practically equivalent to computing a lower bound on the true contextual measurements. Finally, logistic regression is used as a predictive model due to its effectiveness and intrepretability. Experimental Results Our main results are presented in Table 3. We report two scenarios: (1) all main verb lemmas preceding the SC are considered, and (2) only sentences with the most-frequent "think" main verb lemma are considered. Using these two different experimental setups, we test whether observations evident for the full set of main verbs, also emerge in a single main verb scenario. All predictors are standard-scaled for comparative analysis. The effectiveness of our predictors is supported by the considerable (in particular, much higher than chance) classification accuracy in both cases: 0.63 when using all main verbs, and 0.88 when using the "think" verb lemma only. Analysis and Discussion Several observations emerge from the table: inline with prior studies, sentence length – manifested in both MC and SC – has significant positive effect on the explicit usage of "that" connecting the two clauses. One of the highest (absolute value) coefficients is assigned to SC onset surprisal, confirming the findings by Jaeger (2010). The UID hypothesis is further strengthened by the high (the highest in the all 3Experiments with larger OPT models and decoder models from additional model families resulted in similar findings, while less efficient (higher latency). We, therefore, adhere to our choice of advanced, yet relatively small, model. 381 predictor all MC main verb lemmas "think" MC main verb lemma 𝛽 [0.025 0.975] pval sig. 𝛽 [0.025 0.975] pval sig. const -0.383 -0.41 -0.35 *** -2.159 -2.25 -2.07 *** MC length (tokens) 0.302 0.28 0.33 *** 0.242 0.17 0.32 *** MC verb frequency -0.043 -0.07 -0.02 ** — — — — SC length (tokens) 0.197 0.17 0.22 *** 0.196 0.12 0.27 *** SC subject distance 0.036 0.01 0.06 ** 0.031 -0.03 0.09 SC onset surprisal 0.301 0.27 0.32 *** 0.458 0.38 0.54 *** SC onset entropy 0.432 0.41 0.46 *** 0.232 0.15 0.32 *** Table 3: Logistic regression summary. 𝛽coefficients of the scaled features mirror the sign and the relative predictor importance. 95% CIs and p-values are reported, where "***" denotes 𝑝𝑣𝑎𝑙<0.001 and "**" denotes 𝑝𝑣𝑎𝑙<0.01. The MC verb frequency predictor is irrelevant in the single-main-verb-lemma experimental scenario. Figure 2: Kernel density estimation plots: SC onset surprisal for explicit and implicit "that" usages, using the full lemma set (A) and the "think" lemma (B). SC onset entropy for explicit and implicit "that" usages, for the full lemma set (C) and "think" main verb lemma only (D). MC verb lemmas case) coefficient assigned to SC onset entropy; that is, SC onset (non-)predictability can be viewed in a more holistic manner, where both the low predictability of the specific SC onset and the high entropy of the potential sentence continuation, carry over complementary and uncorrelated predictive power on syntactic reduction decision. The overall picture remains consistent in the scenario where the single lemma "think" is considered (albeit SC subject distance shows insignificant), implying the robustness of our findings. Our main findings are further strengthened by the illustration in Figure 2. Kernel density estimation of SC onset surprisal with explicit "that" usages is shifted to the right (A), reflecting the lower predictability of SC onset in this cases compared to those where "that" was omitted. This observation stays sound when only "think" main verb is considered for experiments (B). Sub-figures C and D depict the complementary entropy plots – higher SC onset entropy in explicit "that" usages is mirrored by the right shift of the red line in both full main verb set and "think"-only cases. The definition of surprisal inherently implies the correlation of SC onset surprisal with its frequency. Indeed, these two factors exhibit moderate negative correlation for both all lemma set and "think" lemma only (Pearson’s 𝑟of -0.57 and -0.47, respectively). Replacing SC onset surprisal with its frequency resulted in a slightly weaker regression model in our case, suggesting that surprisal introduces additional predictive power beyond frequency. While surprisal and frequency are highly correlated, they are typically associated with different psycholinguistic behaviours, and we leave a more thorough investigation for future work. 5 Conclusions We study the UID hypothesis manifestation in syntactic reduction using a large, diverse and carefully compiled corpus of English sentences with explicit or implicit "that" subordinate conjunction. Harnessing the power of contemporary pretrained LLMs, we show that SC onset surprisal and entropy are the main factors affecting a speaker’s choice to spell out the optional conjunction "that". Last but not least, a large body of linguistic literature has studied the conditions under which complementizers (like "that" subordinate conjunction) can or cannot be omitted (inter alia Erteschik-Shir (1997); Ambridge and Goldberg (2008)). We believe that future work in this field should better engage with this literature, incorporating insights for more linguistically-informed approach to the task of syntactic reduction analysis. 382 6 Ethical Considerations We use publicly available data to study the manifestation of UID in syntactic reduction. The use of publicly available data from social media platforms, such as Reddit, may raise normative and ethical concerns. These concerns are extensively studied by the research community as reported in e.g., Proferes et al. (2021). Here we address two main concerns. (1) Anonymity: Data used for this research can only be associated with participants’ user IDs, which, in turn, cannot be linked to any identifiable information, or used to infer any personal or demographic trait. (2) Consent: Jagfeld et al. (2021) debated the need to obtain informed consent for using social media data mainly because it is not straightforward to determine if posts pertain to a public or private context. Ethical guidelines for social media research (Benton et al., 2017) and practice in comparable research projects (Ahmed et al., 2017), as well as Reddit’s terms of use, regard it as acceptable to waive explicit consent if users’ anonymity is protected. We did not make use of AI-assisted technologies while writing this paper. We also did not hire human annotators at any stage of the research. 7 Limitations We believe that the main limitation of this work is the relatively restrictive experimental setup of sentences used to study UID principles in syntactis reduction. As an example, additional syntactic setting of interest includes sentences where "that" is used as a relative conjunction, as in "the book [that] I read last week made me quite sad...". Due to its much lower frequency in our data, we leave the investigation of "that" omission before a relative clause to future work. The current study also limits its set of main clauses to those where the SCONJ immediately follows MC verb, not considering cases like "My boyfriend has mentioned several times [that] we should approach this guy with the offer", where the main verb "mentioned" is separated from the SC onset "we" by the "several times" phrase. However, we have reasons to believe that similar findings would be evident in these scenarios, and plan to extend the research to those cases as well. Acknowledgements We are grateful to Shuly Wintner for much advice during the early stages of this work. We are also thankful to Alon Rabinovich for his help with the annotation effort for this study. References Wasim Ahmed, Peter A Bath, and Gianluca Demartini. 2017. Using twitter as a data source: An overview of ethical, legal, and methodological challenges. The ethics of online research, 2:79–107. Ben Ambridge and Adele E Goldberg. 2008. The island status of clausal complements: Evidence in favor of an information structure explanation. Matthew Aylett and Alice Turk. 2006. Language redundancy predicts syllabic duration and the spectral characteristics of vocalic syllable nuclei. The Journal of the Acoustical Society of America, 119(5). Alan Bell, Daniel Jurafsky, Eric Fosler-Lussier, Cynthia Girand, Michelle Gregory, and Daniel Gildea. 2003. Effects of disfluencies, predictability, and utterance position on word form variation in english conversation. The Journal of the acoustical society of America, 113(2):1001–1024. Adrian Benton, Glen Coppersmith, and Mark Dredze. 2017. Ethical research protocols for social media health research. In Proceedings of the first ACL workshop on ethics in natural language processing, pages 94–102. Thomas Hikaru Clark, Clara Meister, Tiago Pimentel, Michael Hahn, Ryan Cotterell, Richard Futrell, and Roger Levy. 2023. A Cross-Linguistic Pressure for Uniform Information Density in Word Order. Transactions of the Association for Computational Linguistics, 11:1048–1065. Michael Xavier Collins. 2014. Information density and dependency length as complementary cognitive models. Journal of Psycholinguistic Research, 43. Nomi Erteschik-Shir. 1997. The dynamics of focus structure. Cambridge University Press. Austin F Frank and T Florain Jaeger. 2008. Speaking rationally: Uniform information density as an optimal strategy for language production. In Proceedings of the annual meeting of the cognitive science society, volume 30. Dmitriy Genzel and Eugene Charniak. 2002. Entropy rate constancy in text. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, pages 199–206. Michael Hahn, Dan Jurafsky, and Richard Futrell. 2020. Universals of word order reflect optimization of grammars for efficient communication. Proceedings of the National Academy of Sciences, 117(5):2347–2353. John A Hawkins. 2001. Why are categories adjacent? Journal of linguistics, 37(1):1–34. 383 John A Hawkins. 2004. Efficiency and complexity in grammars. OUP Oxford. T Florian Jaeger. 2010. Redundancy and reduction: Speakers manage syntactic information density. Cognitive psychology, 61(1):23–62. Glorianna Jagfeld, Fiona Lobban, Paul Rayson, and Steven H Jones. 2021. Understanding who uses reddit: Profiling individuals with a selfreported bipolar disorder diagnosis. arXiv preprint arXiv:2104.11612. Roger Levy and T Jaeger. 2006. Speakers optimize information density through syntactic reduction. Advances in neural information processing systems, 19. Kyle Mahowald, Evelina Fedorenko, Steven T Piantadosi, and Edward Gibson. 2013. Info/information theory: Speakers choose shorter words in predictive contexts. Cognition, 126(2):313–318. Luke Maurits, Dan Navarro, and Amy Perfors. 2010. Why are some word orders more common than others? a uniform information density account. Advances in neural information processing systems, 23. Clara Meister, Tiago Pimentel, Patrick Haller, Lena Jäger, Ryan Cotterell, and Roger Levy. 2021. Revisiting the uniform information density hypothesis. arXiv preprint arXiv:2109.11635. Maeve Olohan and Mona Baker. 2000. Reporting that in translated english. evidence for subconscious processes of explicitation? Across languages and cultures, 1(2):141–158. Tiago Pimentel, Clara Meister, Ethan G Wilcox, Roger P Levy, and Ryan Cotterell. 2023. On the effect of anticipation on reading times. Transactions of the Association for Computational Linguistics, 11:1624– 1642. Nicholas Proferes, Naiyan Jones, Sarah Gilbert, Casey Fiesler, and Michael Zimmer. 2021. Studying reddit: A systematic overview of disciplines, approaches, methods, and ethics. Social Media+ Society, 7(2):20563051211019004. Ella Rabinovich, Masih Sultani, and Suzanne Stevenson. 2019. Codeswitch-reddit: Exploration of written multilingual discourse in online discussion forums. arXiv preprint arXiv:1908.11841. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30. Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, Todor Mihaylov, Myle Ott, Sam Shleifer, Kurt Shuster, Daniel Simig, Punit Singh Koura, Anjali Sridhar, Tianlu Wang, and Luke Zettlemoyer. 2022. Opt: Open pretrained transformer language models. 384 A Appendices A.1 Identification of Sentences with Optional "that" Subordinate Conjunction Figures 3 and 4 depict two parsing trees of sentences with explicit and implicit usage of "that" SCONJ, respectively. After parsing a sentence, a set of rules was applied for identification of cases where "that" is used (or could have been used) in the role of subordinate conjunction connecting main and subordinate clause. As mentioned in Section 2, the extraction process was tuned for accurate (over 95%) performance. Figure 3: Constituency parse tree of the sentence "He’s smart enough to know that you are a good catch.". Note the main verb "know" followed by the explicit SCONJ "that" and subordinate clause "you are a good catch". Figure 4: Constituency parse tree of the sentence "yeah, that, and I think they got a lower rent price compared to the renewal downtown". Note the main verb "think" followed by the omitted SCONJ "that" and subordinate clause "they got a lower rent price compared to the renewal downtown". 385
2024_short
36
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 386–397 August 11-16, 2024 ©2024 Association for Computational Linguistics Do Large Language Models Discriminate in Hiring Decisions on the Basis of Race, Ethnicity, and Gender? Haozhe An1 Christabel Acquaye1 Colin Kai Wang2 Zongxia Li1 Rachel Rudinger1 1University of Maryland, College Park 2University of Texas at Austin {haozhe, cacquaye, zli12321, rudinger}@umd.edu, [email protected] Abstract We examine whether large language models (LLMs) exhibit race- and gender-based name discrimination in hiring decisions, similar to classic findings in the social sciences (Bertrand and Mullainathan, 2004). We design a series of templatic prompts to LLMs to write an email to a named job applicant informing them of a hiring decision. By manipulating the applicant’s first name, we measure the effect of perceived race, ethnicity, and gender on the probability that the LLM generates an acceptance or rejection email. We find that the hiring decisions of LLMs in many settings are more likely to favor White applicants over Hispanic applicants. In aggregate, the groups with the highest and lowest acceptance rates respectively are masculine White names and masculine Hispanic names. However, the comparative acceptance rates by group vary under different templatic settings, suggesting that LLMs’ raceand gender-sensitivity may be idiosyncratic and prompt-sensitive. 1 Introduction Field experiments in prior social science research (Bertrand and Mullainathan, 2004; Cotton et al., 2008; Kline et al., 2022) have demonstrated that Black- or White-sounding names play a nontrivial role in influencing the hiring decision of candidates with similar qualifications. Their results suggest that applicants with names perceived as African American encounter significantly fewer opportunities in comparison to their counterparts with names perceived as European American. Following the rapid advancement of large language models (LLMs; Touvron et al., 2023a,b; OpenAI, 2023), a number of studies have examined the ways in which LLMs exhibit human-like behaviors and cognitive biases (Aher et al., 2023; Dillion et al., 2023; Argyle et al., 2023). In this work, we pose the following question: When prompted to make hiring decisions, do LLMs exhibit discriminatory Figure 1: We study if LLMs exhibit labor market discrimination based on various first names used in the input prompts that ask a model to write an open-ended application outcome email. Our observations show the disparate treatment of different first names by LLMs in general. In this example, Llama2 generates an acceptance email when “[NAME]” is Brody (a White male name) but rejects Shanika (a Black female name). behaviors based on the race, ethnicity, and gender associated with a job applicant’s name? There are several reasons to study this question: (1) To contribute to scientific understanding of the internal, representational biases of LLMs, (2) to demonstrate the potential harms of using LLMs in real-world hiring decisions, and (3) as further validation of LLMs as a tool for social scientists to cheaply test hypotheses prior to conducting costly, real-world studies. The research question has implications for understanding both representational and allocational harms of LLMs (Barocas et al., 2017; Crawford, 2017; Blodgett et al., 2020). We design a series of prompts that ask an LLM to write an email (e.g., on behalf of a hiring manager) to inform a job applicant about the outcome of their hiring decision. In all settings, the prompt contains the instructions to the LLM and the first name of the applicant. We experiment with three additional variables: the job title (position sought), the candidate’s level of qualification, and template (para)phrasing. Crucially, all prompts do not spec386 Figure 2: Prompt construction. The Cartesian product of the three sets of elements in this figure gives rise to all our 820 templates used in the study. Both “[ROLE]” and “[NAME]” are placeholder tokens that are instantiated with some occupation and some first name, respectively, during the construction of a prompt. If a prompt contains the description of the candidate’s qualification, the sentence indicating the qualification is prepended to the base template. *When the role is not specified, the phrase “of [ROLE]” in gray is omitted. ify whether to accept or reject the applicant; thus, to fulfill the instructions, the model must choose. With large-scale analysis of these generations (over 2 million emails), we find that LLMs tend to favor White applicants in making hiring decisions. In contrast, models tend to disadvantage names associated with underrepresented groups. In particular, Hispanic names receive the least favorable treatment. While it is (hopefully) unlikely that employers would use LLMs in precisely this fashion, we believe that, by isolating the influence of names on hiring decisions, these experiments may serve as a “canary in the coalmine,” indicating the risk of possible fairness issues with the use of LLMs at other stages of the hiring pipeline, or in professional workplace settings more generally. 2 Experiment Setup To study the influence of race and gender on LLMs’ hiring decisions, we develop a set of prompt templates instructing models to write an email to a job applicant informing them of a hiring decision. Each template contains a “[NAME]” placeholder, which we substitute with first names statistically associated with a particular race or ethnicity, and gender in the United States. We then measure the average rate of acceptance within each demographic group and compare it to the average acceptance rate over all groups. This methodology of first-name substitution is well established in the social sciences and in NLP research for measuring biased or discriminatory behavior in humans or models (Greenwald et al., 1998; Bertrand and Mullainathan, 2004; Caliskan et al., 2017). Collecting first names We obtain 100 names that are most representative of each of the three races/ethnicities in our study (White, Black, and Hispanic), evenly distributed between two genders (female and male) by consulting Rosenman et al. (2023) for race/ethnicity data and the social security application dataset (SSA1) for gender statistics. As a result, we have 50 names in each intersectional demographic group and 300 names in total. Detailed name selection criteria and a complete list of first names are available in appendix A. Prompts We design 820 templates by enumerating all possible combinations of 4 qualification levels, 5 base templates, and 41 occupational roles, as shown in Fig. 2. To mitigate the model’s sensitivity to different template phrasings, we use ChatGPT 3.5 to paraphrase our initial template into four variations, resulting in five base templates. The 41 job roles include 40 occupations (38 are from WinoBias (Zhao et al., 2018) and we additionally include “CTO” and “software engineer” as they are frequently generated by Llama2 in our preliminary experiments) and 1 under-specified setting. We use under-specified inputs primarily to better isolate the influence of name demographics on hiring decisions. Including other applicant details (e.g., real-world or synthetic resumes) could confound the results or limit their generalizability, as it would introduce a large number of variables, making exhaustive and well-controlled experiments infeasible (Veldanda et al., 2023). Detailed information about template construction is illustrated in appendix B. Models We carry out our experiments using five state-of-the-art instruction-tuned generative LLMs: Mistral-Instruct-v0.1 (Jiang et al., 2023), Llama2 (Touvron et al., 2023b) with three different model sizes (7b, 13b, and 70b), and GPT-3.51https://www.ssa.gov/oact/babynames/ 387 7 Occupational Roles 41 Occupational Roles Mistral-7b Llama2-7b Llam2-13b Llama2-70b GPT-3.5 Mistral-7b Llama2-7b Llam2-13b White Female 52.61† 49.72 35.13† 26.59 27.23 54.88† 49.65 34.02† White Male 54.89† 49.70 34.69† 26.66 25.11† 57.16† 49.51 33.14† Black Female 55.36† 51.00† 33.15 28.06† 26.25 57.16† 50.70† 33.05∗ Black Male 53.89 49.99 33.42 27.23 25.29 55.90 49.45 32.46 Hispanic Female 55.03† 49.28† 32.65∗ 26.46 28.23† 56.99† 49.02 32.26 Hispanic Male 52.80† 48.56† 31.57† 26.95 24.45† 54.90† 47.36† 30.38† Max Difference 2.75 2.44 3.56 1.60 3.78 2.28 3.34 3.64 Average 54.10 49.71 33.43 26.99 26.09 56.16 49.28 32.55 Number of Emails 144000 144000 144000 48000 19200 756000 756000 756000 Table 1: Acceptance rate (%) in each model in our study. Notations: blue - significantly above average; red significantly below average; † indicates p < 0.01; ∗indicates p < 0.05 under the permutation test. Turbo (Ouyang et al., 2022). Model hyperparameters are detailed in appendix C.1. For open-source models, we execute the experiments with 3 different random seeds for reproducibility and report the average results. We note that due to limited computational resources, we run the experiments on a smaller scale for Llama2-70b and GPT-3.5Turbo, obtaining 756, 000 emails for Mistral-7b and Llama2-{7b, 13b}, 48, 000 emails for Llama270b, and 19, 200 emails for GPT-3.5. Generation validity We randomly sample 30 instances for each intersectional group and manually check the validity of the generated content in a total of 180 emails generated from Llama2-13b. An email is valid if it (1) follows a typical email communication format with fluent content and (2) clearly communicates the binary application outcome (accept or reject). By randomly sampling 180 emails per model (evenly distributed among gender and racial groups), we find that all models have high validity rates between 83% to 100% (Table 5 in appendix C.2). We also find that the validity rates for each intersectional group within a model have relatively small standard deviations (Table 6 in appendix C.2). Assuming a binomial distribution for valid email generations, we do not find statistically significant differences between any pair of groups within the same model setting (p > 0.05). These observations suggest that all intersectional groups have very similar validity rates. Email classification Our experiments require labeling over 2M emails as acceptances or rejections. To automate this, we train a support vector machine (SVM) model with TF-IDF features (Ramos et al., 2003) using 1, 200 manually annotated instances evenly distributed across gender and race/ethnicity. To further mitigate the risk of demographic bias in the classifier, applicant names are redacted during training and usage. The classifier achieves an F1 score of 0.98 on the 170 valid emails randomly sampled from Llama2-13b generations, showing that accept and reject emails are easy to distinguish. More details are described in appendix C.2. 3 Results and Discussion We examine the generated emails from a variety of LLMs and elaborate how they relate to known labor market discrimination. We present the acceptance rates for every intersectional group in different templatic settings and models in Table 1 to Table 3. To measure the statistical significance in the difference between the email outcome distributions, we conduct a permutation test between each group’s acceptance rate and the population acceptance rate. Details about the permutation test are elaborated in appendix C.3. Differences are small but statistically significant. We aggregate the acceptance over (1) a subset of 7 occupational roles2 and (2) all 41 occupational roles respectively for different models in Table 1. We observe that the absolute differences between the highest and lowest acceptance rate for different groups are generally small (between 1.60% and 3.78% across models). Despite the small magnitude, our permutation test testifies the statistical significance. A model that discriminates in a small 2The seven roles include the under-specified setting, software engineer, CTO, secretary, hairdresser, carpenter, and mechanician. We choose to experiment with these seven roles because of their strong gender association indicated in WinoBias (Zhao et al., 2018) or their frequent occurrence in Llama2 generations in our preliminary experiments. 388 Doctoral Master’s Bachelor’s High school Postsecondary No formal edu Mistral-7b White Female 61.37 56.73 55.57∗ 53.26† 56.82 53.71† White Male 62.15 58.27∗ 57.80∗ 55.28 58.44 56.65∗ Black Female 61.82 56.60 57.51 55.48 58.77∗ 56.76† Black Male 60.55 56.70 56.27 54.06 57.30 55.54 Hispanic Female 61.90 56.53 57.34 55.70∗ 58.69∗ 56.31 Hispanic Male 60.25∗ 55.50 55.07† 53.58∗ 55.85† 54.50∗ Population Avg 61.34 56.72 56.59 54.56 57.65 55.58 Llama2-7b White Female 48.82 51.37∗ 52.04 50.96 49.79 47.51 White Male 48.77 48.10 52.66 50.18 50.19 47.22 Black Female 48.83 51.77† 53.34† 51.57† 50.79† 48.76† Black Male 47.52 49.53 52.35 50.01 49.71 47.52 Hispanic Female 47.88 49.00 52.11 50.41 48.91 46.75 Hispanic Male 46.38† 46.87† 51.05† 48.47† 47.78† 44.69† Population Avg 48.03 49.44 52.26 50.27 49.53 47.07 Llama2-13b White Female 32.03† 37.47∗ 38.89† 34.86† 33.47† 31.20† White Male 30.28 35.13 37.81∗ 33.92 33.06∗ 30.52 Black Female 30.70 38.07† 37.12 34.10 32.45 30.43 Black Male 30.62 36.27 36.87 33.00 32.06 29.92 Hispanic Female 30.07 35.57 35.90∗ 33.66 31.99 29.76 Hispanic Male 27.82† 32.60† 34.58† 31.75† 29.76† 27.84† Population Avg 30.25 35.85 36.86 33.55 32.13 29.95 No. of Emails 36000 18000 180000 126000 90000 288000 Table 2: Acceptance rate (%) of each intersectional group in emails generated by three models across various minimum educational requirement for different occupational roles. but statistically significant manner can still be problematic. An absolute disadvantage of 3.78% based purely on the racial, ethnic, or gender associations of one’s name should be concerning, particularly as such differences, if systematic, can accumulate throughout a pipeline where a series of slightly discriminatory decisions are made (Alexander, 2011). Acceptance rates are uniformly lowest for Hispanic male names. Hispanic male applicants consistently receive the least favorable treatment in many settings across Mistral-7b (Tables 1, 2, 3), Llama2-{7b, 13b, 70b} (Tables 1, 2, 3), and GPT-3.5 (Table 1). Lower LLM-based acceptance rates for applicants with Hispanic names echoes prior findings of discrimination against Hispanic individuals in the labor market (Reimers, 1983; Chiswick, 1987; Cross, 1990; Kenney and Wissoker, 1994; Woods, 2000; Duncan et al., 2006). If deployed by employers for hiring decisions, LLMs could further entrench, systematize, and amplify hiring discrimination against Hispanic job applicants. Some groups exhibit higher acceptance rates. Table 1 shows that White male and Black female names receive above-average acceptance rates overall in two and three of five models tested, respectively. The trend that models often favor White male applicants reflects existing disparities in the U.S. labor market (Galgano, 2009; Ritter and Taylor, 2011; McDonald et al., 2014; Pedulla and Pager, 2019) and pose a risk of exacerbating them if LLMs are adopted for employment decisions. The not specified highly qualified somewhat qualified not qualified Mistral-7b White Female 77.30 98.47 42.90† 0.24 White Male 76.54 98.46 52.83† 0.27 Black Female 77.63∗ 99.00† 51.24 0.23 Black Male 75.57∗ 98.56 48.60 0.31 Hispanic Female 76.95 98.95† 51.13 0.30 Hispanic Male 75.49∗ 98.22† 45.11∗ 0.27 Population Avg 76.58 98.61 48.64 0.27 Llama2-7b White Female 52.14∗ 77.49 58.36 10.11 White Male 49.57† 78.15 59.25† 10.62∗ Black Female 54.64† 78.99∗ 58.60 10.30 Black Male 50.02∗ 78.74 58.64 10.05 Hispanic Female 52.44† 77.42 56.36† 9.81 Hispanic Male 47.47† 76.53† 55.66† 9.63 Population Avg 51.05 77.89 57.81 10.09 Llama2-13b White Female 33.02 62.72† 37.21† 3.17∗ White Male 30.62† 61.83 37.10† 3.19† Black Female 34.81† 61.02 33.10 2.95 Black Male 31.91 61.05 34.07 2.70 Hispanic Female 33.24∗ 60.44 32.74∗ 2.51† Hispanic Male 29.22† 58.40† 31.28† 2.61∗ Population Avg 32.14 60.91 34.25 2.86 No. of Emails 189000 189000 189000 189000 Table 3: Acceptance rate (%) of each intersectional group in emails generated by three models across different levels of qualifications stated in the prompts. results observed for Black female names are interesting as they run counter to the real-world resume study of Bertrand and Mullainathan (2004). However, when occupations are grouped by education level3 (Table 2), we observe that higher acceptance rates for Black female names on Mistral-7b only applies to occupations in the “no formal education” and “postsecondary non-degree award” categories. Llama2-70b shows least variation across demographic groups. Llama2-70b appears to exhibit the least variation in acceptance rates across groups (Table 1), with a range of 1.6% between the groups with the highest and lowest overall acceptance rates. By contrast, the corresponding ranges for Llama2-13b and GPT-3.5 are 3.56% and 3.78%, respectively. This observation may suggest that larger models could be more robust and fair in the task of generating hiring decision emails in an under-specified setting. However, it is inconclusive which exact factors contribute to the minimal variations in Llama2-70b because the model training details are not fully available to the public. Qualifications matter. In Table 3 we group results by stated qualification levels and observe a couple trends across models. When candidate qualification level is not specified, it appears that female names receive higher acceptance rates in general than male names; however, 3Data source: https://www.bls.gov/emp/tables/ education-and-training-by-occupation.htm 389 White Black Hispanic Female Male Female Male Female Male Acc. Rate (%) 25.75 21.50∗ 28.25 24.50 30.00 27.25 Table 4: Acceptance rate (%) of GPT-3.5-generated emails for the role of secretary across different intersectional groups. ∗White male candidates receive significantly lower acceptance rates for this role (p < 0.05). when candidates are described as only “somewhat qualified” or “not qualified,” White names, in particular White male names, appear most likely to receive acceptances. While our results do not offer an explanation for why these trends occur, we speculate that it could pertain to a (real or perceived) gender “confidence gap”: Partially-qualified female job seekers are less likely to apply for positions than their partially-qualified male counterparts due to lower confidence in their qualifications (Carlin et al., 2018; Sterling et al., 2020) Some models exhibit human-like gender-occupation stereotypes. We find that some models, in certain cases, exhibit human-like stereotypes when making hiring decisions for masculine or feminine job roles. For instance, Table 4 shows that, for secretary, which is a stereotypically feminine occupation (Zhao et al., 2018), GPT-3.5 generates a lower number of acceptance emails for male candidates compared to their female counterparts across racial and ethnic groups. While we observe this trend for some female- or male-dominated jobs, it may not be universally applicable to all occupational roles across models, suggesting that LLM’s gender-sensitivity may be idiosyncratic and prompt-dependent. 4 Related Work First names, demographic identities, and economic opportunities Researchers have been using first names that have strong correlation with some demographic attributes, such as gender, race/ethnicity, and age, to examine the problem of social bias in both social science studies and NLP systems (Greenwald et al., 1998; Nosek et al., 2002; Caliskan et al., 2017; An et al., 2022). Partially due to their association with demographic identities, first names often lead to inequitable distribution of economic opportunities as people build stereotypes in favor of or against names that reveal a person’s demographic identity (Bertrand and Mullainathan, 2004; Nunley et al., 2015; Goldstein and Stecklov, 2016; Ahmad, 2020). First name biases in language models While numerous recent works propose new benchmark datasets and algorithms to uncover social biases in language models (Rudinger et al., 2018; Zhao et al., 2018; Nangia et al., 2020; Nadeem et al., 2021; Parrish et al., 2022; Cheng et al., 2023; Hossain et al., 2023), some are particularly dedicated to the study of first name biases or artifacts in these models (Maudslay et al., 2019; Shwartz et al., 2020; Wolfe and Caliskan, 2021; Wang et al., 2022; Jeoung et al., 2023; Sandoval et al., 2023; Wan et al., 2023; An et al., 2023; An and Rudinger, 2023). We build upon previous research and examine the disparate treatment of names in email generation regarding job application outcomes. Auditing LLMs in hiring Several contemporaneous works (Tamkin et al., 2023; Haim et al., 2024; Gaebler et al., 2024) also examine whether LLMs treat individuals of various demographic backgrounds differently in decision-making. Most related to our paper, Veldanda et al. (2023) and Armstrong et al. (2024) generate synthetic resumes for a limited number of job categories (≤10) and uncover hiring bias either during generation or in downstream tasks (e.g., resume summarization and assessment) using a smaller set of names (≤32). In contrast, our work studies implicit hiring discrimination in LLMs by conducting large-scale experiments using 300 names and 41 occupational roles in under-specified inputs, without introducing other confounders from synthetic resumes. 5 Conclusion Through the use of 820 templates and 300 names, we generate as many as 756, 000 job application outcome notification emails per model that we use to measure LLMs’ discriminatory behavior in labor market decisions. Our analyses demonstrate the presence of such discrimination in some LLMs against some traditionally underrepresented groups, such as Hispanic, as their acceptance rates are systematically lower than the average in multiple cases. White applicants, however, are often portrayed in a more positive light with a higher chance of getting accepted. Our findings alert the community to be concerned about the implicit biases within the model as they could cause both representational and allocational harms to various demographic groups in downstream tasks. 390 Limitations Incomplete representation of demographic identities Due to the limited data availability of first names, we are only able to thoroughly study names representing three races/ethnicities (Black, White, and Hispanic) and two genders (female and male). Getting a large number of names from the underrepresented demographic groups is a common challenge in research on first name biases (An et al., 2023; An and Rudinger, 2023; Sandoval et al., 2023). In addition, it is essential to recognize that our diverse community encompasses numerous other racial, ethnic, and gender identities, not to mention various demographic attributes such as nationality, religion, disability, and many more. We acknowledge that some of these attributes are not strongly correlated with first names and thus it is less feasible to use names as a proxy to represent these demographic traits. While our study focuses on a small subset of demographic identities inferred from first names, our findings on first name biases in email generation underscore the need to use LLMs fairly and responsibly. Incomplete representation of occupations In this paper, we have studied 40 different occupational roles on a coarse-grained level. However, the 2018 Standard Occupational Classification (SOC) system4 contains 867 occupations. There remains a large number of occupational roles not being tested. It is inconclusive, although likely, that LLMs would also have differential treatment towards different first names for other occupations. Additional extensive experiments would need to be conducted in order to assess the validity of this hypothesis. A wider range of LLMs could be tested In our experiments, we have tested 5 state-of-the-art models of considerably very large model sizes (all ≥ 7b). However, the discrimination and biases in smaller language models are not studied in our work. Since these smaller models typically have weaker instruction-following abilities, our hypothesis is that they may exhibit different behavior from the larger models, especially when the input prompt states the candidate is not qualified. We leave the study of smaller models as future work. Not simulating the entire hiring process Our prompts are designed to study LLMs’ discriminatory behavior in labor market with little to no ad4https://www.bls.gov/soc/ ditional information about the applicant. This simulation is different from a realistic hiring process in real life where substantially more information about a candidate would be made available to the hiring team. Despite a much simplified processing of getting to know a job applicant, the short but focused input prompt could directly reveal the representational biases in LLMs without the distraction of additional applicant details. Finally, we note that our experiments do include specifying an applicant’s degree of qualification for the position, which can be seen as a summary judgment in place of other application details such as a resume. Ethics Statement As the widespread adoption of LLMs continues, prioritizing responsible usage of these tools becomes paramount, particularly in contexts where they are employed to allocate social resources and economic opportunities. Our study sheds light on the potential risks associated with integrating LLMs into the hiring process. Notably, these models have learned to correlate distinct first names with varying rates of job application acceptance. This underscores the necessity of vigilant consideration when deploying LLMs in decision-making processes with significant societal implications. Though we believe studying the discriminatory behavior of LLMs is an important social and scientific endeavor, our study is not without potential risk. Studies of race, ethnicity, and gender have the potential to themselves essentialize or misconstrue social categories in ways that flatten or misrepresent individual members of those groups. Additionally, while it is our belief that the harms of LLMs for hiring practices outweigh the potential benefits in part due to scalability concerns, employers and policy-makers must also weigh the harms of the alternative; in this case, human decision-making is also known to be biased. While warning of the potential harms of AI usage in decision-making is beneficial if it prevents harmful usage, there is a potential risk that the resulting stigmatization of LLMs could prevent its future adoption in settings where it could be used to advance social equality. Acknowledgements We thank the anonymous reviewers for their constructive feedback. We are grateful to Kaiyan Shi and Tianrui Guan, who helped us with data collection in the early stages of this project. 391 References Gati V Aher, Rosa I. Arriaga, and Adam Tauman Kalai. 2023. Using large language models to simulate multiple humans and replicate human subject studies. In Proceedings of the 40th International Conference on Machine Learning, volume 202 of Proceedings of Machine Learning Research, pages 337–371. PMLR. Akhlaq Ahmad. 2020. When the name matters: An experimental investigation of ethnic discrimination in the finnish labor market. Sociological Inquiry, 90(3):468–496. Michelle Alexander. 2011. The new jim crow. Ohio St. J. Crim. L., 9:7. Haozhe An, Zongxia Li, Jieyu Zhao, and Rachel Rudinger. 2023. SODAPOP: Open-ended discovery of social biases in social commonsense reasoning models. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 1573–1596, Dubrovnik, Croatia. Association for Computational Linguistics. Haozhe An, Xiaojiang Liu, and Donald Zhang. 2022. Learning bias-reduced word embeddings using dictionary definitions. In Findings of the Association for Computational Linguistics: ACL 2022, pages 1139– 1152, Dublin, Ireland. Association for Computational Linguistics. Haozhe An and Rachel Rudinger. 2023. Nichelle and nancy: The influence of demographic attributes and tokenization length on first name biases. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 388–401, Toronto, Canada. Association for Computational Linguistics. Lisa P Argyle, Ethan C Busby, Nancy Fulda, Joshua R Gubler, Christopher Rytting, and David Wingate. 2023. Out of one, many: Using language models to simulate human samples. Political Analysis, 31(3):337–351. Lena Armstrong, Abbey Liu, Stephen MacNeil, and Danaë Metaxa. 2024. The silicone ceiling: Auditing gpt’s race and gender biases in hiring. arXiv preprint arXiv:2405.04412. Solon Barocas, Kate Crawford, Aaron Shapiro, and Hanna Wallach. 2017. The problem with bias: Allocative versus representational harms in machine learning. In 9th Annual conference of the special interest group for computing, information and society. Marianne Bertrand and Sendhil Mullainathan. 2004. Are emily and greg more employable than lakisha and jamal? a field experiment on labor market discrimination. American Economic Review, 94(4):991–1013. Su Lin Blodgett, Solon Barocas, Hal Daumé III, and Hanna Wallach. 2020. Language (technology) is power: A critical survey of “bias” in NLP. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5454– 5476, Online. Association for Computational Linguistics. Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183–186. Barbara A. Carlin, Betsy D. Gelb, Jamie K. Belinne, and Latha Ramchand. 2018. Bridging the gender gap in confidence. Business Horizons, 61(5):765–774. Myra Cheng, Esin Durmus, and Dan Jurafsky. 2023. Marked personas: Using natural language prompts to measure stereotypes in language models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1504–1532, Toronto, Canada. Association for Computational Linguistics. Barry R. Chiswick. 1987. The labor market status of hispanic men. Journal of American Ethnic History, 7(1):30–58. John L Cotton, Bonnie S O’neill, and Andrea Griffin. 2008. The “name game”: Affective and hiring reactions to first names. Journal of Managerial Psychology, 23(1):18–39. Kate Crawford. 2017. The trouble with bias. NeurIPS. Harry Cross. 1990. Employer hiring practices: differential treatment of hispanic and anglo job seekers. Danica Dillion, Niket Tandon, Yuling Gu, and Kurt Gray. 2023. Can ai language models replace human participants? Trends in Cognitive Sciences, 27(7):597–600. Brian Duncan, V Joseph Hotz, and Stephen J Trejo. 2006. Hispanics in the us labor market. Hispanics and the Future of America, 10:11539. Ronald Aylmer Fisher. 1928. Statistical methods for research workers. 5. Oliver and Boyd. Johann D Gaebler, Sharad Goel, Aziz Huq, and Prasanna Tambe. 2024. Auditing the use of language models to guide hiring decisions. arXiv preprint arXiv:2404.03086. Sarah Wittig Galgano. 2009. Barriers to reintegration: An audit study of the impact of race and offender status on employment opportunities for women. Social Thought & Research, 30:21–37. Joshua R. Goldstein and Guy Stecklov. 2016. From patrick to john f.: Ethnic names and occupational success in the last era of mass migration. American Sociological Review, 81(1):85–106. PMID: 27594705. Anthony G Greenwald, Debbie E McGhee, and Jordan LK Schwartz. 1998. Measuring individual differences in implicit cognition: the implicit association test. Journal of personality and social psychology, 74(6):1464—1480. 392 Amit Haim, Alejandro Salinas, and Julian Nyarko. 2024. What’s in a name? auditing large language models for race and gender bias. arXiv preprint arXiv:2402.14875. Tamanna Hossain, Sunipa Dev, and Sameer Singh. 2023. MISGENDERED: Limits of large language models in understanding pronouns. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5352–5367, Toronto, Canada. Association for Computational Linguistics. Sullam Jeoung, Jana Diesner, and Halil Kilicoglu. 2023. Examining the causal impact of first names on language models: The case of social commonsense reasoning. In Proceedings of the 3rd Workshop on Trustworthy Natural Language Processing (TrustNLP 2023), pages 61–72, Toronto, Canada. Association for Computational Linguistics. Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al. 2023. Mistral 7b. arXiv preprint arXiv:2310.06825. Genevieve M. Kenney and Douglas A. Wissoker. 1994. An analysis of the correlates of discrimination facing young hispanic job-seekers. The American Economic Review, 84(3):674–683. Patrick Kline, Evan K Rose, and Christopher R Walters. 2022. Systemic Discrimination Among Large U.S. Employers. The Quarterly Journal of Economics, 137(4):1963–2036. Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, and Pontus Stenetorp. 2022. Fantastically ordered prompts and where to find them: Overcoming fewshot prompt order sensitivity. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8086–8098, Dublin, Ireland. Association for Computational Linguistics. Rowan Hall Maudslay, Hila Gonen, Ryan Cotterell, and Simone Teufel. 2019. It’s all in the name: Mitigating gender bias with name-based counterfactual data substitution. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5267–5275, Hong Kong, China. Association for Computational Linguistics. Steve McDonald, Nan Lin, and Dan Ao. 2014. Networks of Opportunity: Gender, Race, and Job Leads. Social Problems, 56(3):385–402. Moin Nadeem, Anna Bethke, and Siva Reddy. 2021. StereoSet: Measuring stereotypical bias in pretrained language models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5356–5371, Online. Association for Computational Linguistics. Nikita Nangia, Clara Vania, Rasika Bhalerao, and Samuel R. Bowman. 2020. CrowS-pairs: A challenge dataset for measuring social biases in masked language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1953–1967, Online. Association for Computational Linguistics. Brian A Nosek, Mahzarin R Banaji, and Anthony G Greenwald. 2002. Harvesting implicit group attitudes and beliefs from a demonstration web site. Group Dynamics: Theory, Research, and Practice, 6(1):101. John M. Nunley, Adam Pugh, Nicholas Romero, and R. Alan Seals. 2015. Racial discrimination in the labor market for recent college graduates: Evidence from a field experiment. The B.E. Journal of Economic Analysis & Policy, 15(3):1093–1125. OpenAI. 2023. Gpt-4 technical report. Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F Christiano, Jan Leike, and Ryan Lowe. 2022. Training language models to follow instructions with human feedback. In Advances in Neural Information Processing Systems, volume 35, pages 27730–27744. Curran Associates, Inc. Alicia Parrish, Angelica Chen, Nikita Nangia, Vishakh Padmakumar, Jason Phang, Jana Thompson, Phu Mon Htut, and Samuel Bowman. 2022. BBQ: A hand-built bias benchmark for question answering. In Findings of the Association for Computational Linguistics: ACL 2022, pages 2086–2105, Dublin, Ireland. Association for Computational Linguistics. David S. Pedulla and Devah Pager. 2019. Race and networks in the job search process. American Sociological Review, 84(6):983–1012. Juan Ramos et al. 2003. Using tf-idf to determine word relevance in document queries. In Proceedings of the first instructional conference on machine learning, volume 242, pages 29–48. Citeseer. Cordelia W. Reimers. 1983. Labor market discrimination against hispanic and black men. The Review of Economics and Statistics, 65(4):570–579. Joseph A Ritter and Lowell J Taylor. 2011. Racial Disparity in Unemployment. The Review of Economics and Statistics, 93(1):30–42. Evan TR Rosenman, Santiago Olivella, and Kosuke Imai. 2023. Race and ethnicity data for first, middle, and surnames. Scientific Data, 10(1):299. 393 Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. 2018. Gender bias in coreference resolution. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 8–14, New Orleans, Louisiana. Association for Computational Linguistics. Abhilasha Sancheti and Rachel Rudinger. 2022. What do large language models learn about scripts? In Proceedings of the 11th Joint Conference on Lexical and Computational Semantics, pages 1–11, Seattle, Washington. Association for Computational Linguistics. Sandra Sandoval, Jieyu Zhao, Marine Carpuat, and Hal Daumé III. 2023. A rose by any other name would not smell as sweet: Social bias in names mistranslation. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 3933–3945, Singapore. Association for Computational Linguistics. Taylor Shin, Yasaman Razeghi, Robert L. Logan IV, Eric Wallace, and Sameer Singh. 2020. AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4222–4235, Online. Association for Computational Linguistics. Vered Shwartz, Rachel Rudinger, and Oyvind Tafjord. 2020. “you are grounded!”: Latent name artifacts in pre-trained language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6850–6861, Online. Association for Computational Linguistics. Adina D. Sterling, Marissa E. Thompson, Shiya Wang, Abisola Kusimo, Shannon Gilmartin, and Sheri Sheppard. 2020. The confidence gap predicts the gender pay gap among stem graduates. Proceedings of the National Academy of Sciences, 117(48):30303– 30308. Alex Tamkin, Amanda Askell, Liane Lovitt, Esin Durmus, Nicholas Joseph, Shauna Kravec, Karina Nguyen, Jared Kaplan, and Deep Ganguli. 2023. Evaluating and mitigating discrimination in language model decisions. arXiv preprint arXiv:2312.03689. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023a. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023b. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288. Akshaj Kumar Veldanda, Fabian Grob, Shailja Thakur, Hammond Pearce, Benjamin Tan, Ramesh Karri, and Siddharth Garg. 2023. Are emily and greg still more employable than lakisha and jamal? investigating algorithmic hiring bias in the era of chatgpt. arXiv preprint arXiv:2310.05135. Yixin Wan, George Pu, Jiao Sun, Aparna Garimella, Kai-Wei Chang, and Nanyun Peng. 2023. “kelly is a warm person, joseph is a role model”: Gender biases in LLM-generated reference letters. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 3730–3748, Singapore. Association for Computational Linguistics. Jun Wang, Benjamin Rubinstein, and Trevor Cohn. 2022. Measuring and mitigating name biases in neural machine translation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2576–2590, Dublin, Ireland. Association for Computational Linguistics. Robert Wolfe and Aylin Caliskan. 2021. Low frequency names exhibit bias and overfitting in contextualizing language models. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 518–532, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Rebecca K. Woods. 2000. An economic analysis of antihispanic discrimination in the american labor market: 1970s-1990s. International Social Science Review, 75(1/2):38–48. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2018. Gender bias in coreference resolution: Evaluation and debiasing methods. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 15–20, New Orleans, Louisiana. Association for Computational Linguistics. A First Names A.1 Selection Criteria Of our name data sources, Rosenman et al. (2023) provide the racial/ethnic distribution among five categories: “White”, “Black”, “Hispanic”, “Asian”, and “Others”. This categorization of race/ethnicity primarily follows the U.S. Census Bureau’s definition of race and ethnicity. For robust results, we only include names that have more than 1,000 occurrences in the data source provided by Rosenman et al. (2023). We assign the majority race (> 50%) as the race associated with a name. No names in the dataset meet the inclusion criteria for the category “Others” and there are fewer than 15 names 394 for “Asian”. As a result, our study only involves the other three racial/ethnic categories. With reference to the SSA dataset,5 we use the majority gender (> 50%) to approximate the gender associated with a name. We only include a name in our study if it appears in both of the data sources. Within each racial and gender subgroup (e.g., Black female), we then rank the names by their percentage of the majority race and select the top 50 ones for our experiments. A.2 Names Used We list all 300 first names used in our experiments. White female names Abbey, Abby, Ansley, Bailey, Baylee, Beth, Caitlin, Carley, Carly, Colleen, Dixie, Ginger, Haley, Hayley, Heather, Holli, Holly, Jane, Jayne, Jenna, Jill, Jodi, Kaleigh, Kaley, Kari, Katharine, Kathleen, Kathryn, Kayleigh, Lauri, Laurie, Leigh, Lindsay, Lori, Luann, Lynne, Mandi, Marybeth, Mckenna, Meghan, Meredith, Misti, Molly, Patti, Sue, Susan, Susannah, Susanne, Suzanne, Svetlana White male names Bart, Beau, Braden, Bradley, Bret, Brett, Brody, Buddy, Cade, Carson, Cody, Cole, Colton, Conner, Connor, Conor, Cooper, Dalton, Dawson, Doyle, Dustin, Dusty, Gage, Graham, Grayson, Gregg, Griffin, Hayden, Heath, Holden, Hoyt, Hunter, Jack, Jody, Jon, Lane, Logan, Parker, Reed, Reid, Rhett, Rocco, Rusty, Salvatore, Scot, Scott, Stuart, Tanner, Tucker, Wyatt Black female names Amari, Aretha, Ashanti, Ayana, Ayanna, Chiquita, Demetria, Eboni, Ebony, Essence, Iesha, Imani, Jalisa, Khadijah, Kierra, Lakeisha, Lakesha, Lakeshia, Lakisha, Lashanda, Lashonda, Latanya, Latasha, Latonia, Latonya, Latoya, Latrice, Nakia, Precious, Queen, Sade, Shalonda, Shameka, Shamika, Shaneka, Shanice, Shanika, Shaniqua, Shante, Sharonda, Shawanda, Tameka, Tamia, Tamika, Tanesha, Tanika, Tawanda, Tierra, Tyesha, Valencia Black male names Akeem, Alphonso, Antwan, Cedric, Cedrick, Cornell, Cortez, Darius, Darrius, Davon, Deandre, Deangelo, Demarcus, Demario, Demetrice, Demetrius, Deonte, Deshawn, Devante, Devonte, Donte, Frantz, Jabari, Jalen, Jamaal, Jamar, Jamel, Jaquan, Jarvis, Javon, Jaylon, Jermaine, Kenyatta, Keon, Lamont, Lashawn, Malik, Marquis, Marquise, Raheem, Rashad, Roosevelt, 5https://www.ssa.gov/oact/babynames/ Shaquille, Stephon, Sylvester, Tevin, Trevon, Tyree, Tyrell, Tyrone Hispanic female names Alba, Alejandra, Alondra, Amparo, Aura, Beatriz, Belkis, Blanca, Caridad, Dayana, Dulce, Elba, Esmeralda, Flor, Graciela, Guadalupe, Haydee, Iliana, Ivelisse, Ivette, Ivonne, Juana, Julissa, Lissette, Luz, Magaly, Maribel, Maricela, Mariela, Marisol, Maritza, Mayra, Migdalia, Milagros, Mireya, Mirta, Mirtha, Nereida, Nidia, Noemi, Odalys, Paola, Rocio, Viviana, Xiomara, Yadira, Yanet, Yesenia, Zoila, Zoraida Hispanic male names Agustin, Alejandro, Alvaro, Andres, Anibal, Arnaldo, Camilo, Cesar, Diego, Edgardo, Eduardo, Efrain, Esteban, Francisco, Gerardo, German, Gilberto, Gonzalo, Guillermo, Gustavo, Hector, Heriberto, Hernan, Humberto, Jairo, Javier, Jesus, Jorge, Jose, Juan, Julio, Lazaro, Leonel, Luis, Mauricio, Miguel, Moises, Norberto, Octavio, Osvaldo, Pablo, Pedro, Rafael, Ramiro, Raul, Reinaldo, Rigoberto, Santiago, Santos, Wilfredo B Prompts We write one template to begin testing the behavior of LLMs in making hiring decisions in an underspecified context. To mitigate the model’s sensitivity to different template phrasing (Shin et al., 2020; Sancheti and Rudinger, 2022; Lu et al., 2022), we use ChatGPT 3.56 to paraphrase our first template into 4 variations, resulting in 5 base templates in total. The instruction we use for the paraphrasing task is Help me find four ways to paraphrase the following sentence. Keep the placeholder terms like "[NAME]", "pronoun_poss", and "pronoun_subj". Write an email informing [NAME] about pronoun_poss application decision for the role pronoun_subj has applied. Note that the root template used for paraphrasing is slightly different from our first root template in Fig. 2 as this one contains pronouns. We later choose to experiment with a modified template without any pronouns so that we can control any potential influence on model generation exerted by different pronouns like “she” and “he.” This would 6https://chat.openai.com/ 395 allow us to focus on studying the model behavior towards different first names. For each base template, we add additional information about the job role to probe model behavior under the influence of 40 occupations in addition to an under-specified setting. In total, we have 41 occupational roles for each template, including the one where the occupation is not specified. Furthermore, we attempt to give an LLM information about candidate qualification and test if it makes more informed decisions following this additional hint. We prepend a sentence directly describing one of the three levels of qualifications (“highly qualified,” “somewhat qualified,” and “not qualified”) to the templates for each role. As a result, we have a total number of 820 templates, as shown in Fig. 2. C Additional Experiment Setup Details C.1 Models We specify the model hyperparameters used in our paper. For fair and controlled comparisons, we keep the hyperparameters consistent across models when possible throughout our experiments. We note that the use of every model follows its original intended use because all of the selected models are specifically fine-tuned to follow human instructions like our designed prompts. Because Llama2-70b and GPT-3.5-Turbo require heavier computational cost that exceeds our budget, we run the experiments on a smaller scale by reducing the number of occupations to 7 for both, having only one random seed for Llama2-70b, and having only two templates for GPT-3.5-Turbo. In the end, we obtain 756, 000 emails for Mistral-7b and Llama2-7b, 70b, 48, 000 emails for Llama2-70b, and 19, 200 emails for GPT-3.5. Llama2 We mainly follow the hyperparameters recommended in the original Llama2 repository,7 where temperature = 0.6, top_p = 0.9, max_batch_size = 4. We set both max_seq_len and max_gen_len to be 256. The same set of hyperparameters is used for all thre model sizes (7b, 13b, and 70b). Note that even if temperature is nonzero, our experiments are reproducibility because we have set the random seed (1,42,50) to obtain the experimental results. GPT-3.5-Turbo We keep a consistent temperature with Llama2, temperature = 0.6, max_tokens 7https://github.com/facebookresearch/llama Model Validity Precision Recall F1 Accept Reject Accept Reject Llama2-7b 0.86 0.89 0.97 0.97 0.91 0.94 Llama2-13b 0.94 0.94 1.00 1.00 0.98 0.98 Llama2-70b 0.95 0.98 1.00 1.00 0.99 0.99 Mistral-7b 0.83 1.00 1.00 1.00 1.00 1.00 GPT-3.5 1.00 1.00 1.00 1.00 1.00 1.00 Table 5: Validity rate of email generation and the performance of our classifier on predicting the application outcomes indicated in the valid emails. = 256, frequency_penalty = 0.9 and presence_penalty = 1.9. We leave other hyperparameters to be default values. Mistral-Instruct-v0.1 The model size of MistralInstruct-v0.1 is 7b. We use temperature = 0.6, max_new_tokens = 256, do_sample = True, top_p = 5 as hyperparameters for generation. Note that even if temperature is non-zero, our experiments are reproducibility because we have set the random seed (1,42,50) to obtain the experimental results. Terms of use for each model We carefully follow the terms of use provided by the model authors or company. • Llama2: https://ai.meta.com/llama/ license/ • GPT-3.5-Turbo: https://openai.com/ policies/terms-of-use • Mistral-Instruct-v0.1: https://mistral. ai/terms-of-service/ Computing infrastructure For offline models (Llama2 and Mistral-Instruct-v0.1), we conduct our experiments using a mixture of NVIDIA RTX A5000 and NVIDIA RTX A6000 graphic cards. For each experiment involving Llama2, we use one A6000, two A6000, and eight A5000 GPUs respectively for each model size 7b, 13b, and 70b, and we use one A6000 GPU for Mistral-Instructv0.1. C.2 Email Classification To label the application outcome stated in the generated emails, we adopt a combination of manual and automatic annotation. We manually label 1, 200 application outcome emails in the early iterations of our experiments, evenly distributed across genders and races/ethnicities. We then train a support vector machine (SVM) model with TF-IDF features (Ramos et al., 2003) using 840 samples from 396 White Black Hispanic Std Female Male Female Male Female Male Llama2-7b 0.80 0.87 0.80 0.93 0.93 0.80 0.06 Llama2-13b 0.90 0.93 0.97 0.97 0.93 0.97 0.03 Llama2-70b 0.93 0.97 0.87 1.00 1.00 0.93 0.05 Mistral-7b 0.77 0.93 0.86 0.83 0.80 0.80 0.06 GPT-3.5 1.00 1.00 1.00 1.00 1.00 1.00 0.00 Table 6: Validity rates for each intersectional group within a model have relatively small standard deviations (Std). We do not find statistically significant differences between any pair of groups within the same model setting, as all p-values are greater than 0.05, where the null hypothesis is that the two groups share the same validity rate under a binomial distribution. the manually labeled data. We use 180 for validation, and 180 for testing. This classifier achieves 0.97 accuracy on the test set containing 180 samples, also evenly distributed across demographic groups. Given the good performance of the classifier, we use it to label other generated emails. Because the classifier is not trained on the exact phrasing of all our base templates, we further manually annotate the application decision in the same random subset used for validity analysis and check the human labels with the model predictions. The classifier performs extremely well even though the input template contains variations, achieving an F1 score as high as 0.99 for Llama2-70b, shown in Table 5. C.3 Permutation Test To measure if a group is treated significantly more or less favorably in comparison with the overall acceptance rate, we conduct an adapted version of the permutation test (Caliskan et al., 2017; An et al., 2023). Considering one demographic group A out of the whole population in our study, our null hypothesis is that A has the same acceptance rate as the global population under the same setting. We first compute d, which is the difference between the average acceptance rate of group A and that of the global population. We then permute the identity labels of the whole population, obtaining A′, which has the same cardinality as A. We find d′, the new difference between the average acceptance rate of A′ and that of the global population. The p-value is estimated by repeating the permutation step for a large number of times (5, 000 in our experiments) and calculating P(d′ > d). We note that in Table 1, we conduct separate permutation tests for each individual job first, and then combine the p-values using Fisher’s method (Fisher, 1928) to obtain the aggregate statistical significance across multiple occupational roles. 397
2024_short
37
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 398–408 August 11-16, 2024 ©2024 Association for Computational Linguistics Explainability and Hate Speech: Structured Explanations Make Social Media Moderators Faster Agostina Calabrese1* Leonardo Neves2 Neil Shah2 Maarten W. Bos2 Björn Ross1 Mirella Lapata1 Francesco Barbieri2 School of Informatics, University of Edinburgh1 Snap Inc.2 [email protected] Abstract Content moderators play a key role in keeping the conversation on social media healthy. While the high volume of content they need to judge represents a bottleneck to the moderation pipeline, no studies have explored how models could support them to make faster decisions. There is, by now, a vast body of research into detecting hate speech, sometimes explicitly motivated by a desire to help improve content moderation, but published research using real content moderators is scarce. In this work we investigate the effect of explanations on the speed of real-world moderators. Our experiments show that while generic explanations do not affect their speed and are often ignored, structured explanations lower moderators’ decision making time by 7.4%. 1 Introduction Social media provide a platform for free expression but users may abuse it and post content in violation of terms, like misinformation or hate speech. To fight these behaviours and enforce integrity on the platform, social media companies define policies that describe what content is allowed. Posts are then monitored through automatic systems that look for policy violations. While content that has been flagged by the system with high confidence is immediately removed, all other violations, including the ones reported by users, are moderated by trained human reviewers. These moderators are also responsible for reviewing user appeals and deciding when content has been flagged incorrectly. Therefore, a big challenge with enforcing integrity is the high volume of content that needs to pass the moderators’ judgment (Halevy et al., 2022). Previous work has claimed that moderators can be supported with explanations of why posts violate the policy (Calabrese et al., 2022; Nguyen et al., *This work was done while the author was an intern at Snap Inc. 2023). But while there have been studies showing the importance of explanations for users (Haimson et al., 2021; Brunk et al., 2019), the benefits of explanations for moderators have not been studied. Can explanations help moderators judge a post faster? And how much room for improvement is there? While social media share safety reports with statistics about the number and types of detected violations1, data relative to moderator performance is not publicly available. Explanations might have a larger impact on the performance of crowdworkers who have only recently been trained on a policy, but smaller effects would be expected on the speed of moderators who know the policy by heart. In this paper we conduct a study with professional moderators from an online social platform to answer the following research questions: 1. Do explanations make moderators faster? 2. Does the type of explanations matter? 3. Do moderators want explanations? While online social platforms deal with several integrity issues, academic research has focused on a few specific ones. Hate speech is one of the most studied issues, and (English) hate speech is also the focus of our study. Our experiments show that despite their already impressive performance, structured explanations (that highlight which parts of a post are harmful and why) can make experienced moderators faster by 1.34s/post without any loss in accuracy. Considering that they spend an average of 18.14s/post, that is a time reduction of 7.4%, which is a meaningful improvement considering the scale at which online social platforms operate. Generic (pre-defined) explanations on the other hand have no impact. An online survey further revealed that moderators strongly prefer structured explanations (84%). 1e.g., https://about.fb.com/news/2023/05/metas-q 1-2023-security-reports 398 In the case of generic explanations, most moderators admit to only looking at them when in doubt (80%) or ignoring them completely (12%). 2 Related Work While some researchers have looked at hate speech2 as a subjective matter (Davani et al., 2022; Basile et al., 2021), this paradigm is not suitable for the use case of content moderation, where a single decision has to be made for each post (Röttger et al., 2022). In this work we follow a prescriptive paradigm, and assume the existence of a ground truth that is determined by a policy. Explainability is a key open problem for Natural Language Processing research on hate speech (Mishra et al., 2019; Mathew et al., 2021). Well documented model failures (Sap et al., 2019; Calabrese et al., 2021), together with EU regulations on algorithmic transparency (Brunk et al., 2019), call for the design of more transparent algorithms. However, the benefits of explainability on the moderators have been understudied. Wang et al. (2023) analysed the effect of explanations on annotators, observing that wrong explanations might dangerously convince the annotators to change their mind about whether a post contains hate speech. However, the experiment was run with crowdworkers and Abercrombie et al. (2023) has found that is not uncommon for non-professional moderators to change their opinion about the toxicity of a post over time, even when no additional information is provided. To the best of our knowledge, we are the first to explore how explanations can affect moderation speed of professional moderators although the need to support them with their unmanageable workload is well-documented3. 3 Explainable Abuse Detection We hypothesise that different types of explanations might lead to different results. Mishra et al. (2019) argue that explanations should at least indicate 1) the intent of the user, 2) the words that constitute abuse, and 3) who is the target. From a computational perspective, the cheapest way to achieve this 2“Abusive speech targeting specific group characteristics, such as ethnic origin, religion, gender or sexual orientation” (Warner and Hirschberg, 2012). 3e.g., https://www.forbes.com/sites/johnkoets ier/2020/06/09/300000-facebook-content-moderat ion-mistakes-daily-report-says/?sh=524ab91354d0 and https://www.wired.co.uk/article/facebook-con tent-moderators-ireland goal is to define the task as multiple multi-class classification problems (Kirk et al., 2023; Saeidi et al., 2021; Vidgen et al., 2021b; Ousidhoum et al., 2019), where models choose between some predefined target groups (e.g., women, lgbt+) and types of abuse (e.g., threats, derogation). While the explanations provided by these approaches are limited to properties 1 and 3, some approaches have expanded the paradigm to also include rationales (i.e., spans of text from the post that suggest why a post is hateful) and satisfy property 2 (Vidgen et al., 2021a; Mathew et al., 2021). When dealing with implicit hate, where evidence cannot always be found in the exact words of a post, rationales have been replaced with free-text implied statements (ElSherief et al., 2021; Sap et al., 2020). Calabrese et al. (2022) introduce a more structured approach to explainability, where target, intent, and type of abuse are all indicated by means of tagged spans from the post. The popularity of prompt-based approaches has led to the generation of free-text explanations (Wang et al., 2023), with no guarantee that any of the above properties are satisfied. 4 Experimental Design In this study we analyse the effect explanations have on the speed of professional moderators from an online social platform with millions of users. We use the term “generic” to describe explanations that can be obtained from a multi-class classification model. For instance, for the post “immigrants are parasites”4, a generic explanation could be “Content targeting a person or group of people on the basis of their protected characteristic(s) with dehumanising speech in the form of comparisons, generalisations or unqualified behavioural statements to or about insects”5. This pre-defined explanation illustrates why the post violates the policy without reference to specific post content. “Structured” explanations are instead specific to the post, and indicate why a post violates the policy by highlighting relevant spans and specifying how they relate to the policy. In the framework introduced in Calabrese et al. (2022), the example above would be associated with a parse tree where “immigrants” is tagged as target and protected characteristic, and “are parasites” as dehumanising comparison. Our hypothesis is that structured explanations will help 4Example taken from Calabrese et al. (2022). 5https://transparency.fb.com/en-gb/policies/c ommunity-standards/hate-speech 399 Figure 1: Annotation interface for setting 2 (post+label), where moderators are shown a post and a description of the rule it is deemed to violate. We intentionally chose a generic policy paragraph for this example as we are not allowed to share the content of the internal policies. Figure 2: Annotation interface for setting 3 (post+tags), where moderators are shown the post with tagged spans as in Calabrese et al. (2022). moderators judge posts faster, while generic explanations will not impact their speed. To verify our hypothesis, we asked 25 moderators to judge posts in three settings where they were shown: 1) only the post (post-only); 2) the post and the policy rule being violated (post+policy, which we refer to as generic explanations, Figure 1) (Kirk et al., 2023); 3) the post with tagged spans as in Calabrese et al. (2022) (post+tags, that is, structured explanations, Figure 2). 4.1 Data For our experiment we used the PLEAD dataset (Calabrese et al., 2022). PLEAD contains 3,535 hateful and not-hateful posts annotated with the user intent (e.g., dehumanisation) and explanations in the form of parse trees. We include more details about PLEAD in Appendix A.1. While there exist models that can generate structured explanations, the best model available in the literature achieved a production F1-score of 52.96% (Calabrese et al., 2022). We argue that using generated explanations in our study would bias the results. If the model gives wrong explanations half the time, then that prevents us from measuring how useful correct explanations are, or what “type” of explanations is most useful. In light of this, we used gold explanations from the PLEAD dataset. Since moderators would normally check posts that are “at risk”, we reproduced their usual task by mostly sampling hateful posts. However, to keep the experiment realistic, we simulated some model errors: in each of the three settings we included posts that do not violate the policy (10%); posts that violate the policy but are shown together with wrong explanations (10%); the remaining posts are hateful (80%) and associated with the explanations from the dataset. While the simulated model accuracy is high, with 80% correct explanations and 90% correct predictions, we feared that trivial errors would still push the moderators towards ignoring the explanations (Dietvorst et al., 2015). To mitigate this issue, we first used heuristics to generate better explanations and then manually reviewed and edited the modified explanations (Appendix A.3). We sampled a batch of 100 posts for a pilot study and three batches of 800 posts for the final experiment, one for each setting. The distribution of the intents in each setting is the same as in PLEAD. 4.2 Method We recruited 25 moderators from Snapchat, an online social platform with millions of users. All moderators had experience reviewing posts with 400 abusive language (as the platform policies are wider and contain many more phenomena) and posts that only contain text (as most moderators at the platform usually deal with multimodal content). We recognise that different levels of moderators experience might lead to different results. None of our moderators were new hires. Furthermore, we used mixed-effects models to analyse our results as a way to take into account different levels of experience and therefore “baseline” speed. We asked moderators to annotate 2,400 posts, 800 for each setting, thus preventing moderators from encountering the same post twice and bias speed measurements. The order in which the settings were shown to moderators was randomised. Some moderators received setting 1 first, others received setting 2 first, etc. Each setting was shown as the first setting roughly the same number of times (respectively 8, 8 and 9). Each block of 800 posts was used for each setting a third of the time. This means that the observed results do not depend on the specific posts that occur in a block, because all blocks were used for all the settings. Posts within the same setting were also randomised, and shown to moderators in batches of 20 examples, one per page, on an internal annotation platform. Moderators did not undertake any training for this task. We asked them to judge whether a post violated the policy, underlining not to judge whether the explanation was correct. We also informed them that annotation times were being recorded. Finally, we provided moderators with one example for each scenario, to illustrate what the annotation interface would look like. We ran a pilot study with one moderator to assess the clarity of the interface and the soundness of our mapping of PLEAD annotations onto internal policy rules (Appendix A.2). Details of the pilot can be found in Appendix A.4. 4.3 Evaluation Metrics The annotation platform allowed us to record the timestamps at which posts were shown to moderators and when they moved to the next post, so for each post we stored the number of seconds it took to express a judgment. We also report moderator accuracy but do not expect an improvement from showing explanations, since these are professional moderators with a high degree of accuracy. Note also the limitation in accuracy measurements as this involves comparing the decisions of professional moderators – who are regarded by online social platforms to be the ground truth – against Slower No change Faster 7 13 5 3 13 9 Number of Moderators Generic Structured Figure 3: Effect of generic and structured explanations on the speed of each moderator (No change: |z| < 2). crowdsourced annotations. 5 Do Explanations Help Moderators? Before analysing speed, we discarded the first 20 instances (0.025%) from each setting. We did this to provide a buffer to the moderators to adapt to a new setting and corresponding interface. Additionally we discarded for each moderator all data points with annotation time more than three standard deviations away from the moderator mean6. When moderators were prompted only with the post, the fastest and slowest moderators achieved a mean annotation speed of, respectively, 6.58s/post and 45.03s/post. To study the effect of generic and structured explanations on annotation time (time) while taking into account individual differences we fitted two linear mixed effects models to the data from post-only and post+policy or post+tags, respectively. We defined the two models as follows: time ∼length + (1|moderator) time ∼setting + length + (1|moderator) where length is the length of the post, setting indicates whether the moderator was provided an explanation or not, and (1|moderator) accounts for individual differences of the moderators. We tested whether the explanations have a significant effect by testing whether the difference between the likelihood of these two models is significant using ANOVA. We found that in setting post+policy explanations did not affect the annotation time: the estimated effect is 0.02 ± 0.32 s, and is not significant (χ2(1) = 0.005, p = .94). When using structured explanations (post+tags) the estimated effect is −1.34 ± 0.32 s and is highly significant (χ2(1) = 17.808, p < .001), showing that moderators are faster with appropriate explanations. 6The number of outliers was comparable across settings. 401 We used a z-test to compare individual performances across the settings (Figure 3). When shown generic explanations 52% of the moderators registered no significant change in speed (w.r.t. setting 1), 28% had a significant loss in performance, and only 20% improved. With structured explanations instead, 36% of the moderators had a significant improvement, 52% of the moderators registered no significant change, and 12% performed worse than without explanations7. We examined whether the different impact that explanations had on moderators was due to the experimental design by testing for correlations between said impact and the order in which the settings were shown to the moderators. With structured explanations, all moderators who registered a loss in performance were shown this setting first and the Pearson correlation between the impact (represented as -1 for loss, 0 for no change, and 1 for improvement) and the round in which setting 3 was shown is .66 (p < .001). However, the same trend was not observed for generic explanations. Moderators who registered a loss in performance were shown post+policy as either first or last, and the correlation score is .41 (p = .04) (Appendix B). We hypothesise that the posts from PLEAD might have been very different in language and topics from the ones moderators usually review, and therefore annotations in the first batch required moderators some extra adjustment time (regardless of the setting). However, the different trends observed for post+policy and post+tags demonstrate that the improvement recorded with structured explanations is not only related to the experimental design. Moreover, post+tags is the setting that was shown as first 1 time more than the other settings (9 instead of 8), and 2 of the corresponding 9 moderators still registered a significant improvement. We did not observe any correlation between the impact of explanations and the specific sample of 800 posts that was selected for each setting (-.06 for setting 2 and .09 for setting 3) (Appendix C). Finally, we looked at accuracy to ensure that faster annotation did not come at the price of more mistakes. In post-only, the highest and lowest recorded accuracy scores were 92.13% and 73.13%. We compared the accuracy of moderators across scenarios with a z-test between the accuracy of all moderators in setting 1 and 2 or 3. For both generic and structured explanations we did not observe a 7One of these three moderators declared in the follow-on survey to have ignored the explanations. significant change (z < 2), not even when measuring accuracy only on not-hateful posts or hateful posts with wrong explanations (Appendix D). 6 Do Moderators Want Explanations? After the experiment was over, we asked the 25 moderators to complete a brief survey. A strong preference was expressed for the setting with structured explanations (84%), while 8% had no preference and 8% preferred generic explanations (Appendix E). When prompted with generic explanations, only 8% of the moderators consistently took them into account, while 80% only looked at the explanations when in doubt and 12% ignored them. The picture changes for structured explanations, where 60% of the moderators used them consistently, 32% looked at them when in doubt, and 8% ignored them. 48% of the moderators declared that the posts shown in this study were different from the ones they usually moderate. They differed in the use of abbreviations, slang and jargon, but also in topics, as the policy covers many phenomena and hate speech is not the most frequent. This supports our hypothesis that moderators required some extra adjustment time in the first setting. 7 Conclusions In this work we investigated the impact of explainable NLP models on the decision speed of social media moderators. Our experiments showed that explanations make moderators faster, but only when presented in the appropriate format. Generic explanations have no impact on decision time and are likely to be ignored, while structured explanations made moderators faster by 1.34 s/instance. A follow-on survey further revealed that moderators prefer structured explanations over generic or none. These results were obtained simulating a model accuracy of 80%, with 10% of the posts misclassified as policy violations, and 10% correctly classified but associated with wrong explanations. Such accuracy is beyond the capabilities of available models, and yet resulted in criticism from the moderators who spotted the inaccuracies. We hope this study can encourage researchers to improve abuse detection models that produce structured explanations. 8 Limitations In this work we focused on hate speech, but there may be other content forbidden by a platform’s 402 terms that this work did not test. We focused on textual content and limited the study to English posts. These choices were merely driven by the lack of explainable multimodal and multilingual datasets for the task of integrity, or hate speech detection. Restricting the scope to English hate speech allowed us to compare the effects of different types of explanations on the same posts. We hope that the results reported in this study can promote the collection of structured explanations for new and existing multimodal or multilingual datasets. 9 Ethical Considerations All the annotations in this study were produced by content moderators regularly employed at an online social platform. Although the posts they were asked to judge came from a public dataset and are different in style from the ones they usually review, dealing with hate speech is part of their role and they have been trained for handling such content. No user data from said platform was used in this study, and all annotations of the public posts have been released in anonymised format8 to protect the identity of the moderators. We did not collect personal information about the moderators to protect their privacy, as 1) we are analyzing hate speech in a prescriptive paradigm that assumes the existence of a single ground truth and therefore it makes it less relevant to consider the demographics of individual annotators; 2) it would require asking platform employees for their protected characteristics. Acknowledgements We would like to thank Maryna Diakonova and the 25 Snapchat moderators who participated in our study. This work was supported in part by Huawei and the UKRI Centre for Doctoral Training in Natural Language Processing, funded by the UKRI (grant EP/S022481/1) and the University of Edinburgh, School of Informatics. Lapata gratefully acknowledges the support of the UK Engineering and Physical Sciences Research Council (grant EP/W002876/1) and the European Research Council (award 681760). 8https://github.com/Ago3/structured_explanati ons_make_moderators_faster References Gavin Abercrombie, Dirk Hovy, and Vinodkumar Prabhakaran. 2023. Temporal and second language influence on intra-annotator agreement and stability in hate speech labelling. In Proceedings of the 17th Linguistic Annotation Workshop (LAW-XVII), pages 96–103. Valerio Basile, Michael Fell, Tommaso Fornaciari, Dirk Hovy, Silviu Paun, Barbara Plank, Massimo Poesio, Alexandra Uma, et al. 2021. We need to consider disagreement in evaluation. In Proceedings of the 1st workshop on benchmarking: past, present and future, pages 15–21. Association for Computational Linguistics. Jens Brunk, Jana Mattern, and Dennis M. Riehle. 2019. Effect of transparency and trust on acceptance of automatic online comment moderation systems. In 21st IEEE Conference on Business Informatics, CBI 2019, Moscow, Russia, July 15-17, 2019, Volume 1 Research Papers, pages 429–435. IEEE. Agostina Calabrese, Michele Bevilacqua, Björn Ross, Rocco Tripodi, and Roberto Navigli. 2021. AAA: fair evaluation for abuse detection systems wanted. In WebSci ’21: 13th ACM Web Science Conference 2021, Virtual Event, United Kingdom, June 21-25, 2021, pages 243–252. ACM. Agostina Calabrese, Björn Ross, and Mirella Lapata. 2022. Explainable abuse detection as intent classification and slot filling. Trans. Assoc. Comput. Linguistics, 10:1440–1454. Aida Mostafazadeh Davani, Mark Díaz, and Vinodkumar Prabhakaran. 2022. Dealing with disagreements: Looking beyond the majority vote in subjective annotations. Trans. Assoc. Comput. Linguistics, 10:92– 110. Berkeley J Dietvorst, Joseph P Simmons, and Cade Massey. 2015. Algorithm aversion: people erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 144(1):114. Mai ElSherief, Caleb Ziems, David Muchlinski, Vaishnavi Anupindi, Jordyn Seybolt, Munmun De Choudhury, and Diyi Yang. 2021. Latent hatred: A benchmark for understanding implicit hate speech. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 711 November, 2021, pages 345–363. Association for Computational Linguistics. Oliver L. Haimson, Daniel Delmonaco, Peipei Nie, and Andrea Wegner. 2021. Disproportionate removals and differing content moderation experiences for conservative, transgender, and black social media users: Marginalization and moderation gray areas. Proc. ACM Hum. Comput. Interact., 5(CSCW2):466:1– 466:35. 403 Alon Y. Halevy, Cristian Canton-Ferrer, Hao Ma, Umut Ozertem, Patrick Pantel, Marzieh Saeidi, Fabrizio Silvestri, and Ves Stoyanov. 2022. Preserving integrity in online social networks. Commun. ACM, 65(2):92–98. Hannah Kirk, Wenjie Yin, Bertie Vidgen, and Paul Röttger. 2023. Semeval-2023 task 10: Explainable detection of online sexism. In Proceedings of the The 17th International Workshop on Semantic Evaluation, SemEval@ACL 2023, Toronto, Canada, 13-14 July 2023, pages 2193–2210. Association for Computational Linguistics. Binny Mathew, Punyajoy Saha, Seid Muhie Yimam, Chris Biemann, Pawan Goyal, and Animesh Mukherjee. 2021. Hatexplain: A benchmark dataset for explainable hate speech detection. In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021, pages 14867–14875. AAAI Press. Pushkar Mishra, Helen Yannakoudakis, and Ekaterina Shutova. 2019. Tackling online abuse: A survey of automated abuse detection methods. CoRR, abs/1908.06024. Tin Nguyen, Jiannan Xu, Aayushi Roy, Hal Daumé III, and Marine Carpuat. 2023. Towards conceptualization of "fair explanation": Disparate impacts of antiasian hate speech explanations on content moderators. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6-10, 2023, pages 9696– 9717. Association for Computational Linguistics. Nedjma Ousidhoum, Zizheng Lin, Hongming Zhang, Yangqiu Song, and Dit-Yan Yeung. 2019. Multilingual and multi-aspect hate speech analysis. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 4674–4683. Association for Computational Linguistics. Paul Röttger, Bertie Vidgen, Dirk Hovy, and Janet B. Pierrehumbert. 2022. Two contrasting data annotation paradigms for subjective NLP tasks. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2022, Seattle, WA, United States, July 10-15, 2022, pages 175–190. Association for Computational Linguistics. Marzieh Saeidi, Majid Yazdani, and Andreas Vlachos. 2021. Cross-policy compliance detection via question answering. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, pages 8622–8632. Association for Computational Linguistics. Maarten Sap, Dallas Card, Saadia Gabriel, Yejin Choi, and Noah A. Smith. 2019. The risk of racial bias in hate speech detection. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 1668–1678. Association for Computational Linguistics. Maarten Sap, Saadia Gabriel, Lianhui Qin, Dan Jurafsky, Noah A. Smith, and Yejin Choi. 2020. Social bias frames: Reasoning about social and power implications of language. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 5477–5490. Association for Computational Linguistics. Bertie Vidgen, Dong Nguyen, Helen Z. Margetts, Patrícia G. C. Rossini, and Rebekah Tromble. 2021a. Introducing CAD: the contextual abuse dataset. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, pages 2289–2303. Association for Computational Linguistics. Bertie Vidgen, Tristan Thrush, Zeerak Waseem, and Douwe Kiela. 2021b. Learning from the worst: Dynamically generated datasets to improve online hate detection. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 1667–1682. Association for Computational Linguistics. Han Wang, Ming Shan Hee, Md. Rabiul Awal, Kenny Tsu Wei Choo, and Roy Ka-Wei Lee. 2023. Evaluating GPT-3 generated explanations for hateful content moderation. In Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, IJCAI 2023, 19th-25th August 2023, Macao, SAR, China, pages 6255–6263. ijcai.org. William Warner and Julia Hirschberg. 2012. Detecting hate speech on the world wide web. In Proceedings of the second workshop on language in social media, pages 19–26. A Experimental Design A.1 PLEAD PLEAD is an extension of the LFTW dataset (Vidgen et al., 2021b) where the hateful and not-hateful posts have been enriched with span-level annotations for the task of intent classification and slot filling. Slots represent properties like “target” and 404 “protected characteristic”, while intents are policy rules or guidelines (e.g., “dehumanisation”). PLEAD contains 3,535 posts, 25% of which are not-hateful, while the remaining posts correspond to the intents of dehumanisation (25%), threatening (17%), derogation (28%) and support of hate crimes (5%). A.2 Policy Adaptation PLEAD was annotated using the codebook for hate speech annotations designed by the Alan Turing Institute (Vidgen et al., 2021b), and although everything that is labelled as hate speech in PLEAD also violates social media policies9, the converse does not apply. Specifically, threats and harassment are not allowed by social media even when targeted at groups that are not protected. Therefore we manually reviewed all the not-hateful posts containing threats or derogatory expressions in the parse tree and labelled as policy violations all the posts in which such expressions are targeted at people. For the second setting, where posts are shown together with a description of the violated rule, we adapted the wording in the explanations to match the internal policy the moderators are familiar with. A.3 Error Simulation To simulate model errors we tweaked some of the parse trees from PLEAD. Not-hateful posts are labelled as such when they lack at least one tag in the parse tree to violate the policy (e.g., they do not contain a reference to a protected group) or when a span of text tagged as negative stance is present (e.g. they quote a hateful expression only to disagree with it). For the 10% of the posts that we sampled among the not-hateful ones, we either hallucinated new tagged spans, or deleted a negative stance tag. To prevent the moderators from associating obviously inaccurate explanations with the not-hateful class, we also simulated mistakes in the explanations of 10% of the hateful posts. For these instances we dropped one tagged span from the parse tree, and hallucinated a new one to keep a policy violation. We first used heuristics to generate better explanations by only selecting noun phrases when hallucinating tags like target and verb phrases for, e.g., threat. We then manually reviewed and edited the modified explanations. 9e.g., https://transparency.fb.com/policies/com munity-standards/ or https://values.snap.com/en-G B/privacy/transparency/community-guidelines Examples of wrong explanations are shown in Table 1. A.4 Pilot Study We ran a pilot study with one of the moderators to assess the clarity of the interface and the soundness of our mapping of PLEAD annotations onto internal policy rules. We intentionally decided against asking more of the moderators to take the pilot, to avoid learning effects that could affect the final results. The pilot moderator was shown the same 100 posts in each setting, and achieved an accuracy of 93% in all of them. This suggests that the interface did not confuse the moderator into judging the coherence of the explanations instead of the posts themselves, and that the mapping between the policies was accurate. Since the posts were the same, it is not meaningful to compare the speed across the settings. The moderator started from setting 3 (posts + structured explanations) and took on average 13.11 seconds per instance. The re-annotation of the same posts in the following settings was faster, as expected. B Effect of Settings Order Slower No change Faster 6 3 0 0 3 4 1 7 1 Number of Moderators First Second Third Figure 5: Effect of generic explanations on the speed of individual moderators, grouped depending on which round they were shown this setting (No change: |z| < 2). We tested for correlations between the impact that explanations had on moderators speed and the order in which the settings were shown to the moderators. Figure 6 shows that with structured explanations, all moderators who registered a loss in performance were shown this setting first. However, the same trend was not observed for generic explanations, where moderators who registered a 405 Original Explanation Wrong Explanation My agenda is buying a new car and driving everywhere I can to see the world with my trans boyfriend apart from [ProtectedCharacteristic muslim] countries, they are full of [Target, DehumanisingComparison terrorists]. And we just bought one yesterday so be careful! Agenda in motion! My agenda is buying a new car and driving everywhere I can to see the world with [Target, ProtectedCharacteristic my trans boyfriend] apart from muslim countries, they are full of [DehumanisingComparison terrorists]. And we just bought one yesterday so be careful! Agenda in motion! [Target, ProtectedCharacteristic Women] like me [DehumanisingComparison are scum]? [NegativeStance Shut up] [Target, ProtectedCharacteristic Women] like me [DehumanisingComparison are scum]? Shut up Table 1: Example of hateful post (first row) and not-hateful post (second row) with the original (left column) and generated wrong (right column) explanation. Slower No change Faster 3 4 2 0 9 0 0 0 7 Number of Moderators First Second Third Figure 6: Effect of structured explanations on the speed of individual moderators, grouped depending on which round they were shown this setting (No change: |z| < 2). loss in performance were shown post+policy as either first or last (Figure 5). C Effect of Post Samples We tested for correlations between the impact that explanations had on moderators speed and the specific sample of 800 posts that was selected for each setting. As Figure 7 and 8 show no clear pattern emerged, and the correlation between impact and sample was -.06 for post+label and .09 for post+tags. D Accuracy We compared the accuracy of moderators across scenarios with a z-test between the accuracy of all moderators in setting 1 (post-only) and 2 (post+policy) or 3 (post+label). For both generic and structured explanations we did not observe a significant change (z < 2, Figure 9), not even when measuring accuracy only on not-hateful posts (Figure 10) or hateful posts with wrong explanations (Figure 11). Slower No change Faster 2 4 2 2 5 1 3 4 2 Number of Moderators Sample 1 Sample 2 Sample 3 Figure 7: Effect of generic explanations on the speed of individual moderators, grouped depending on which sample of 800 posts was used for this setting (No change: |z| < 2). E Moderators’ Preference Figure 12 summarises the moderators’ preferences among the three settings. Only 8% of the moderators expressed a preference for generic explanations, and this is coherent to the level of engagement that this type of explanations registered (Figure 13). 84% of the moderators expressed a preference for the structured explanations, with only 8% who declared to have ignored the explanations during the annotation (Figure 14). The criticisms raised about these explanations concerned their accuracy and the need to sometimes still read the whole post to grasp the context in which the highlighted expressions were used. Overall moderators did not think the design of the structured explanations could be further improved to optimise their decision speed. They stressed the importance of using the explanations as a guide while still reading the posts for context, leaving no margin for improvement on this metric. When asked what the most common reasons were for them to be unsure about how to judge a post during their regular job, they indicated slang, unknown words/symbols and the lack of cultural 406 Slower No change Faster 2 4 3 0 5 3 1 4 3 Number of Moderators Sample 1 Sample 2 Sample 3 Figure 8: Effect of structured explanations on the speed of individual moderators, grouped depending on which sample of 800 posts was used for this setting (No change: |z| < 2). 0 5 10 15 20 25 0 20 40 60 80 100 Moderator ID Accuracy (%) None Generic Structured Figure 9: Accuracy score achieved by each moderator with no, generic or structured explanations on the 3 different samples of 800 posts. context. Combining structured explanations with additional free-text explanations could be a way to support moderators when judging complex posts, improving their accuracy (but not speed). 0 5 10 15 20 25 0 20 40 60 80 100 Moderator ID Accuracy (%) None Generic Structured Figure 10: Accuracy score achieved by each moderator with no, generic or structured explanations on the 80 not-hateful instances of the 3 different samples. 0 5 10 15 20 25 0 20 40 60 80 100 Moderator ID Accuracy (%) None Generic Structured Figure 11: Accuracy score achieved by each moderator with no, generic or structured explanations on the 80 hateful instances of the 3 different samples that were shown with wrong explanations. 407 No preference 8% Generic 8% Structured 84% Figure 12: We asked the 25 moderators whether they preferred the setting with generic explanations, structured explanations, or had no preference. The great majority preferred the setting with structured explanations. No 12% Only when in doubt 80% Yes 8% Figure 13: We asked the 25 moderators whether they used the generic explanations or ignored them. 80% of the moderators declared to have used the explanations only when in doubt, and a further 12% ignored the explanations. Only when in doubt 32% Yes 60% No 8% Figure 14: We asked the 25 moderators whether they used the structured explanations or ignored them. 60% of the moderators declared to have used the explanations consistently, and a further 32% relied on them when in doubt. 408
2024_short
38
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 409–424 August 11-16, 2024 ©2024 Association for Computational Linguistics Born Differently Makes a Difference: Counterfactual Study of Bias in Biography Generation from a Data-to-Text Perspective Biaoyan Fang and Ritvik Dinesh and Xiang Dai and Sarvnaz Karimi CSIRO Data61 Sydney, Australia {byron.fang;dai.dai;sarvnaz.karimi}@csiro.au Abstract How do personal attributes affect biography generation? Addressing this question requires an identical pair of biographies where only the personal attributes of interest are different. However, it is rare in the real world. To address this, we propose a counterfactual methodology from a data-to-text perspective, manipulating the personal attributes of interest while keeping the co-occurring attributes unchanged. We first validate that the fine-tuned Flan-T5 model generates the biographies based on the given attributes. This work expands the analysis of gender-centered bias in text generation. Our results confirm the well-known bias in gender and also show the bias in regions, in both individual and its related co-occurring attributes in semantic machining and sentiment. 1 Introduction To what extent do personal attributes affect biography content? Biography consists of the facts of personal attributes (Bamman and Smith, 2014). Current research has shown that biographies from Wikipedia reflect bias from society (Hube, 2017), such as well-known bias in gender (Graells-Garrido et al., 2015; Wagner et al., 2015; Konieczny and Klein, 2018; Tripodi, 2023; Reagle and Rhue, 2011) and culture (Samoilenko and Yasseri, 2014; Beytía, 2020; Baltz, 2022). However, personal attributes are compounded. For instance, religions could be prevalent based on geography (Buttimer, 2006). This results in the challenge of isolating co-occurring attributes and evaluating the effect of personal attributes alone. Answering this question directly would require paired-wise comparisons of biographies that are identical except for the particular personal attribute of interest (Field et al., 2022; Fang et al., 2023). It would allow us to measure the causal effect of the attribute value (treatment) on biography text (outcome) (Holland, 1986; Pearl, 2009). However, having such identical biographies is rare and nearly impossible. Figure 1: An example from the Synthbio dataset (Yuan et al., 2021). We measure semantic matching and sentiment in the true and generated biography (top-right) based on the personal attributes (top-left). Counterfactuals (bottom-right) replace the personal attribute (male, top-left) with a different one (female, bottom-left). Additionally, Wikipedia biographies mostly consist of notable people.1 Large language models (LLMs) have shown the capability of remembering training data (Roberts et al., 2020; Li and Flanigan, 2023) and generating factual biographies based on only names of celebrities (Maudslay et al., 2019; Yuan et al., 2021). In light of these observations, we propose a counterfactual methodology based on a data-to-text framework. We formulate the task as generating biographies by given attributes (Figure 1, top-left → top-right). By doing so, we maintain a controllable setting, enforcing biography generation focusing on the given attributes, thus allowing us to study the effect of individual personal attributes. To mitigate the effect of celebrities, we do our analysis on carefully designed fictional biographies, the SynthBio dataset (Yuan et al., 2021), where fictional names and related personal attributes are controlled by human-LLMs collaboration. Since personal attributes are compounded and diverse, we consider two universal types of personal attributes, i.e., gender and region. We evaluate the generated biographies from two dimensions: se1https://en.wikipedia.org/wiki/Wikipedia:Generally_ notable_people 409 mantic matching (Rebuffel et al., 2021), evaluating how the biography correctly represents the meaning in the attributes; and, sentiment (Gatti et al., 2015), measuring how positive or negative the tone of the text is. We first show a significant difference among generated biographies from different gender and region groups in both semantic matching and sentiment (Section 3). We further perform counterfactual analysis by explicitly manipulating the personal attributes of interest (Section 4). We compare the generated biographies (Figure 1, top-right vs., bottom-right, respectively) from true attributes (male, top-left) vs. manipulated attributes (female, bottom-left). We ask how would the generated biographies change if the given personal attributes were changed? We show that disentangling individual and related co-occurring personal attributes, LLMs finetuned on the Wikibio dataset (Lebret et al., 2016) encode gender and region bias in semantic matching and sentiment, prompting further research in biography generation going beyond gendercentered (Liang et al., 2021), and general quality evaluations, e.g., ROUGE (Lin, 2004). 2 Methodology Data We use the WikiBio dataset (Lebret et al., 2016) for training, consisting of 728,321 biographies from real English Wikipedia pages where the infobox and first paragraph from the articles are provided. On average, each infobox contains 12.5 personal attributes. We explicitly add the gender label (male, female or non-binary/identifiable), inferring from the pronouns in the paragraph (DeArteaga et al., 2019), to the infobox. We remove the biographies where the nationality is not available. To mitigate the cross-contamination of training and evaluation sets (Roberts et al., 2020; Li and Flanigan, 2023), we use the Synthbio dataset (Yuan et al., 2021) for evaluation, which is a synthetic dataset consisting of structured attributes—which we refer as true attributes—describing fictional individuals. It consists of 2,237 infoboxes and each infobox has on average 19 personal attributes and multiple fictional biographies. The comparison of the Wikibio and Synthbio datasets is shown in Table 1. Personal Attributes of Interest We study the impact of two common personal attributes:2 (1) Gen2Attribute distributions are shown in Appendix A Wikibio Synthbio Number of Infoboxs 105,469 2,237 Number of Biographies 105,469 4,270 Avg. #attributes/Infobox 12.1 19.0 Avg. #sentences/Biography 4.3 7.0 Avg. #words/Biography 101.7 110.3 Table 1: Statistics of the Wikibio and Synthbio datasets. For the Wikibio dataset, we consider the training partition and filter out the infoboxs that do not have name and nationality attributes. der. Following the gender attributes in the Synthbio dataset, we consider male, female, and non-binary; and, (2) Region. Inspired by Min et al. (2023), we manually map the 40 nationalities to 6 regions based on Wikipedia continent categories:3 North America (NA), Europe (EU), Middle East (ME), Asia–Pacific (AP), South/Latin America (SA), and Africa (AF).4 Semantic Matching and Sentiment We study the generated biographies from two dimensions: (1) Semantic Matching. We use Data-QuestEval (Rebuffel et al., 2021), a reference-free semantic evaluator curated for data-to-text evaluation developed in a QA format. Specifically, this metric adopted T5 (Kale and Rastogi, 2020) for QG/QA models on both data and text. It measures the answer correctness given the text and generates questions from data, and vice versa. and, (2) Sentiment. Since recent sentiment evaluators are deployed for social media text (Hutto and Gilbert, 2014; Camachocollados et al., 2022) which is not suitable for our task, we use a lexical-based method, obtaining the sentiment score by retrieving SentiWords (Gatti et al., 2015), a dictionary associating positive or negative scores with approximately 155,000 words. We calculate the sentiment score of the biography by averaging the associated sentiment scores for each word. In line with the study of sentiment, we additionally experiment with the regard evaluation (Sheng et al., 2019), a metric measuring if the regard towards a particular identity/demographic group is positive or negative. We observe similar patterns to that of sentiment (Appendix F). 3https://simple.wikipedia.org/wiki/List_of_countries_by _continentsa 4The nationality-region table is provided in Appendix B. 410 Attributes True Masked Counterfactual Raw(/Selected) Gender Male 0.999 0.963 0.991 Female 0.972 0.514 0.978 Non-Binary 0.837 0.057 0.824 Overall 0.936 0.509 0.931 Region Europe 0.837 0.732 0.488/0.770 South/L. America 0.674 0.618 0.234/ Africa 0.805 0.573 0.432/0.856 Middle East 0.527 0.420 0.090/ Asia–Pacific 0.854 0.742 0.586/0.819 North America 0.939 0.833 0.740/ Overall 0.804 0.684 0.459/0.809 Table 2: Results of inferring personal attribute of interest from generated biographies. Biography Generation Our biography data-totext task can be formulated as: Bio(m, co(m)) = fgen(m, co(m)), (1) where biography is generated by the model fgen given the personal attribute of interest (m) and the co-occurring attributes (co(m)). We use FlanT5-base (Chung et al., 2022), an instruction finetuned model, to generate biographies. Following Yuan et al. (2021), we construct the infobox as the data-to-text format described in Kale and Rastogi (2020)5 and finetune Flan-T5-base on WikiBio for 10,000 steps on one P100 GPU, with a batch size of 8, to instruct the model to generate biography based on given attributes. To generate biographies on the Synthbio, we use a beam search of 5. 3 True Attributed Biography Generation First, we validate that the fine-tuned Flan-T5 model generates biographies based on the given personal attributes. To explore the effect of personal attributes, we compare the semantic matching and sentiment on the generated biographies with true attributes (Equation (1)) against those without given the particular attribute (Masked), i.e., Bio(ϕ, co(m)) = fgen(ϕ, co(m)). (2) Model Validation Our fine-tuned Flan-T5 model outperforms the T5 model (Raffel et al., 2020) reported in the Synthbio dataset (Yuan et al., 2021), 5The detailed construction is provided in Appendix C. (a) Semantic Matching (b) Sentiment Figure 2: Semantic matching and sentiment for different attribute groups. Gender: (M=Male, F=Female, NB=Non-Binary); For true attributed biography (purple bars), pairwise significant differences are reported according to Welch’s t-test at p<0.1 (*) and p<0.05 (**). with a RougeL score of 26.4 (vs., 22.6) and a PARENT-F score (Dhingra et al., 2019) of 0.114 (vs., 0.049). We first validate whether the personal attribute of interest can be inferred from the biographies. Specifically, for gender, we use the pronouns as the proxy of gender (De-Arteaga et al., 2019) and compare it against the given gender attribute. For the region, since there is no direct method to predict the nationality from the biography, we consider whether the nationality or related country name is mentioned in the biography as the proxy of the nationality encoded in the biography. We do not train a classifier for nationality as the biography contains rich personal information—the classifier may remember the training instances instead of the nationality signals. We then group the results for nationality based on the region. As shown in Table 2 (Column: True), for gender, we achieve higher than 0.8 accuracy across gender groups, confirming that the given gender is encoded in generated biographies. However, the results in region groups vary. To ensure the generation quality for our analysis and obtain a sufficient amount of data for the analysis, we consider regions with scores higher than 0.75 based on our empirical experience where similar patterns are observed with different thresholds among different region groups: EU, AF, AP, and NA. True Attributed Biography Do LLMs generate different biographies for different gender and nationality groups? Figure 2 shows that generated biographies are significantly different among different gender groups (purple bars, gender) in semantic 411 matching and sentiment.6 For region, we observe significant differences in some region groups, e.g., AF vs., AP in both measurements, indicating the potential bias among region groups. However, we do not observe constant significant differences for any particular region. True vs., Masked Attributed Biography To study the effect of individual personal attributes, we evaluate the semantic matching and sentiment of the generated biographies where given identical attributes but without attributes of interest (Figure 2, green bars). Compared to truly attributed biographies (Figure 2, purple bars), we do not observe significant differences in gender and region. Given that the model mostly cannot infer the masked attributes from the generated text (Table 2, Column: Masked), this indicates that co-occurring attributes also have a strong influence on the biography generation. Masking the personal attributes alone is not effective in understanding the influence of individual personal attributes. 4 Counterfactual Attributed Generation We apply our counterfactual methodology based on our fine-tuned Flan-T5 model. We manipulate only the personal attributes of interest and keep the cooccurring attribute unchanged to study the effect of individual attributes. Specifically, we change the personal attribute (Figure 1, male, top-left) to a different attribute (Figure 1, female, bottom-left) and compare the true (Equation (1)) and counterfactual attributed biographies (Figure 1, top-right vs., bottom-right, respectively), formulating as: Bio(f, co(m)) = fgen(f, co(m)), do(m →f), where do(m →f) denotes the do operator (Pearl, 2009), e.g., in Figure 1, changing the personal attribute male (m) to female (f). We first investigate whether the counterfactual biographies encode the desired attributes via the same validation described in Section 3.7 Table 2 (Counterfactual) shows that generated biographies adjust to the given counterfactual gender attributes. However, we observe that overall 45.9% biographies explicitly mention counterfactual nationalities. To ensure counterfactual biographies quality 6We conducted a preliminary qualitative analysis on the correlation between the length of generated biographies and evaluation scores in Appendix G and we do not find a strong correlation among them. 7Example pairs are in Appendix E. (a) Semantic Matching (b) Sentiment Figure 3: Semantic matching and sentiment for different attribute groups in counterfactual attributed biographies. Different colors and shapes represent different individual personal attributes, and co-occurring attributes, respectively. For brevity, we only show the pairwise significant differences related to groups male and Europe. and obtain a sufficient amount of data for the analysis, we select nationalities that have a score larger than 0.75 for the analysis based on our empirical experience where similar patterns are observed with different thresholds among different region groups (details in Appendix D), resulting in a score of 80.9% (Table 2, Counterfactual-Selected). The semantic matching and sentiment on counterfactual results are shown in Figure 3. We observe similar patterns among the personal attributes of interest. For the sake of brevity, we only show the t-test results about two groups: male and Europe. A full pair-wise comparison is listed in Appendix H. We first ask to what extent the individual personal attributes affect the generated biographies in semantic matching and sentiment. We compare the results where co-occurring attributes are the same but with different individual personal attributes (Figure 3, bars with different colours but the same shapes). For gender, semantic matching is significantly different when given the same co-occurring attributes but different genders, e.g., given male attribute achieve lower semantic matching scores compared to female attribute, M, co(M) (blue, slash) vs., F, co(M) (red, slash). But we do not observe such in sentiment. We find a significant difference in some region groups in both measurements, e.g., AF, co(EU) (red, slash) vs., AP, co(EU) (brown, slash). However, the difference is not consistent among all region attributes. We further investigate the effect of the cooccurring attributes in biography generation. We do so by comparing the biographies given the same individual personal attributes but different cooccurring attributes (Figure 3, bars with different 412 shapes but the same colour). We find a significant difference towards different co-occurring attributes of the gender groups in both semantic matching and sentiment, e.g., M, co(M) (blue, slash) vs., M, co(F) (blue, dot), echoing the finding in Section 3. A significant difference is also observed for some regions in sentiment. However, we do not find such a pattern in semantic matching. 5 Discussion To what extent do personal attributes affect biography content? We answer with a counterfactual methodology, comparing the generated biographies based on manipulating the personal attribute of interest while keeping the co-occurring attributes unchanged. Using LLMs, we disentangle the effect of individual and related co-occurring attributes in biography generation. We utilize a syntheticconstructed biography dataset to mitigate the effect of names and balance the attribute distribution. We find that (1) gender and its co-occurring attributes significantly impact semantic matching and sentiments. Generated biographies from male and male-related co-occurring attributes have a higher sentiment score but are less aligned with the given attributes; (2) there is a significant difference in some region groups and their co-occurring attributes in both measurements. Yet the pattern is not consistent among the region groups; and, (3) manipulating personal attributes of interest only does not resolve the bias in biography generation as the related co-occurring also significantly impacts results. Our study extends bias in text generation (e.g., Sap et al. (2020); Sun et al. (2019); Blodgett et al. (2020); Narayanan Venkit et al. (2023)) and leveraging LLMs for causal inference (e.g., Fang et al. (2023); Feder et al. (2022); Keith et al. (2020); Daoud et al. (2022)) research on a new perspective, i.e., data-to-text, and go beyond heavily gendercentered studies. With the controllable setting formulated in a data-to-text framework, we go further from group disparity on the observant text data and explore the causal effect of the individual and its co-occurring attributes. Our counterfactual methodology can be extended to other personal attributes, e.g., regard (Sheng et al., 2019) (Appendix F) and religion (Buttimer, 2006), and other evaluation dimensions, e.g., readability (Kincaid et al., 1975) and diversity (Alihosseini et al., 2019). 6 Ethical Discussion Our study is based on a synthetic-constructed biography dataset and we analyzed the bias at the group level. Our proposed method aims to uncover the bias in biography generation and can be applied to real biographies such as Wikipedia Biography. However, we do not target nor encourage to target specific individuals or names. We categorize the gender based on the given category from the Synthbio dataset. We acknowledge that the category of gender does not represent all identified gender types. Particularly, non-binary does not reflect the actual gender identification of the biography. Additionally, although our experiment shows evidence of bias in the region, we only consider a selected set of nationalities for each region, i.e., it only partially represents the region. The advanced development of LLMs allows us to study the counterfactual scenarios of the case. However, LLMs have been shown to be biased (Delobelle et al., 2022; Nadeem et al., 2021; Watson et al., 2023). Apart from the inherited bias from the Wikibio dataset, the usage of the counterfactual method could potentially introduce undetected biases and risks, such as reinforcing stereotypes or perpetuating harmful biases. Data generated from such methods should be used with care. For instance, the generated biographies should only be used for bias analysis at the group level. Similarly, the data should be only used for augmenting the training data, instead of replacing it, and only to mitigate the bias. We do not encourage the other usages. For copyright, the Wikibio dataset is under license CC BY-SA 4.0 DEED8 and the Synthbio dataset in under license Apache 2.0.9 The usage of the Flan-T5 model is also under license Apache 2.0. 7 Limitations We use Flan-T5 for our experiments. There is room for exploring more advanced LLMs for biography generations, e.g., Llama models (Touvron et al., 2023), phi models (Li et al., 2023), or models curated for the data-to-text task (Li et al., 2024; An et al., 2022; Chen et al., 2020) For studying whether generated biographies encode provided nationality information, we use a 8https://creativecommons.org/licenses/by-sa/4.0/ 9https://en.wikipedia.org/wiki/Apache_License 413 rule-based method, explicitly matching the nationality keywords with the biographies. It could measure the generation quality to some extent (e.g., in Appendix E). However, employing a better nationality classifier could further enhance our data filtering process and generation quality. Our study requires reference-free evaluators as the counterfactual results do not contain corresponding ground-true text. Although DataQuestEval (Rebuffel et al., 2021) has shown to be effective in evaluating semantic matching in the Wikibio dataset and our analysis data, Synthbio, follow the same structure as Wikibio, this evaluator might still introduce undesired harms in comparing the counterfactual performances. Similarly, we use a rule-based method to measure the sentiment of the biography, i.e., SentiWords (Gatti et al., 2015), which has also shown to be suitable for general use. Subtle or contextual changes in sentiment can not be captured by our sentiment evaluator. Having human annotation would further enhance the analysis of the bias and the alignment study between automatic evaluations and human annotation would be an interesting further direction in the context of fairness in biography generation. Additionally, although we conducted a primarily qualitative analysis on the correlation between the length of generated biographies and evaluation scores (Appendix G), further in-depth analysis is needed to understand how the choice of words affects semantic matching and sentiment. In counterfactual data-to-text biography generation, one key factor is to maintain the coherence of the personal attributes. Our experiment considers two universal personal attributes and flipping these two attributes generally would not conflict with other attributes. However, to expand our framework to other personal attributes, a careful design of attribute manipulation is needed. One possible solution is to follow the attribute construction process described in the Synthbio dataset (Yuan et al., 2021), only making a minimal change in the related co-occurring attributes. We use the SynthBio dataset for our bias analysis. The synthetic-constructed infobox is carefully created via human-AI collaboration, which provides a balanced distribution covering a limited set of attributes. Although it is beneficial as a starting point for analysis bias in data-to-text biography generation, this dataset does not fully capture the complexity and diversity of real-world biographies. The relationship between personal attributes of interest and cooccurring attributes could be expanded. For example, names could strongly influence biography generation in the real world. Deepening the understanding of the correlation of attributes is one of the directions to further this work. References Danial Alihosseini, Ehsan Montahaei, and Mahdieh Soleymani Baghshah. 2019. Jointly measuring diversity and quality in text generation models. In Proceedings of the Workshop on Methods for Optimizing and Evaluating Neural Language Generation, pages 90–98, Minneapolis, Minnesota. Association for Computational Linguistics. Chenxin An, Jiangtao Feng, Kai Lv, Lingpeng Kong, Xipeng Qiu, and Xuanjing Huang. 2022. Cont: Contrastive neural text generation. Advances in Neural Information Processing Systems, 35:2197–2210. Samuel Baltz. 2022. Reducing bias in wikipedia’s coverage of political scientists. PS: Political Science & Politics, 55(2):439–444. David Bamman and Noah A. Smith. 2014. Unsupervised Discovery of Biographical Structure from Text. Transactions of the Association for Computational Linguistics, 2:363–376. Pablo Beytía. 2020. The positioning matters: Estimating geographical bias in the multilingual record of biographies on wikipedia. In Companion Proceedings of the Web Conference 2020, WWW ’20, page 806–810, New York, NY, USA. Association for Computing Machinery. Su Lin Blodgett, Solon Barocas, Hal Daumé III, and Hanna Wallach. 2020. Language (technology) is power: A critical survey of “bias” in NLP. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5454– 5476, Online. Association for Computational Linguistics. Anne Buttimer. 2006. Afterword: Reflections on geography, religion, and belief systems. Annals of the Association of American Geographers, 96(1):197– 202. Jose Camacho-collados, Kiamehr Rezaee, Talayeh Riahi, Asahi Ushio, Daniel Loureiro, Dimosthenis Antypas, Joanne Boisson, Luis Espinosa Anke, Fangyu Liu, and Eugenio Martínez Cámara. 2022. TweetNLP: Cutting-edge natural language processing for social media. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–49, Abu Dhabi, UAE. Association for Computational Linguistics. 414 Wenhu Chen, Yu Su, Xifeng Yan, and William Yang Wang. 2020. KGPT: Knowledge-grounded pretraining for data-to-text generation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8635– 8648, Online. Association for Computational Linguistics. Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. 2022. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416. Adel Daoud, Connor Jerzak, and Richard Johansson. 2022. Conceptualizing treatment leakage in textbased causal inference. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5638–5645, Seattle, United States. Association for Computational Linguistics. Maria De-Arteaga, Alexey Romanov, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, and Adam Tauman Kalai. 2019. Bias in bios: A case study of semantic representation bias in a high-stakes setting. In proceedings of the Conference on Fairness, Accountability, and Transparency, pages 120–128. Pieter Delobelle, Ewoenam Tokpo, Toon Calders, and Bettina Berendt. 2022. Measuring fairness with biased rulers: A comparative study on bias metrics for pre-trained language models. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1693–1706, Seattle, United States. Association for Computational Linguistics. Bhuwan Dhingra, Manaal Faruqui, Ankur Parikh, MingWei Chang, Dipanjan Das, and William Cohen. 2019. Handling divergent reference texts when evaluating table-to-text generation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4884–4895, Florence, Italy. Association for Computational Linguistics. Biaoyan Fang, Trevor Cohn, Timothy Baldwin, and Lea Frermann. 2023. It’s not only what you say, it’s also who it’s said to: Counterfactual analysis of interactive behavior in the courtroom. In Proceedings of the 13th International Joint Conference on Natural Language Processing and the 3rd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics (Volume 2: Short Papers), pages 197–207, Nusa Dua, Bali. Association for Computational Linguistics. Amir Feder, Katherine A. Keith, Emaad Manzoor, Reid Pryzant, Dhanya Sridhar, Zach Wood-Doughty, Jacob Eisenstein, Justin Grimmer, Roi Reichart, Margaret E. Roberts, Brandon M. Stewart, Victor Veitch, and Diyi Yang. 2022. Causal inference in natural language processing: Estimation, prediction, interpretation and beyond. Transactions of the Association for Computational Linguistics, 10:1138–1158. Anjalie Field, Chan Young Park, Kevin Z. Lin, and Yulia Tsvetkov. 2022. Controlled analyses of social biases in wikipedia bios. In Proceedings of the ACM Web Conference 2022, WWW ’22, page 2624–2635, New York, NY, USA. Association for Computing Machinery. Lorenzo Gatti, Marco Guerini, and Marco Turchi. 2015. Sentiwords: Deriving a high precision and high coverage lexicon for sentiment analysis. IEEE Transactions on Affective Computing, 7(4):409–421. Eduardo Graells-Garrido, Mounia Lalmas, and Filippo Menczer. 2015. First women, second sex: Gender bias in Wikipedia. In Proceedings of the 26th ACM conference on hypertext & social media, pages 165– 174. Paul W Holland. 1986. Statistics and causal inference. Journal of the American statistical Association, 81(396):945–960. Christoph Hube. 2017. Bias in wikipedia. In Proceedings of the 26th International Conference on World Wide Web Companion, WWW ’17 Companion, page 717–721, Republic and Canton of Geneva, CHE. International World Wide Web Conferences Steering Committee. Clayton Hutto and Eric Gilbert. 2014. Vader: A parsimonious rule-based model for sentiment analysis of social media text. In Proceedings of the international AAAI conference on web and social media, volume 8, pages 216–225. Mihir Kale and Abhinav Rastogi. 2020. Text-to-text pre-training for data-to-text tasks. In Proceedings of the 13th International Conference on Natural Language Generation, pages 97–102, Dublin, Ireland. Association for Computational Linguistics. Katherine Keith, David Jensen, and Brendan O’Connor. 2020. Text and causal inference: A review of using text to remove confounding from causal estimates. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5332– 5344, Online. Association for Computational Linguistics. J Peter Kincaid, Robert P Fishburne Jr, Richard L Rogers, and Brad S Chissom. 1975. Derivation of new readability formulas (automated readability index, fog count and flesch reading ease formula) for navy enlisted personnel. Piotr Konieczny and Maximilian Klein. 2018. Gender gap through time and space: A journey through wikipedia biographies via the wikidata human gender indicator. New Media & Society, 20(12):4608–4633. 415 Rémi Lebret, David Grangier, and Michael Auli. 2016. Neural text generation from structured data with application to the biography domain. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1203–1213, Austin, Texas. Association for Computational Linguistics. Changmao Li and Jeffrey Flanigan. 2023. Task contamination: Language models may not be few-shot anymore. arXiv preprint arXiv:2312.16337. Shujie Li, Liang Li, Ruiying Geng, Min Yang, Binhua Li, Guanghu Yuan, Wanwei He, Shao Yuan, Can Ma, Fei Huang, et al. 2024. Unifying structured data as graph for data-to-text pre-training. arXiv preprint arXiv:2401.01183. Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, and Yin Tat Lee. 2023. Textbooks are all you need II: phi-1.5 technical report. arXiv preprint arXiv:2309.05463. Paul Pu Liang, Chiyu Wu, Louis-Philippe Morency, and Ruslan Salakhutdinov. 2021. Towards understanding and mitigating social biases in language models. In International Conference on Machine Learning, pages 6565–6576. PMLR. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74–81. Rowan Hall Maudslay, Hila Gonen, Ryan Cotterell, and Simone Teufel. 2019. It’s all in the name: Mitigating gender bias with name-based counterfactual data substitution. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5267–5275, Hong Kong, China. Association for Computational Linguistics. Sewon Min, Kalpesh Krishna, Xinxi Lyu, Mike Lewis, Wen-tau Yih, Pang Koh, Mohit Iyyer, Luke Zettlemoyer, and Hannaneh Hajishirzi. 2023. FActScore: Fine-grained atomic evaluation of factual precision in long form text generation. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 12076–12100, Singapore. Association for Computational Linguistics. Moin Nadeem, Anna Bethke, and Siva Reddy. 2021. StereoSet: Measuring stereotypical bias in pretrained language models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5356–5371, Online. Association for Computational Linguistics. Pranav Narayanan Venkit, Sanjana Gautam, Ruchi Panchanadikar, Ting-Hao Huang, and Shomir Wilson. 2023. Nationality bias in text generation. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 116–122, Dubrovnik, Croatia. Association for Computational Linguistics. Judea Pearl. 2009. Causal inference in statistics: An overview. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research, 21(1):5485–5551. Joseph Reagle and Lauren Rhue. 2011. Gender bias in wikipedia and britannica. International Journal of Communication, 5:21. Clement Rebuffel, Thomas Scialom, Laure Soulier, Benjamin Piwowarski, Sylvain Lamprier, Jacopo Staiano, Geoffrey Scoutheeten, and Patrick Gallinari. 2021. Data-QuestEval: A referenceless metric for data-to-text semantic evaluation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 8029–8036, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Adam Roberts, Colin Raffel, and Noam Shazeer. 2020. How much knowledge can you pack into the parameters of a language model? In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5418–5426, Online. Association for Computational Linguistics. Anna Samoilenko and Taha Yasseri. 2014. The distorted mirror of wikipedia: a quantitative analysis of wikipedia coverage of academics. EPJ data science, 3:1–11. Maarten Sap, Saadia Gabriel, Lianhui Qin, Dan Jurafsky, Noah A. Smith, and Yejin Choi. 2020. Social bias frames: Reasoning about social and power implications of language. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5477–5490, Online. Association for Computational Linguistics. Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, and Nanyun Peng. 2019. The woman worked as a babysitter: On biases in language generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3407– 3412, Hong Kong, China. Association for Computational Linguistics. Tony Sun, Andrew Gaut, Shirlyn Tang, Yuxin Huang, Mai ElSherief, Jieyu Zhao, Diba Mirza, Elizabeth Belding, Kai-Wei Chang, and William Yang Wang. 2019. Mitigating gender bias in natural language processing: Literature review. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1630–1640, Florence, Italy. Association for Computational Linguistics. 416 Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288. Francesca Tripodi. 2023. Ms. categorized: Gender, notability, and inequality on wikipedia. New Media & Society, 25(7):1687–1707. Claudia Wagner, David Garcia, Mohsen Jadidi, and Markus Strohmaier. 2015. It’s a man’s Wikipedia? Assessing gender inequality in an online encyclopedia. In Proceedings of the international AAAI conference on web and social media, volume 9, pages 454–463. Julia Watson, Barend Beekhuizen, and Suzanne Stevenson. 2023. What social attitudes about gender does BERT encode? leveraging insights from psycholinguistics. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6790–6809, Toronto, Canada. Association for Computational Linguistics. Ann Yuan, Daphne Ippolito, Vitaly Nikolaev, Chris Callison-Burch, Andy Coenen, and Sebastian Gehrmann. 2021. Synthbio: A case study in humanai collaborative curation of text datasets. In 35th Conference on Neural Information Processing Systems (NeurIPS 2021) Track on Datasets and Benchmarks. A Attribute Distributions Figure 4 shows the label distributions of gender and region on the Synthbio dataset. B Nationality-Region Table Table 3 provides the mapping from nationality to its region. C Input Construction To ensure the model generates biographies based on the personal attributes of interest. We reorder the attribute list in the input, moving name, gender, and nationality to the top 3 attributes in order. Following the data-to-text formate in (Kale and Rastogi, 2020), we construct the input as "generate the biography based on name: <name> | gender: <gender> | nationality: <nationality> | [...]", where "[...]" denotes the rest of attributes in the infobox following the format "attribute: <attribute_value>". D Detailed Validation Whether Biography Encodes Desired Nationality Table 4 shows the results of inferring nationality from generated biographies. Male Female Non-Binary 0 200 400 600 800 Count (a) Gender EU AP AF SA ME NA 0 200 400 600 800 Count (b) Region Figure 4: Gender and Region distributions on the Synthbio dataset. Region: (EU = Europe, AF = Africa, AP = Asia–Pacific, SA = South/Latin America, ME = Middle East, NA = North America). E Generated Samples We provide two examples including human-written, true attributed generated, and counterfactual attributed generated biographies. Table 5 and Table 6 generate biographies involving a male Kyrgyzstani individual and a female German individual, respectively. For each biography, we provide two counterfactual biographies where we manipulate gender and nationality. F Experiment with Regard Metric To further investigate the regard vs., sentiment metrics, we compute the regard scores10 on the true attributed generated biographies. As shown in Table 7, under the label “positive”, measuring to what extent the text is positively inclined towards a demographic, we observe similar patterns to that of sentiment. G Qualitative Evaluation We conducted a preliminary analysis of the generated texts and found that the length of generated text varies, especially in gender groups. We measure the correlation between the generated length and the evaluation metrics on the true attributed biography. As shown in Table 8, although we find a positive correlation in text length and evaluators in regions, we do not observe such strong evidence in gender given the length variance in different gender groups. Exploring other latent factors that can potentially impact the bias in biography generation would be an interesting further direction. 10https://huggingface.co/spaces/evaluatemeasurement/regard 417 Nationality Region American North America German Europe Andorran Europe Turkish Europe Albanian Europe Czech Europe French Europe British Europe Lithuanian Europe Greenlandic Europe Swedish Europe Latvian Europe Georgia Europe Swiss Europe Austrian Europe Russian Europe Slovakian Europe Jordanian Middle East Qatari Middle East Indonesian Asia–Pacific Sri Lankan Asia–Pacific South Korean Asia–Pacific Burmese Asia–Pacific Kazakhstani Asia–Pacific Samoan Asia–Pacific Japanese Asia–Pacific Laotian Asia–Pacific Kyrgyzstani Asia–Pacific Chinese Asia–Pacific Costa Rican South/Latin America Venezuelan South/Latin America Dominican South/Latin America Guatemalan South/Latin America Brazilian South/Latin America Zimbabwean Africa Algerian Africa Congolese Africa Kenyan Africa Gabonese Africa South African Africa Table 3: Mapping nationality to its corresponding region. True Counterfactual American 0.939 0.740 German 0.953 0.514 Andorran 0.871 0.558 Turkish 0.950 0.334 Albanian 0.817 0.529 Czech 0.674 0.179 French 1.000 0.627 British 0.850 0.623 Lithuanian 0.857 0.347 Greenlandic 0.929 0.779 Swedish 0.967 0.760 Latvian 0.707 0.280 Georgia 0.439 0.281 Swiss 0.947 0.653 Austrian 0.902 0.466 Russian 0.963 0.658 Slovakian 0.565 0.223 Jordanian 0.443 0.149 Qatari 0.627 0.030 Indonesian 0.651 0.383 Sri Lankan 0.900 0.717 South Korean 0.949 0.730 Burmese 0.917 0.593 Kazakhstani 0.512 0.127 Samoan 0.980 0.899 Japanese 0.966 0.563 Laotian 0.921 0.758 Kyrgyzstani 0.776 0.289 Chinese 0.920 0.801 Costa Rican 0.303 0.070 Venezuelan 0.829 0.396 Dominican 0.600 0.246 Guatemalan 0.794 0.093 Brazilian 0.931 0.362 Zimbabwean 0.790 0.330 Algerian 0.691 0.236 Congolese 0.762 0.266 Kenyan 0.770 0.234 Gabonese 0.906 0.670 South African 0.927 0.856 Table 4: Results of inferring nationality from generated biographies. 418 Attributes: name: Alibek Kulibaliev | gender: male | nationality: Kyrgyzstani | birth_date: 10 February 1947 | birth_place: Kirovskoe, Kyrgyzstan | death_date: 7 May 2015 | death_place: Bishkek, Kyrgyzstan | sport: wrestling | country: Kyrgyzstan | hometown: Bishchek, Kyrgyzstan | citizenship: Russian | education: Ivano-Frankivsk National Technical University of Oil and Gas – master’s | event: freestyle wrestling | position: heavyweight | years_active: 1970-1986 | retired: 1986 | height: 6’3in | weight: 286lb | coach: Ahmet Bilalov | national_team: Kyrgyzstan | worlds: 1974, 1979, 1982, 1986, 1989 – gold medal | olympics: 1972, 1976, 1980, 1984, 1988 – gold medal | mother: Raisa Kurmanova | children: Aleksander, Kirill, Margarita, Dmitri Human-written: Alibek Kulibaliev was born on February 10, 1947 in Kirovskoe, Kyrgyzstan. He was a Kyrgyzstani-born Soviet wrestler who competed in the heavyweight division. He won gold medals in freestyle wrestling in the 1974, 1979, 1982, 1986, and 1989 World Championships, as well as the 1972, 1976, 1980, 1984, 1988 Olympic gold medals. Throughout his career he was coached by Ahmet Bilalov. His height is 6’3in and weight is about 286 lb. He was retired in 1986. Alibek Kulibaliev educated in Ivano-Frankivsk National Technical University of Oil and Gas – master’s. His hometown is Bishchek, Kyrgyzstan. He is the son of Raisa Kurmanova and he has four kids Aleksander, Kirill, Margarita, Dmitri. He was died 7 May 2015 at Bishkek, Kyrgyzstan. Generation: Alibek vladimirovich kulibaliev -lrb- 10 february 1947 – 7 may 2015, in bishkek, kyrgyzstan – 7 april 2015 in kharkiv, soviet union – was a russian wrestler. he competed in the freestyle wrestling event at the 1972 summer olympics, where he won the gold medal, and the silver medal at the 1976 summer universiade. in the 1980s he became a member of the kashmir national wrestling team, which won the bronze medal in the men’s freestyle event. after retiring from wrestling in 1986, his son, aleksander, was also a wrestler for the ukrainian national team. his brother, the former uzbekistani wrestler aleksandr, also wrestled for the national team in 1972, 1976, 1980, 1984, 1988, 1989, 1992, 1993, 1994, 1995, 1996, 1997, 1998, Counterfactual attributes: name: Alibek Kulibaliev | gender: female | nationality: Kyrgyzstani | birth_date: 10 February 1947 | [...] Counterfactual Generation: Alibek kulibaliev (10 february 1947 – 7 may 2015), in bishkek, kyrgyzstan, died in a car accident. she was a former freestyle wrestler, who competed at the 1972 summer olympics, and won the gold medal in the women’s freestyle wrestling event. Counterfactual attributes: name: Alibek Kulibaliev | gender: male | nationality: Swedish | birth_date: 10 February 1947 | [...] Counterfactual Generation: Alibek vladimirovich kulibaliev (10 february 1947 – 7 may 2015, in bishkek, kyrgyzstan ) was a swedish wrestler. he competed in the freestyle wrestling event at the 1972 summer olympics, where he won the gold medal, and the silver medal at the 1988 summer universiade. in the 1980s he became a member of the russian national wrestling team, which won the bronze medal in the heavyweight event. after the 1980 world championships in russia he returned to the ukraine team. during the 1990s a number of swiss wrestlers were killed in a car crash in sydney, australia. his son, aleksander, was also a wrestler in the u.s., but he was unable to compete due to a medical condition. as a result of his injuries he died in the early 1990s of a heart attack Table 5: Example 1 Human-written and generated biographies. The true personal attributes are male and Kyrgyzstani (top). The counterfactual attributes are female (middle) and Swedish (bottom). 419 Attributes: name: Ottoline Helga Helga Werther | gender: female | nationality: German | birth_date: 07 July 1898 | birth_place: Cologne, Germany | death_date: 28 April 1991 | death_place: Cologne, Germany | resting_place: University of Cologne cemetary | alma_mater: University of Cologne | education: PhD in art history | occupation: author | notable_works: The Invention of Nature | language: German | genre: biography | awards: Pulitzer Prize for Biography | mother: Anka Werther | father: Heinrich Werther | partner: Franz Kafka | children: none Human-written: Ottoline Helga Helga Werther was a German author best known for her book The Invention of Nature. Born on July 7, 1898 in Cologne, Germany to Anka and Heinrich Werther, Ottoline obtained a PhD in art history at the University of Cologne. She was a recipient of a Pulitzer Prize. Ottoline was married to Franz Kafka, with whom she had no children. Ottoline died on 28 April 1991 in Cologne, Germany and was laid to rest at the University of Cologne cemetary.’, ’Ottoline was born on July 7, 1898 in Cologne, Germany to Anka and Heinrich Werther, Ottoline studied PhD in art history at the University of Cologne. Ottoline was a German author best known for her biography . Ottoline was married to Franz Kafka. Her notable works include The Invention of Nature and received Pulitzer Prize for Biography. Ottoline was died on 28 April 1991 in Cologne, Germany and was cremated in University of Cologne cemetary. Generation: Ottoline helga (7 july 1898 – 28 april 1991) was a german author. she was born in cologne, germany, and studied art history at the university of klagenfurt. her first book, the invention of nature ”, was published in 1931. it won the pulitzer prize for biography. Counterfactual attributes: name: Ottoline Helga Helga Werther | gender: male | nationality: German | birth_date: 07 July 1898 | [...] Counterfactual Generation: Ottoline helga (7 july 1898 – 28 april 1991), in cemetary of the university of cologne, died in a car accident. he was a german author, best known for his book the invention of nature ”, which won the Pulitzer prize for biography in 1939. the book was adapted into a film, and was screened at the berlin international film festival. it was the first film to be screened in germany, the first in the united states. in 1931, he wrote a book titled nature : a history of science, art, literature, philosophy, religion, politics, history, culture, society, economy, science and the arts” and was awarded the pulitzer prize in biography for his work. his work has been translated into several languages, including english, french, italian, spanish, german and italian. Werther was born in 1898 in the city of klagen Counterfactual Attributes: name: Ottoline Helga Helga Werther | gender: female | nationality: Japanese | birth_date: 07 July 1898 | [...] Counterfactual Generation: Ottoline helga (7 july 1898 – 28 april 1991) was a german-born japanese author. she was born in cologne, germany, and studied art history at the university of kobe. her first book, the invention of nature ”, was published in 1931. it was followed by a biography of her husband, Franz kafka. Table 6: Example 2 Human-written and generated biographies. The true personal attributes are female and German (top). The counterfactual attributes are male (middle) and Japanese (bottom). 420 Positive Neutral Negative Other Gender Male 0.71 0.10 0.08 0.11 Female 0.63 0.18 0.08 0.11 Non-Binary 0.54 0.27 0.09 0.11 Region Europe 0.66 0.18 0.07 0.10 Africa 0.65 0.14 0.09 0.12 Asia–Pacific 0.53 0.20 0.14 0.13 North America 0.77 0.08 0.03 0.13 Table 7: Regard scores for different attribute groups. H A full Pair-Wise Comparison on Counterfactual Generation Table 9 and Table 10 show Welch’s t-test results for counterfactual gender and nationality generations on semantic matching, respectively. Table 11 and Table 12 show Welch’s t-test results for counterfactual gender and nationality generations on sentiment, respectively. 421 Ave. Words Semantic Matching Sentiment Score Pearson R Score Pearson R Gender Male 155.75 0.407 0.00 (p=0.96) 0.041 0.13 (p=0.00) Female 56.01 0.451 -0.11 (p=0.00) 0.036 0.11 (p=0.00) Non-Binary 44.24 0.435 -0.18 (p=0.00) 0.025 0.15 (p=0.00) Region Europe 86.04 0.431 -0.34 (p=0.00) 0.035 0.28 (p=0.00) Africa 84.12 0.438 -0.32 (p=0.00) 0.036 0.19 (p=0.00) Asia–Pacific 83.18 0.429 -0.25(p=0.00) 0.033 0.13 (p=0.00) North America 85.62 0.410 -0.18 (p=0.16) 0.031 0.42 (p=0.00) Table 8: Correlations between generated length and evaluation scores on the true attributed biography generation. Pearson R represents the Pearson R correlation between the generated length (Ave. Words) and evaluation (i.e., Semantic Matching and Sentiment). p-value male, co(male) vs, male, co(female) 0.0 male, co(male) vs, male, co(non-binary) 0.0 male, co(male) vs, female, co(female) 0.0 male, co(male) vs, female, co(non-binary) 0.0 male, co(male) vs, non-binary, co(female) 0.0 male, co(male) vs, non-binary, co(non-binary) 0.0 male, co(female) vs, female, co(male) 0.0 male, co(female) vs, female, co(female) 0.0 male, co(female) vs, female, co(non-binary) 0.0 male, co(female) vs, non-binary, co(male) 0.0 male, co(female) vs, non-binary, co(female) 0.047 male, co(non-binary) vs, female, co(male) 0.0 male, co(non-binary) vs, female, co(female) 0.0 male, co(non-binary) vs, female, co(non-binary) 0.0 male, co(non-binary) vs, non-binary, co(male) 0.0 female, co(male) vs, female, co(female) 0.0 female, co(male) vs, female, co(non-binary) 0.0 female, co(male) vs, non-binary, co(male) 0.025 female, co(male) vs, non-binary, co(female) 0.0 female, co(male) vs, non-binary, co(non-binary) 0.0 female, co(female) vs, non-binary, co(male) 0.0 female, co(female) vs, non-binary, co(female) 0.0 female, co(female) vs, non-binary, co(non-binary) 0.0 female, co(non-binary) vs, non-binary, co(male) 0.0 female, co(non-binary) vs, non-binary, co(female) 0.015 female, co(non-binary) vs, non-binary, co(non-binary) 0.0 non-binary, co(male) vs, non-binary, co(female) 0.0 non-binary, co(male) vs, non-binary, co(non-binary) 0.0 Table 9: Welch’s t-test results for counterfactual gender generations on semantic matching. We only show the results where p<0.1. 422 p-value Europe, co(Europe) vs, Asia–Pacific, co(Europe) 0.076 Europe, co(Europe) vs, Asia–Pacific, co(Asia–Pacific) 0.09 Europe, co(Africa) vs, Asia–Pacific, co(Europe) 0.036 Europe, co(Africa) vs, Asia–Pacific, co(Asia–Pacific) 0.043 Europe, co(Asia–Pacific) vs, Asia–Pacific, co(Europe) 0.013 Europe, co(Asia–Pacific) vs, Asia–Pacific, co(Africa) 0.09 Europe, co(Asia–Pacific) vs, Asia–Pacific, co(Asia–Pacific) 0.013 Africa, co(Europe) vs, Asia–Pacific, co(Europe) 0.064 Africa, co(Europe) vs, Asia–Pacific, co(Asia–Pacific) 0.077 Africa, co(Asia–Pacific) vs, Asia–Pacific, co(Europe) 0.004 Africa, co(Asia–Pacific) vs, Asia–Pacific, co(Africa) 0.031 Africa, co(Asia–Pacific) vs, Asia–Pacific, co(Asia–Pacific) 0.004 Table 10: Welch’s t-test results for counterfactual nationality generations on semantic matching. We only show the results where p<0.1. p-value male, co(male) vs, male, co(female) 0.0 male, co(male) vs, male, co(non-binary) 0.0 male, co(male) vs, female, co(female) 0.0 male, co(male) vs, female, co(non-binary) 0.0 male, co(male) vs, non-binary, co(female) 0.0 male, co(male) vs, non-binary, co(non-binary) 0.0 male, co(female) vs, male, co(non-binary) 0.0 male, co(female) vs, female, co(male) 0.0 male, co(female) vs, female, co(non-binary) 0.0 male, co(female) vs, non-binary, co(male) 0.0 male, co(female) vs, non-binary, co(non-binary) 0.0 male, co(non-binary) vs, female, co(male) 0.0 male, co(non-binary) vs, female, co(female) 0.0 male, co(non-binary) vs, female, co(non-binary) 0.011 male, co(non-binary) vs, non-binary, co(male) 0.0 male, co(non-binary) vs, non-binary, co(female) 0.0 male, co(non-binary) vs, non-binary, co(non-binary) 0.096 female, co(male) vs, female, co(female) 0.0 female, co(male) vs, female, co(non-binary) 0.0 female, co(male) vs, non-binary, co(female) 0.0 female, co(male) vs, non-binary, co(non-binary) 0.0 female, co(female) vs, female, co(non-binary) 0.0 female, co(female) vs, non-binary, co(male) 0.0 female, co(female) vs, non-binary, co(non-binary) 0.0 female, co(non-binary) vs, non-binary, co(male) 0.0 female, co(non-binary) vs, non-binary, co(female) 0.0 non-binary, co(male) vs, non-binary, co(female) 0.0 non-binary, co(male) vs, non-binary, co(non-binary) 0.0 non-binary, co(female) vs, non-binary, co(non-binary) 0.0 Table 11: Welch’s t-test results for counterfactual gender generations on sentiment. We only show the results where p<0.1. 423 p-value Europe, co(Europe) vs, Europe, co(Africa) 0.026 Europe, co(Europe) vs, Africa, co(Europe) 0.001 Europe, co(Europe) vs, Africa, co(Africa) 0.001 Europe, co(Europe) vs, Africa, co(Asia–Pacific) 0.0 Europe, co(Europe) vs, Asia–Pacific, co(Africa) 0.0 Europe, co(Europe) vs, Asia–Pacific, co(Asia–Pacific) 0.044 Europe, co(Africa) vs, Europe, co(Asia–Pacific) 0.046 Europe, co(Africa) vs, Asia–Pacific, co(Europe) 0.028 Europe, co(Asia–Pacific) vs, Africa, co(Europe) 0.002 Europe, co(Asia–Pacific) vs, Africa, co(Africa) 0.002 Europe, co(Asia–Pacific) vs, Africa, co(Asia–Pacific) 0.0 Europe, co(Asia–Pacific) vs, Asia–Pacific, co(Africa) 0.0 Europe, co(Asia–Pacific) vs, Asia–Pacific, co(Asia–Pacific) 0.085 Africa, co(Europe) vs, Asia–Pacific, co(Europe) 0.001 Africa, co(Europe) vs, Asia–Pacific, co(Asia–Pacific) 0.025 Africa, co(Africa) vs, Asia–Pacific, co(Europe) 0.001 Africa, co(Africa) vs, Asia–Pacific, co(Asia–Pacific) 0.011 Africa, co(Asia–Pacific) vs, Asia–Pacific, co(Europe) 0.0 Africa, co(Asia–Pacific) vs, Asia–Pacific, co(Asia–Pacific) 0.001 Asia–Pacific, co(Europe) vs, Asia–Pacific, co(Africa) 0.0 Asia–Pacific, co(Europe) vs, Asia–Pacific, co(Asia–Pacific) 0.037 Asia–Pacific, co(Africa) vs, Asia–Pacific, co(Asia–Pacific) 0.001 Table 12: Welch’s t-test results for counterfactual nationality generations on sentiment. We only show the results where p<0.1. 424
2024_short
39
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 47–55 August 11-16, 2024 ©2024 Association for Computational Linguistics Resisting the Lure of the Skyline: Grounding Practices in Active Learning for Morphological Inflection Saliha Murado˘glu| Michael Ginno Miikka Silfverbergp Mans Huldeno |The Australian National University (ANU) oUniversity of Colorado Boulder pUniversity of British Columbia Firstname.Lastname@ {|anu.edu.au, ocolorado.edu, pubc.ca} Abstract Active learning (AL) aims to reduce the burden of annotation by selecting informative unannotated samples for model building. In this paper, we explore the importance of conscious experimental design in the language documentation and description setting, particularly the distribution of the unannotated sample pool. We focus on the task of morphological inflection using a Transformer model. We propose context motivated benchmarks: a baseline and skyline. The baseline describes the frequency weighted distribution encountered in natural speech. We simulate this using Wikipedia texts. The skyline defines the more common approach, uniform sampling from a large, balanced corpus (UniMorph, in our case), which often yields mixed results. We note the unrealistic nature of this unannotated pool. When these factors are considered, our results show a clear benefit to targeted sampling. 1 Introduction Active learning (AL) (Cohn et al., 1996) is a data annotation approach, where the aim is to direct annotation effort at examples that are maximally helpful for model performance. Most active learning work in NLP involves pool-based active learning (McCallum et al., 1998) where a small seed training set is used to create an initial model, and additional examples are selected and annotated from a large pool of unannotated data. Several selection strategies exist, including confidencebased (Lewis, 1995; Cohn et al., 1996; Muradoglu and Hulden, 2022), diversity-based (Brinker, 2003; Sener and Savarese, 2018; Yuan et al., 2020) and committee-based approaches (Liere and Tadepalli, 1997; Farouk Abdel Hady and Schwenker, 2010); these approaches aim to outperform a uniform random selection baseline. AL is often advocated as a method to rapidly improve model performance in low-resource settings (Baldridge and Palmer, 2009; Ambati, 2012; ara fin kat que sah tur tyv zul 0.00 0.25 0.50 0.75 1.00 Accuracy Languages Figure 1: Accuracies reported across the eight languages considered, for the seed, LMC(LEMMA,MSD), LMC(WORDFORM, MSD) and TMC experiments. The maroon lines mark the Frequency Stratified (Baseline) accuracy, and the blue lines mark the Uniform sampling (Skyline) accuracy. Grießhaber et al., 2020), where limited annotation capacity needs to be directed intelligently. Nevertheless, AL performance is inconsistent in practice and both success-stories and failures are reported in the literature (Settles et al., 2008; Baldridge and Palmer, 2009; Althammer et al., 2023), demonstrating that it is non-trivial to beat a uniform random selection baseline. Language documentation is a natural application for active learning. Approximately half the world’s languages face the grim forecast of extinction, with 47 around 35–42% of these still substantially undocumented (Krauss, 1992; Wurm, 2001; Bianco, 2002; Crystal, 2002; Austin and Sallabank, 2011; Seifart et al., 2018). However, data for training automated systems is often limited, and additional annotation bears a high opportunity cost, limited not only by resources but also native speaker availability. Simulated active learning The gold standard of active learning experiments for language documentation is the use of human annotators in a genuine low-resource setting, as in studies such as Baldridge and Palmer (2009). However, for practical reasons, most AL research uses simulated active learning, where a small seed training set is sampled from a large existing annotated dataset, and the remaining annotated examples represent the pool from which new examples are selected. While this approach allows for experimentation without costly manual annotations, it introduces a number of confounding factors which can complicate interpretation of results. Baldridge and Palmer (2009) note that unit annotation cost is generally assumed in simulated active learning experiments, but this approach can be unrealistic when selection strategies tend to choose ambiguous examples that are harder, and therefore slower, to annotate. In a similar vein, Margatina and Aletras (2023) argue that in simulations, the unannotated pool tends to be carefully curated and preprocessed (as it is formed from an existing annotated training set). These pools often display unrealistic distributions of classes and lexical and structural diversity, which can be a highly inaccurate reflection of data in the wild, where noise, irrelevant examples and repetitions abound. To ensure validity of the results of simulated active learning experiments (particularly for low-resource settings), it is important to mimic a setting with limited lexical diversity and characteristic class imbalance, as is present in natural language datasets. Active learning for morphology In this paper, we analyze pool-based active learning for language documentation, focusing on models for morphological inflection. We first argue that existing typelevel morphological resources (such as Unimorph, Batsuren et al. 2022) are a poor representation of a realistic unannotated pool in language documentation settings, unless some notion of lexical frequency is injected into the data. We then present experiments on morphological inflection, which demonstrate that the composition of the unannotated pool is highly influential for performance in simulated active learning experiments. We employ two selection criteria: transformer model confidence as previously investigated by Muradoglu and Hulden (2022) and a novel language model-based selection criterion. Given a carefully designed, frequency stratified, pool of unannotated examples mimicking naturalistic text, these methods can beat a uniform random baseline by a sizable margin. However, given a naïvely constructed, unannotated pool (based on the Unimorph database), neither of the methods confers an advantage over the baseline. 2 Data We conduct experiments on the UniMorph database of inflection tables (Batsuren et al., 2022)1 on a typologically diverse set of eight languages: Arapaho (arp), Finnish (fin), Georgian (kat), Quechua (que), Sakha (sah), Turkish (tur), Tuvan (tyv) and Zulu (zul). Our choice of languages is motivated by a balance between morphological complexity, data availability (both UniMorph and Wikipedia) and endangerment classification according to UNESCO Atlas of the World’s Languages in Danger (Moseley, 2010). Where possible, we have attempted to maximise the diversity of our subject languages. Across 8 languages, 6 language families2. Further, three of the languages considered (sah, tyv and arp) are considered endangered. We exclusively include adjectives and nouns in our experiments.3 This simplifies analysis while still representing substantial morphological diversity as nouns make up a sizable portion of text cross-linguistically (Hudson, 1994; Liang and Liu, 2013). To model word frequencies, we extract the Wikipedias for each language and form the intersection of word types present in UniMorph (U) and Wikipedia (W): U ∩W. We also retain the much larger part of the UniMorph database U \ W, representing types not found in the Wikipedia. Data sampling is visualized in Figure 2.4 Our development and initial seed training set are formed by sampling (without replacement) 500 and 1,000 1Released under the CC BY-SA 3.0 license 2Uralic, Kartvelian, Turkic (South Siberian, North Siberia, Western Oghuz), Quechuan, Algonquian, Bantu. 3If these inflect identically, we combine them into a category of nominals. See Table 4 for details. 4All data and code will be made available at https://github.com/michaelpginn/ active-learning-for-morphology/. Code released under the MIT license. 48 U ∩ W Dev Sample 500 forms randomly Seed Train Set Test Sample enough lemmas to create a test of length approximately 1,000 Transformer model scores Skyline Sample 500 forms randomly Language model scores Wiki Uniform Sample 500 forms randomly Baseline Sample the 500 most frequent forms Oracle Sample 500 of the incorrect forms TMC Sample 500 of the lowest confidence forms LMC(Lemma,MSD) Sample 500 of the highest scored LMC(Wordform,MSD) Sample 500 of the highest scored v Wikipedia (W) UniMorph (U) Sample 4 lemma paradigms Sample the 1,000 most Frequent forms + Augmented Samples U \ W Figure 2: Overview of data sampling, where U and W notes the UniMorph and Wikipedia databases respectively, U ∩W denotes the intersection and U \ W the difference. Arrows note a sampling without replacement. forms, respectively, from U ∩W (with their UniMorph lemmata and MSDs). For each language, we additionally supplement the seed training set with four complete inflection paradigms extracted from U \ W to ensure that all inflections are covered by the seed training data5. From the remaining types in U \ W, we then sample 1,000 for testing. Thus, we ensure that there is no overlap between the data splits. Using each of the different active learning selection strategies presented below in Section 3, we sample an additional 500 training examples. For our baseline method, those are sampled from U ∩ W, while for all the other methods, additional data comes from U \ W. 3 Experimental Setup We perform experiments on the word inflection task (Cotterell et al., 2016; Goldman et al., 2023) with datasets consisting of triplets ⟨lexeme, MSD, inflected form⟩, e.g. ⟨smile, V;PST, smiled⟩. Models are trained to predict the correct inflected form based on the lemma and MSD. We train transformer (Vaswani et al., 2017) inflection models 5In a language documentation setting, this information could be supplied by the linguist. using fairseq (Ott et al., 2019).6 In all experiments, we apply data augmentation using the lemma-copy mechanism (Liu and Hulden, 2022). We initially train models on the seed training set and use various sampling strategies to select 500 additional examples from the unused pool, evaluating the change in inflection performance when training on the augmented set. The test and development sets, disjoint with all training data, remain unchanged through this process. We experiment with the following strategies: Frequency Stratified (Baseline) We use word frequencies from Wikipedia to perform weighted random sampling from the pool U ∩W. This method serves as a linguistically motivated, realistic baseline, accounting for the Zipfian nature of language, and approximating realistic lexical diversity and the naturalistic distribution of inflected forms. Wiki Uniform We additionally report results on a baseline which samples from U ∩W without frequency weighting. Uniform sampling (Skyline) Our second baseline (which we call Skyline, as it is near-unbeatable) uses uniform sampling without word frequency information from U \ W. This setting is unrealistic in a language documentation setting—due to the lexical diversity and balanced class distribution of the samples, rare paradigm slots are over-represented. Oracle Inspired by Muradoglu and Hulden (2022), we sample forms which the model fails to inflect correctly. Since this requires knowledge of gold standard forms, the method can only be used for comparison. This strategy mimics feedback from a linguist or language expert. In many cases, there are more than 500 incorrectly inflected forms to choose from. When this happens, we select maximally erroneous examples, that is, the examples with the greatest Levenshtein distance to the gold standard form.7 In contrast, when there are fewer than 500 incorrectly inflected forms, we augment the set using correctly inflected forms with the lowest confidence. Transformer model confidence (TMC) Again following Muradoglu and Hulden (2022), we train an initial inflection model on the seed training set. We use this model to make predictions and select the examples with the lowest confidence scores. 6Our model and training hyperparameters follow Liu and Hulden (2020), described in Appendix A. 7This can be thought of as maximizing the informativity of the examples. 49 Language model confidence scores (LMC) We train two character-level language models (LM) over lemma+MSD and wordform+MSD sequences (respectively) from the seed training set. This means that our LMs return probabilities for sequences like walk+V+PAST and walked+V+PAST. We use the LMs to select examples with low probability or, equivalently, high negative log-likelihood (NLL).8 We experiment with using NLL from either the input lemma or the predicted inflected forms (not gold forms), and term these approaches LMC(LEMMA,MSD) and LMC(WORDFORM,MSD), respectively. 4 Results and Discussion Experiment ∆accuracy Baseline 0.067 Wiki Uniform 0.122 LMC(lemma,MSD) 0.124 Oracle 0.193 LMC(Wordform,MSD) 0.230 TMC 0.247 Skyline 0.298 Table 1: Average change in accuracy observed across each sampling strategy. Table 1 reports the average change in accuracy from the seed models for each sampling strategy. The two benchmarks provide upper and lower limits for sample selection. The baseline underperforms on average, an expected result given the Zipfian nature of language. As the sampling strategy is dependent on natural texts, the samples have less diverse lemmas and MSDs. Meanwhile, the skyline outperforms every other strategy for five of the eight languages; again, this result is unsurprising, as the UniMorph database provides highly diverse examples. However, it is nearly impossible to replicate this approach, which treats all words equally regardless of rarity, in a realistic setting. While the WIKI UNIFORM strategy shows greater average improvements than the baseline, the results across languages are mixed9. For example, while Finnish shows a 28.8% accuracy gain, performance on Quechua decreases by 0.05%. 8This approach is inspired by the observation that novel words are often inflected based on analogy to know words (Skousen, 1990; Derwing and Skousen, 1994; Prasada and Pinker, 1993). The LMC approach aims to seek out examples which are not represented by the seed training set. 9See Table 5 for details. It is surprising that the oracle, intended to mimic a language expert, is outperformed by the either the TMC or LMC(WORDFORM, MSD) strategies for six of the languages considered. For almost all of the languages examined, the Levenshtein distance is the primary weighing factor10. The edit distance fails to consider the diversity of vocabulary or MSD. Compound words can also skew the Levenshtein distance significantly. For example, for the Turkish compound otomatik bilet makinası (“automatic ticket machine”), if the model does not capture the space between otomatik and bilet, though characters are merely shifted to the left, the Levenshtein distance is artificially high. 4.1 Edit Diversity Figure 3: Change in edit diversity (H), compared to the base train set, for each sampling method. While the baseline method leads to reduced edit diversity, most of the sampling methods instead result in increased diversity. We seek to understand the effects of the various sampling strategies by estimating the relative edit diversity for each sample. For each dataset, we enumerate the edits (insertion, deletion, or replacement of subwords) needed to transform each lexeme to the inflected word. We collect edits of the same type and subword to give an edit distribution. Using this distribution, we compute entropy, which is higher for a distribution with a more diverse set of edits, and lower when the dataset is dominated by a few frequent edits. We provide the entropy, relative to the base training set, in Figure 3. We observe that the strategies that sample from Wikipedia (which tend to be less successful) have lower entropy on average, while the Oracle, TMC, and skyline samples (which are more successful) 10Since there are more than 500 incorrect predictions for the remaining U \ W dataset. The only exception is Georgian, with < 500 incorrect predictions. 50 have higher entropy. We also find correlations between lower cross-entropy with the test set and better performance (see section 4.1.1). The distinction between the naïve UniMorph pool and the frequency stratified sampling is mirrored in the language documentation and description (LDD) community with the elicitation or naturalistic speech debate. Chelliah (2001) notes that ‘language description based solely on textual data results in patchy and incomplete descriptions’. Similarly, Evans (2008) highlights the necessity of both linguistic phenomena targeting elicitation and observed communicative events11 (often narratives, conversations, etc.). 4.1.1 Cross-entropy and performance Figure 4: Regression between accuracy and crossentropy for various sampling strategies on Turkish inflection. We compute the cross-entropy between the test set edit distribution and each of the sampled sets. We find that across languages, increased crossentropy, which indicates that the sampled set is more dissimilar from the test set, tends to correlate with decreased performance. For example, Figure 4 plots the performance and cross entropy for the various sampling strategies for Turkish. This is an intuitive result, confirming the importance of sampling a training set that is similar in distribution to the target test set. We run linear regression for each language and report the slopes and R2 values. It is clear that in most cases, reducing crossentropy by choosing a sampling strategy that approximates the test distribution is beneficial to performance. However, since the test distribution is not necessarily known in real-world active learning scenarios, this remains a difficult task to solve. 11Himmelmann (1998) distinguishes these categories further, with a third ‘Staged communicative events’. This refers to tasks that are prompted for linguistic purposes, such as a picture task. Language Slope R2 arp -0.25 0.381 fin -0.03 0.148 kat -0.05 0.731** que -0.49 0.512* sah -0.06 0.716** tur -0.01 0.842** tyv -0.05 0.321 zul -0.08 0.847** Table 2: Linear regressions for each language between cross-entropy of sampled sets with test sets (x) and accuracy on the test set (y). * indicates significance with n = 8 and p < 0.05, ** indicates significance with p < 0.01. 5 Conclusion Computational methods can aid language documentation and description projects by processing and analyzing recorded data. Active learning approaches can greatly aid in the rapid development of robust automated systems by focusing annotation on highly beneficial samples, but existing research on simulated AL often makes unrealistic assumptions. We compare a standard approach (skyline), where data is sampled from unrealistic linguistic resources, an approach based on naturalistic word frequencies (baseline), and a number of strategies motivated by encouraging lexical diversity. Our skyline and baseline approaches serve as analogs to elicitation and naturalistic recording. We find that the skyline approach is difficult to beat, but as few languages have sufficient corpora with complete, diverse paradigms, we argue this approach is an unrealistic baseline for AL. Meanwhile, we find clear benefits from targeted sampling strategies, with inflection model confidence (TMC) and character LM scores (LMC(WORDFORM, MSD)) yielding the greatest improvements. 6 Limitations Three of our eight languages are members of the Turkic language family. Despite our best efforts, it was not possible to have a set of languages that covered a significant range of typological features, particularly pertaining to phonology and morphology. In most cases, either the existing Wikipedia was too small or there were issues with orthography that did not map neatly with the UniMorph database. This is a limitation of the study presented 51 and remains an intended future rectification for the authors. It is important to note that the style and register of Wikipedia is limited. As such, certain MSDs are underrepresented or over-represented, compared with natural speech. Our experiments use Wikipedia articles to simulate texts/recordings of language, a limited approximation of the natural setting that does not cover a broad range of genres. However, constructing a representative corpus in the language documentation context is an almost impossible endeavour. 7 Ethics Statement If our results do not hold across a wide variety of languages, our suggested AL approaches may result in annotator effort that is not beneficial to the model. This would be a significant opportunity cost, particularly in the case of languages which are considered critically endangered. Automated systems for inflection and language documentation are limited in scope and carry some degree of error. While they can greatly aid in documentation projects, they should not be used to entirely replace human annotators and linguists in the documentation, study, and preservation of languages. Particularly for Indigenous and endangered languages, care should be taken to use data and automated systems in a way consistent with the desires of the language community (Schwartz, 2022). Finally, training models carries an unavoidable environmental cost (Bender et al., 2021). While our research uses small models, we strive to ensure the benefits outweigh these costs. References Sophia Althammer, Guido Zuccon, Sebastian Hofstätter, Suzan Verberne, and Allan Hanbury. 2023. Annotating data for fine-tuning a neural ranker? current active learning strategies are not better than random selection. In Proceedings of the Annual International ACM SIGIR Conference on Research and Development in Information Retrieval in the Asia Pacific Region, pages 139–149. Vamshi Ambati. 2012. Active learning and crowdsourcing for machine translation in low resource scenarios. Ph.D. thesis, Carnegie Mellon University. Peter Austin and Julia Sallabank. 2011. The Cambridge handbook of endangered languages. Cambridge University Press. Jason Baldridge and Alexis Palmer. 2009. How well does active learning actually work? time-based evaluation of cost-reduction strategies for language documentation. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 296–305. Khuyagbaatar Batsuren, Omer Goldman, Salam Khalifa, Nizar Habash, Witold Kiera´s, Gábor Bella, Brian Leonard, Garrett Nicolai, Kyle Gorman, Yustinus Ghanggo Ate, Maria Ryskina, Sabrina Mielke, Elena Budianskaya, Charbel El-Khaissi, Tiago Pimentel, Michael Gasser, William Abbott Lane, Mohit Raj, Matt Coler, Jaime Rafael Montoya Samame, Delio Siticonatzi Camaiteri, Esaú Zumaeta Rojas, Didier López Francis, Arturo Oncevay, Juan López Bautista, Gema Celeste Silva Villegas, Lucas Torroba Hennigen, Adam Ek, David Guriel, Peter Dirix, Jean-Philippe Bernardy, Andrey Scherbakov, Aziyana Bayyr-ool, Antonios Anastasopoulos, Roberto Zariquiey, Karina Sheifer, Sofya Ganieva, Hilaria Cruz, Ritván Karahóˇga, Stella Markantonatou, George Pavlidis, Matvey Plugaryov, Elena Klyachko, Ali Salehi, Candy Angulo, Jatayu Baxi, Andrew Krizhanovsky, Natalia Krizhanovskaya, Elizabeth Salesky, Clara Vania, Sardana Ivanova, Jennifer White, Rowan Hall Maudslay, Josef Valvoda, Ran Zmigrod, Paula Czarnowska, Irene Nikkarinen, Aelita Salchak, Brijesh Bhatt, Christopher Straughn, Zoey Liu, Jonathan North Washington, Yuval Pinter, Duygu Ataman, Marcin Wolinski, Totok Suhardijanto, Anna Yablonskaya, Niklas Stoehr, Hossep Dolatian, Zahroh Nuriah, Shyam Ratan, Francis M. Tyers, Edoardo M. Ponti, Grant Aiton, Aryaman Arora, Richard J. Hatcher, Ritesh Kumar, Jeremiah Young, Daria Rodionova, Anastasia Yemelina, Taras Andrushko, Igor Marchenko, Polina Mashkovtseva, Alexandra Serova, Emily Prud’hommeaux, Maria Nepomniashchaya, Fausto Giunchiglia, Eleanor Chodroff, Mans Hulden, Miikka Silfverberg, Arya D. McCarthy, David Yarowsky, Ryan Cotterell, Reut Tsarfaty, and Ekaterina Vylomova. 2022. UniMorph 4.0: Universal Morphology. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 840–855, Marseille, France. European Language Resources Association. Emily M. Bender, Timnit Gebru, Angelina McMillanMajor, and Shmargaret Shmitchell. 2021. On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, FAccT ’21, page 610–623, New York, NY, USA. Association for Computing Machinery. Joseph Lo Bianco. 2002. Real world language politics and policy. Language policy: Lessons from global models. Monterey, California: Monterey Institute of International Studies. Klaus Brinker. 2003. Incorporating diversity in active learning with support vector machines. In Proceedings of the 20th international conference on machine learning (ICML-03), pages 59–66. 52 Shobhana L. Chelliah. 2001. The role of text collection and elicitation in linguistic fieldwork, page 152–165. Cambridge University Press. David A Cohn, Zoubin Ghahramani, and Michael I Jordan. 1996. Active learning with statistical models. Journal of artificial intelligence research, 4:129–145. Ryan Cotterell, Christo Kirov, John Sylak-Glassman, David Yarowsky, Jason Eisner, and Mans Hulden. 2016. The SIGMORPHON 2016 shared Task— Morphological reinflection. In Proceedings of the 14th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 10–22, Berlin, Germany. Association for Computational Linguistics. David Crystal. 2002. Language Death. Canto. Cambridge University Press. Bruce L Derwing and Royal Skousen. 1994. Productivity and the english past tense. The reality of linguistic rules, Amsterdam/Philadelphia, John Benjamins Publishing Company, pages 193–218. Nicholas Evans. 2008. Review of gippert, jost, nikolaus himmelmann and ulrike mosel (eds.), essentials of language documentation. Language Documentation & Conservation, 2:340–350. Mohamed Farouk Abdel Hady and Friedhelm Schwenker. 2010. Combining committee-based semi-supervised learning and active learning. Journal of Computer Science and Technology, 25(4):681–698. Omer Goldman, Khuyagbaatar Batsuren, Salam Khalifa, Aryaman Arora, Garrett Nicolai, Reut Tsarfaty, and Ekaterina Vylomova. 2023. SIGMORPHON– UniMorph 2023 shared task 0: Typologically diverse morphological inflection. In Proceedings of the 20th SIGMORPHON workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 117–125, Toronto, Canada. Association for Computational Linguistics. Daniel Grießhaber, Johannes Maucher, and Ngoc Thang Vu. 2020. Fine-tuning bert for low-resource natural language understanding via active learning. In Proceedings of the 28th International Conference on Computational Linguistics, pages 1158–1171. Nikolaus P Himmelmann. 1998. Documentary and descriptive linguistics. Linguistics, 36(1):161–196. Richard Hudson. 1994. About 37% of word-tokens are nouns. Language, 70(2):331–339. Michael Krauss. 1992. The world’s languages in crisis. Language, 68(1):4–10. David D Lewis. 1995. A sequential algorithm for training text classifiers: Corrigendum and additional data. In Acm Sigir Forum, volume 29, pages 13–19. ACM New York, NY, USA. Junying Liang and Haitao Liu. 2013. Noun distribution in natural languages. Pozna´n Studies in Contemporary Linguistics, 49(4):509–529. Ray Liere and Prasad Tadepalli. 1997. Active learning with committees for text categorization. In AAAI/IAAI, pages 591–596. Citeseer. Ling Liu and Mans Hulden. 2020. Leveraging principal parts for morphological inflection. In Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 153–161, Online. Association for Computational Linguistics. Ling Liu and Mans Hulden. 2022. Can a transformer pass the wug test? tuning copying bias in neural morphological inflection models. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 739–749, Dublin, Ireland. Association for Computational Linguistics. Katerina Margatina and Nikolaos Aletras. 2023. On the limitations of simulating active learning. In Findings of the Association for Computational Linguistics: ACL 2023, pages 4402–4419, Toronto, Canada. Association for Computational Linguistics. Andrew McCallum, Kamal Nigam, et al. 1998. Employing em and pool-based active learning for text classification. In ICML, volume 98, pages 350–358. Citeseer. Christopher Moseley. 2010. Atlas of the World’s Languages in Danger. Unesco. Saliha Muradoglu and Mans Hulden. 2022. Eeny, meeny, miny, moe. how to choose data for morphological inflection. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 7294–7303, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 48–53, Minneapolis, Minnesota. Association for Computational Linguistics. Sandeep Prasada and Steven Pinker. 1993. Generalisation of regular and irregular morphological patterns. Language and cognitive processes, 8(1):1–56. Lane Schwartz. 2022. Primum Non Nocere: Before working with Indigenous data, the ACL must confront ongoing colonialism. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 724–731, Dublin, Ireland. Association for Computational Linguistics. 53 Frank Seifart, Nicholas Evans, Harald Hammarström, and Stephen C Levinson. 2018. Language documentation twenty-five years on. Language, 94(4):e324– e345. Ozan Sener and Silvio Savarese. 2018. Active learning for convolutional neural networks: A core-set approach. In International Conference on Learning Representations. Burr Settles, Mark Craven, and Lewis Friedland. 2008. Active learning with real annotation costs. In Proceedings of the NIPS workshop on cost-sensitive learning, volume 1. Vancouver, CA:. Royal Skousen. 1990. Analogical Modeling of Language. Springer Netherlands. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30. Stephen A Wurm. 2001. Atlas of the World’s Languages in Danger of Disappearing. Unesco. Michelle Yuan, Hsuan-Tien Lin, and Jordan BoydGraber. 2020. Cold-start active learning through selfsupervised language modeling. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7935–7948. A Model details Across preliminary experiments and the runs listed in this paper, training took around 1,500 compute hours. We ran experiments on the UBC computing cluster and Google Colab. Models were trained with the hyperparameters listed in Table 3. Models had around 10M parameters. Hyperparameter Value Encoder/Decoder layers 4 Encoder/Decoder attention heads 4 Optimization Adam Embedding size 256 Hidden layer size 1024 Learning rate 0.001 Batch Size 400 Label Smoothing 0.1 Gradient clip threshold 1.0 Warmup updates 1000 Max updates 6000 Table 3: Our hyperparameters follow the setup described by Liu and Hulden (2020). B Data Composition Information about the composition for each language is given in Table 4. C Language-Specific Model Accuracies Accuracy scores are reported in Table 5 54 Language POS present N=adj Wikipedia sample Four lemma tables size COPY size Total training set size Test size tyv N ? 1000 336 10 1346 1008 ara A,N N 1000 320 10 1330 1147 kat N Y 1000 64 65 1129 1040 que N Y 1000 768 5 1773 1152 zul A,N N 1000 236 20 1256 992 sah N ? 1000 350 10 1360 1092 tur A,N N 1000 216 30 1246 1068 fin N Y 1000 104 40 1140 1092 Table 4: Seed training and test set composition for each language. The wikipedia sample refers to the frequency weighted sample taken from Wikipedia. The four lemma table size describes the added full paradigms from the Unimorph database. Copy size denotes the number of unique lemma found in the test size. The test size varies for each language as the paradigm sizes differ (and thus the number of lemma). Language Seed ± std Skyline ± std Wiki Uniform ± std Baseline ± std Oracle ± std TMC ± std tyv 0.416 0.028 0.909 0.013 0.686 0.020 0.561 0.034 0.733 0.036 0.711 0.033 ara 0.256 0.007 0.616 0.031 0.336 0.010 0.318 0.019 0.556 0.023 0.448 0.023 kat 0.870 0.023 0.931 0.004 0.926 0.018 0.896 0.020 0.949 0.014 0.922 0.029 que 0.514 0.032 0.820 0.021 0.509 0.023 0.604 0.031 0.811 0.027 0.786 0.012 zul 0.391 0.044 0.791 0.025 0.400 0.016 0.424 0.019 0.576 0.025 0.763 0.026 sah 0.664 0.062 0.972 0.011 0.863 0.021 0.728 0.026 0.912 0.021 0.972 0.004 tur 0.100 0.021 0.227 0.026 0.177 0.026 0.142 0.015 0.167 0.018 0.251 0.030 fin 0.402 0.028 0.727 0.047 0.690 0.020 0.473 0.032 0.453 0.050 0.732 0.024 Language Seed ± std LMC (WF,MSD) ± std LMC (Lem,MSD) ± std LMC(WF) ± std LMC(Lem) ± std tyv 0.416 0.028 0.751 0.027 0.580 0.031 0.695 0.045 0.571 0.014 ara 0.256 0.007 0.600 0.024 0.520 0.042 0.498 0.040 0.372 0.009 kat 0.870 0.023 0.936 0.011 0.918 0.009 0.931 0.010 0.908 0.028 que 0.514 0.032 0.749 0.042 0.707 0.038 0.654 0.044 0.688 0.055 zul 0.391 0.044 0.695 0.029 0.547 0.031 0.625 0.019 0.571 0.021 sah 0.664 0.062 0.861 0.035 0.737 0.019 0.865 0.023 0.787 0.017 tur 0.100 0.021 0.197 0.032 0.171 0.016 0.197 0.037 0.154 0.025 fin 0.402 0.028 0.664 0.038 0.421 0.040 0.683 0.037 0.472 0.042 Table 5: Model accuracies for all sampling strategies considered. The reported standard deviation is calculated across five equal partitions on the test set. TMC = "Transformer Model Confidence", LMC = "Language model confidence", WF = "Wordform", and Lem = "Lemma". 55
2024_short
4
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 425–434 August 11-16, 2024 ©2024 Association for Computational Linguistics Sign Language Translation with Sentence Embedding Supervision Yasser Hamidullah and Josef van Genabith and Cristina España-Bonet {yasser.hamidullah,Josef.van_Genabith,cristinae}@dfki.de German Research Center for Artificial Intelligence (DFKI GmbH) Saarland Informatics Campus, Saarbrücken, Germany Abstract State-of-the-art sign language translation (SLT) systems facilitate the learning process through gloss annotations, either in an end2end manner or by involving an intermediate step. Unfortunately, gloss labelled sign language data is usually not available at scale and, when available, gloss annotations widely differ from dataset to dataset. We present a novel approach using sentence embeddings of the target sentences at training time that take the role of glosses. The new kind of supervision does not need any manual annotation but it is learned on raw textual data. As our approach easily facilitates multilinguality, we evaluate it on datasets covering German (PHOENIX-2014T) and American (How2Sign) sign languages and experiment with mono- and multilingual sentence embeddings and translation systems. Our approach significantly outperforms other gloss-free approaches, setting the new state-of-the-art for data sets where glosses are not available and when no additional SLT datasets are used for pretraining, diminishing the gap between glossfree and gloss-dependent systems. 1 Introduction Sign Language Translation (SLT) aims at generating text from sign language videos. There are several approaches to SLT reported in the literature, with sign2text and sign2gloss2text the most widely used. While sign2text directly translates video into text with or without the help of glosses (Camgöz et al., 2018), sign2gloss2text passes through an intermediate gloss step before translation into spoken language text (Ormel et al., 2010). That is, sign2gloss2text breaks down the problem into two independent sub-problems using glosses as a pivot language. A gloss is a textual label associated with a sign, and, although human signers do not in general use them, performance in automatic SLT has long been upper bounded by the gloss supervision and their use as an intermediate representation (Camgöz et al., 2018). The advantage of translation without glosses is that collecting data is much easier. Even though translation results are better for approaches that use glosses as intermediate representation (Chen et al., 2022a,b), this comes at the cost of annotating all the video data with glosses which is a time consuming manual task. For many data sets glosses are simply not available. On the plus side, with gloss supervision-based SLT architectures, one can take full advantage of the maturity of text2text machine translation between glosses and spoken language text. In this work, we present a novel approach sign2(sem+text), a model that gets rid of glosses and adds supervision through sentence embeddings, SEM, pretrained on raw text and finetuned for sign language. Our experimental results demonstrate the strength of the novel approach on both standard small datasets with gloss annotation and larger datasets without. In the latter case, we achieve state-of-the-art results for the American Sign Language (ASL) dataset How2Sign when no additional SLT datasets are used1 improving over Tarrés et al. (2023) by 4 BLEU points. For German Sign Language (DGS), our new approach achieves translation quality scores between the previous best glossfree system (Zhou et al., 2023) and the current state-of-the-art using glosses (Chen et al., 2022b) on the PHOENIX-2014T dataset. Our code and models are publicly available.2 2 Related Work Camgöz et al. (2018) proposed three formalisations considering SLT as a seq2seq problem that converts a sequence of signs into a sequence of words: (i) sign2text, a model that encodes video frames using pretrained 2D CNNs as spatial features and then 1Uthus et al. (2023) and Rust et al. (2024) obtain better results by using the YouTube-ASL dataset for pretraining. 2https://github.com/yhamidullah/sem-slt 425 Figure 1: sign2sem and sem2text independent modules for the SLT task (left plot). End2end architectures: pipeline system sign2sem2text and multitask system sign2(sem+text) (right plot). uses an RNN to generate the text; (ii) gloss2text, a model learning the translation between a sequence of textual glosses and fluent spoken language text; and (iii) a sign2gloss2text model that adds an extra intermediate gloss layer between the video and output text levels of a sign2text architecture to provide additional gloss supervision using a CTC loss. In follow-up work, Camgöz et al. (2020) proposed an architecture for joint learning continuous sign language recognition (CSLR) and SLT which uses the same input as Camgöz et al. (2018); Zhou et al. (2021), that is, pretrained visual features, but a transformer (Vaswani et al., 2017) for text generation. Camgöz et al. (2020) conjectured that gloss2text results with ground-truth glosses provide an upper bound for SLT. Supporting this assumption, their translation quality on PHOENIX2014T as measured by BLEU (Papineni et al., 2002) achieved 24.5 on gloss2text and 21.8 on sign2text. Yin and Read (2020) used a different visual representation with a multi-cue network (Zhou et al., 2020) to encode videos. Cues included face, hands and pose besides the full frame. With a BLEU score of 24.0 they impoved over sign2gloss2text Camgöz et al. (2020) and concluded that their visual representation was better than the spatial frame embeddings used by the Camgöz et al. (2020). Chen et al. (2022a,b) used both pretraining of a network based on S3Ds (Xie et al., 2018) on action recognition for CSLR (sign2gloss) and pretraining of a textual transformer (gloss2text) with mBART-25 (Liu et al., 2020). Both types of pretraining are progressively adapted to the domain of the task by adding data closer to the domain. An additional mapping network between the vision and language parts allows Chen et al. (2022a) to build an end2end sign2text model relying on internal gloss supervision. To the best of our knowledge, Chen et al. (2022b) is the current state of the art for both sign2text (BLEU=28.95) and sign2gloss2text (BLEU=26.71), all on the PHOENIX-2014T data set. Over the last few years, several gloss-free models have emerged (Li et al., 2020; Zhao et al., 2022; Yin et al., 2023). Zhou et al. (2023) obtains the current state-of-the-art in this category by utilising visual-language pretraining following CLIP (Radford et al., 2021). On the datasets (Camgöz et al. (2018); Zhou et al. (2021)) where the two approaches can be compared, translation quality diminishes by up to 7 BLEU points when the glosses are not used (Yin et al., 2023; Zhou et al., 2023). Tarrés et al. (2023) uses the How2Sign dataset (Duarte et al. (2021)) (where no gloss information is available) with I3D (Carreira and Zisserman (2017)) features for video representations and a Transformer. Uthus et al. (2023) introduces a new dataset, YouTube-ASL, 10 times larger than the previous one (Duarte et al. (2021)), and uses 2D pose estimation and pretraining to improve on Tarrés et al. (2023) best results on How2Sign (BLEU=8.09 vs BLEU=12.4). Simultaneously with our work, Rust et al. (2024) pretrains a self-supervised and privacy-aware visual model on YouTube-ASL to achieve the new state-of-theart performance on How2Sign (BLEU=15.5). 3 SEM-based Architectures In our work we build two systems that revolve around textual sentence embeddings, SEM, as depicted in Figure 1. The figure presents two independent modules sign2sem and sem2text (left plots) that we later combine in sign2sem2text and sign2(sem+text) in an end2end setting (right plot). 426 • sign2sem Module This module predicts an intermediate SEM vector. Given a set of frames (video) features, sign2sem produces a vector representing the sentence signed in the video using a transformer encoder. Pretraining the visual feature sentence embedding model on text. We follow Reimers and Gurevych (2019) and train a Siamese network with twin subnetworks 1 and 2. We compute the loss as the minimum squared error (MSE): Le = 1 N N X i=1 (cos(S1,i, S2,i) −cos(E1,i, E2,i))2 where N is the batch size, and S and E contain the target text SEM vectors and the predicted output SEM vectors respectively. In our experiments, the target SEM vector is given by sBERT (Reimers and Gurevych, 2019) here and in our models below. • sem2text Module This module is responsible for the text reconstruction from sentence embeddings SEM. It produces the text translation of the video features encoded in a given SEM vector. The core sem2text model is a transformer model; we compare encoder–decoder and only decoder systems for the task: - Encoder–decoder (SLTr): this version uses the sign language transformer (SLTr) architecture as in Camgöz et al. (2020). We use a transformer base with a linear projection from the SEM vector input instead of the usual word embedding layer. - Decoder only with pretrained mBART: this version uses a pre-trained mBART-25 decoder and a linear layer to project the SEM vectors into the mBART model dimensions. Pretraining We train both transformers (SLTr from scratch and the already pretrained mBART25) with Wikipedia data and then finetune them on the SL datasets. We compute the translation output loss as the cross-entropy: Lo = CE(T, O) = −1 N N X i=1 M X j=1 (Tij · log(Oij)) where N is the batch size, M the vocabulary size, T is the target text and O is the output text. After pretraining each component (sign2sem and sem2text), we combine them together for end2end training. We explore two approaches: an approach that only uses the output loss Lo, sign2sem2text, and an approach that integrates an additional supervision loss Le, sign2(sem+text). • sign2sem2text is a simple pipeline combination of sign2sem and sem2text where the output SEM of the first module is used as input by the second module to obtain the final text prediction. The two pretrained modules (with both variants SLTr and mBART) are put together and trained in an end2end manner without any intermediate supervision. This formalisation is the sentence embedding equivalent to the sign2gloss2text approach. • sign2(sem+text) performs translation using the same components as sign2sem2text. However, it uses the sign2sem SEM output as additional intermediate supervision using MSE loss computed against the target text SEM in a multitask learning approach. Both, Le (sentence embedding) and Lo (output text), are used jointly to train the model. For SLTr, we take the SEM before the tanh and pooling (see Figure 1 (left–middle)), and project it into the SLTr model dimension. The supervision is applied after the SLTr encoder. For mBART, the supervision happens right before the mBART. Our architectures can be trained both monolingually and mutilingually simply by using multilingual embeddings and merging multilingual training data. 4 Experimental Settings We use two diverse (language and domain) datasets for our experiments: RWTH-PHOENIX-2014T (Camgöz et al., 2018) 11 hours of weather forecast videos from 9 signers. Signers use German Sign Language and both transcriptions and glosses are available. How2Sign (Duarte et al., 2021) 80 hours of instructional videos with speech and transcriptions and their corresponding American Sign Language videos (glosses unavailable) from 11 signers. Detailed statistics for each dataset are provided in Appendix A. We preprocess the textual part of the datasets in a way that allows us to compare to the results obtained by Camgöz et al. (2018). We tokenise and lowercase the input for both training and evaluation. We apply BPE (Sennrich et al., 2016) with a vocabulary size of 1500 for Phoenix2014T and 5000 for How2Sign. When pretraining sem2text SLTr, we use a shared (en–de) vocabulary size of 32000. In cases where we use pretrained models, we keep the tokenisation of the model. 427 PHOENIX-2014T (DGS) How2Sign (ASL) BLEUval BLEU chrF BLEURT BLEUval BLEU chrF BLEURT SLTr sign2sem2text - mono 14.22 13.4±1.4 33.5±1.5 0.379±0.016 6.69 5.7±0.4 21.2±0.4 0.382±0.005 sign2sem2text - multi 13.05 12.7±1.3 32.3±1.3 0.343±0.014 6.48 6.4±0.4 22.0±0.5 0.403±0.006 sign2(sem+text) - mono 19.10 18.8±1.7 40.1±1.5 0.437±0.016 10.41 9.5±0.5 27.4±0.5 0.445±0.006 sign2(sem+text) - multi 17.03 16.6±1.6 37.9±1.5 0.412±0.016 7.85 7.8±0.4 25.4±0.5 0.430±0.006 mBART sign2sem2text - mono 16.67 17.3±1.6 38.2±1.5 0.434±0.016 9.32 9.8±0.5 31.2±0.5 0.477±0.006 sign2sem2text - multi 16.91 16.5±1.6 37.3±1.5 0.425±0.016 9.11 9.6±0.5 31.2±0.5 0.475±0.006 sign2(sem+text) - mono 24.07 24.2±1.9 46.3±1.6 0.483±0.017 12.20 11.7±0.5 32.0±0.5 0.487±0.006 sign2(sem+text) - multi 24.12 24.1±1.9 46.1±1.6 0.481±0.017 12.34 12.0±0.5 31.8±0.5 0.483±0.006 Table 1: Translation performance of our models on validation (val) and test. Best models at 95% confidence level are highlighted. Previous state-of-the-art for gloss-free systems is BLEU=21.44 for PHOENIX (Zhou et al., 2023) and 8.03 for How2Sign (Tarrés et al., 2023). Chen et al. (2022b) achieves 28.95 on PHOENIX with their gloss-assisted system sign2text and 26.71 with sign2gloss2text. Rust et al. (2024) achieves 15.5 on How2Sign pretraining with YouTube-ASL. For video files, we extract frames using ffmpeg. We normalise the images, and resize them to 224x224. In this step, we initially obtain frame features from a pretrained model (Tan and Le, 2019), which does not contain gloss information. We then apply pooling to remove the spatial dimensions, followed by batch normalisation with ReLU, following the approach outlined by Camgöz et al. (2020). This generic approach facilitates the combination of datasets in the multilingual setting. We use two multilingual pretrained models that cover both German and English, sBERT (Reimers and Gurevych, 2019)3 for sentence embeddings and mBART (Liu et al., 2020)4 as a language model. For further pretraining we use 26 million sentences per language from the English and German Wikipedia dumps extracted with Wikitailor (España-Bonet et al., 2023). Following Müller et al. (2022) and Müller et al. (2023), we evaluate the models using three common automatic metrics in machine translation: BLEU (Papineni et al., 2002), chrF (Popovi´c, 2015) and BLEURT (Sellam et al., 2020). Specifics can be found in Appendix C. In all cases, we estimate 95% confidence intervals (CI) via bootstrap resampling (Koehn, 2004) with 1000 samples. 5 Results and Discussion Table 1 presents the results for our models and variants. Two major trends are observed: (i) massive pretraining of the sem2text module (mBART vs SLTr) significantly improves the results, confirming the observations by Chen et al. (2022a) 3We use all-MiniLM-L12-v2 model with 384 dimensions. 4We use mBART-25 1024 dimensions. and (ii) the multitask approach sign2(sem+text) is better than the pipeline approach sign2sem2text. These findings hold for all three evaluation metrics at 95% confidence level. Potentially beneficial effects of multilinguality are less evident. Monolingual and multilingual approaches are not distinguishable within the 95% CIs, possibly due to large differences in the domain of the datasets preventing effective transfer between languages. Our best system, sign2(sem+text) with the pretrained text decoder, achieves state-of-the-art results on How2Sign when no additional SLT dataset is used for pretraining, improving from 8 to 12 BLEU points over Tarrés et al. (2023). For PHOENIX-2014T, we surpass all previous glossfree approaches (24 vs 21 BLEU), but we are still below the best approach that uses glosses (Chen et al., 2022b) (24 vs 29 BLEU). Reconstruction quality: sem2text. In our approach, sentence embeddings take the role of manually produced glosses in previous work. Our sem2text translation module defines the upper-bound results for the full system as gloss2text did in previous work. Our best sign2(sem+text) models with mBART produce a reconstruction score of BLEU 38.0±2.4/23.3±1.0, chrF 57.5±1.9/43.9±0.9 and BLEURT 0.588±0.019/0.571±0.008 for PHOENIX-2014T/How2Sign (see Table 2). Where the comparison with glosses is available (PHOENIX), we improve over gloss2text by up to 10 BLEU points. We hypothesise that a sentence is better represented by its embedding than by a string of glosses and this explains why the translation 428 Figure 2: Average BLEURT score on different token length intervals on PHOENIX-2014T and How2Sign test. PHOENIX-2014T (DGS) BLEUval BLEU chrF BLEURT Camgöz (2018) 20.16 19.26 – – Chen (2022a) 27.61 28.39 – – SLTr - mono 31.53 30.1±2.0 52.2±1.7 0.526±0.018 SLTr - multi 29.20 31.3±2.1 52.9±1.7 0.530±0.018 mBART - mono 37.11 38.0±2.4 57.5±1.9 0.588±0.019 mBART - multi 36.91 37.5±2.3 57.4±1.8 0.584±0.018 How2Sign (ASL) BLEUval BLEU chrF BLEURT SLTr - mono 13.24 14.6±0.6 34.3±0.6 0.489±0.006 SLTr - multi 16.17 16.4±0.7 36.5±0.7 0.529±0.007 mBART - mono 23.04 22.8±1.0 43.3±0.9 0.577±0.008 mBART - multi 24.60 23.3±1.0 43.9±0.9 0.571±0.008 Table 2: Reconstruction quality for the sem2text subtask of our models and gloss2text state-of-the-art on validation (val) and test. Best models at 95% confidence level are highlighted. quality for sem2text is higher than for gloss2text. If these components (sem2text and gloss2text) are the upper-bound to the end2end sign2text translation, SEM-based systems are potentially at an advantage. These results, together with the fact that SEM models can be applied to raw data without annotations, highlight the promising future prospects of, especially, sign2(sem+text). SEM-based vs gloss-based SLT. For comparison purposes, we integrate SEM supervision in a state-of-the-art gloss-based SLT system, Signjoey (Camgöz et al., 2020), by replacing their gloss supervision by SEM supervision. We perform no pretraining and train the two systems under the same conditions. We observe that convergence with SEM is faster and requires less than half of the iterations to finish (5k vs 12k) using the same setting and resources. The detailed training evolution is shown in Appendix D. Translation quality vs output length. Figure 2 shows the token length distribution of PHOENIX2014T and How2Sign along with the average BLEURT score on each interval. The equivalent plots for chrF and BLEU are in Figures 4 and 5 in Appendix E respectively. In the PHOENIX test set, almost 90% of the sentences contain 20 tokens or less, while the number decreases to 60% for How2Sign. The 10-20 token range is the one with the best scores. While the drop in performance in translation quality for long sentences is smaller in How2Sign, the difference in the distribution affects the global quality. 6 Conclusions We present a new approach to sign language translation using automatically computed sentence embeddings instead of manual gloss labels as intermediate representation with (sign2(sem+text)) and without (sign2sem2text) SEM supervision. We outperform the state-of-the-art of gloss-free SLT when no additional SLT datasets are used for pretraining, closing the gap to gloss-based SLT. According to the upper-bound set by sem2text translation quality, there is still room for improvement for the end2end SEM-based SLT models. In this work, we limited ourselves to existing visual feature extractors, in the future we plan to train a SEM-based visual feature extractor on SL datasets in order to get closer to our sem2text upper-bound and match gloss-based performance. Limitations Our SL datasets cover American English and German. Sentence embeddings for these languages are good quality as lots of textual data is available 429 for pre-training. It remains to be studied how the quality of the embeddings affects the final translation quality. This is important for low-resourced languages, i.e. languages with limited amounts of monolingual text data but, to the best of our knowledge, no public sign language data set exists for them. Acknowledgements This work has been funded by by BMBF (German Federal Ministry of Education and Research) via the project SocialWear (grant no. 01IW20002). References Necati Cihan Camgöz, Simon Hadfield, Oscar Koller, Hermann Ney, and Richard Bowden. 2018. Neural sign language translation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 7784–7793. Necati Cihan Camgöz, Oscar Koller, Simon Hadfield, and Richard Bowden. 2020. Sign language transformers: Joint end-to-end sign language recognition and translation. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020, pages 10020–10030. Computer Vision Foundation / IEEE. J. Carreira and A. Zisserman. 2017. Quo vadis, action recognition? a new model and the kinetics dataset. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4724–4733, Los Alamitos, CA, USA. IEEE Computer Society. Yutong Chen, Fangyun Wei, Xiao Sun, Zhirong Wu, and Stephen Lin. 2022a. A simple multi-modality transfer learning baseline for sign language translation. In CVPR, pages 5110–5120. IEEE. Yutong Chen, Ronglai Zuo, Fangyun Wei, Yu Wu, Shujie Liu, and Brian Mak. 2022b. Two-stream network for sign language recognition and translation. Advances in Neural Information Processing Systems, 35:17043–17056. Amanda Duarte, Shruti Palaskar, Lucas Ventura, Deepti Ghadiyaram, Kenneth DeHaan, Florian Metze, Jordi Torres, and Xavier Giro-i Nieto. 2021. How2Sign: A Large-Scale Multimodal Dataset for Continuous American Sign Language. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 2735–2744. Cristina España-Bonet, Alberto Barrón-Cedeño, and Lluís Màrquez. 2023. Tailoring and Evaluating the Wikipedia for in-Domain Comparable Corpora Extraction. Knowledge and Information Systems, 65(3):1365–1397. Markus Freitag, Ricardo Rei, Nitika Mathur, Chi-kiu Lo, Craig Stewart, Eleftherios Avramidis, Tom Kocmi, George Foster, Alon Lavie, and André F. T. Martins. 2022. Results of wmt22 metrics shared task: Stop using bleu – neural metrics are better and more robust. In Proceedings of the Seventh Conference on Machine Translation, pages 46–68, Abu Dhabi. Association for Computational Linguistics. Tom Kocmi, Christian Federmann, Roman Grundkiewicz, Marcin Junczys-Dowmunt, Hitokazu Matsushita, and Arul Menezes. 2021. To ship or not to ship: An extensive evaluation of automatic metrics for machine translation. In Proceedings of the Sixth Conference on Machine Translation, pages 478–494, Online. Association for Computational Linguistics. Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 388–395, Barcelona, Spain. Association for Computational Linguistics. Dongxu Li, Chenchen Xu, Xin Yu, Kaihao Zhang, Ben Swift, Hanna Suominen, and Hongdong Li. 2020. TSPNet: Hierarchical Feature Learning via Temporal Semantic Pyramid for Sign Language Translation. In Proceedings of the 34th International Conference on Neural Information Processing Systems, NIPS’20, Red Hook, NY, USA. Curran Associates Inc. Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pretraining for neural machine translation. Transactions of the Association for Computational Linguistics, 8:726–742. Mathias Müller, Malihe Alikhani, Eleftherios Avramidis, Richard Bowden, Annelies Braffort, Necati Cihan Camgöz, Sarah Ebling, Cristina España-Bonet, Anne Göhring, Roman Grundkiewicz, Mert Inan, Zifan Jiang, Oscar Koller, Amit Moryossef, Annette Rios, Dimitar Shterionov, Sandra Sidler-Miserez, Katja Tissi, and Davy Van Landuyt. 2023. Findings of the second WMT shared task on sign language translation (WMTSLT23). In Proceedings of the Eighth Conference on Machine Translation, pages 68–94, Singapore. Association for Computational Linguistics. Mathias Müller, Sarah Ebling, Eleftherios Avramidis, Alessia Battisti, Michèle Berger, Richard Bowden, Annelies Braffort, Necati Cihan Camgöz, Cristina España-Bonet, Roman Grundkiewicz, Zifan Jiang, Oscar Koller, Amit Moryossef, Regula Perrollaz, Sabine Reinhard, Annette Rios, Dimitar Shterionov, Sandra Sidler-Miserez, Katja Tissi, and Davy Van Landuyt. 2022. Findings of the First WMT Shared Task on Sign Language Translation (WMTSLT22). In Proceedings of the Seventh Conference on Machine Translation, pages 744–772, Abu Dhabi. Association for Computational Linguistics. Ellen Ormel, Onno Crasborn, Els van der Kooij, Lianne van Dijken, Ellen Yassine Nauta, Jens Forster, and 430 Daniel Stein. 2010. Glossing a multi-purpose sign language corpus. In Proceedings of the LREC2010 4th Workshop on the Representation and Processing of Sign Languages: Corpora and Sign Language Technologies, pages 186–191, Valletta, Malta. European Language Resources Association (ELRA). Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Maja Popovi´c. 2015. chrF: character n-gram F-score for automatic MT evaluation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 392–395, Lisbon, Portugal. Association for Computational Linguistics. Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186– 191, Belgium, Brussels. Association for Computational Linguistics. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748–8763. PMLR. Nils Reimers and Iryna Gurevych. 2019. SentenceBERT: Sentence embeddings using Siamese BERTnetworks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992, Hong Kong, China. Association for Computational Linguistics. Phillip Rust, Bowen Shi, Skyler Wang, Necati Cihan Camgöz, and Jean Maillard. 2024. Towards privacyaware sign language translation at scale. Thibault Sellam, Dipanjan Das, and Ankur Parikh. 2020. BLEURT: Learning robust metrics for text generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7881–7892, Online. Association for Computational Linguistics. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715–1725, Berlin, Germany. Association for Computational Linguistics. Mingxing Tan and Quoc Le. 2019. EfficientNet: Rethinking model scaling for convolutional neural networks. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 6105– 6114. PMLR. Laia Tarrés, Gerard I. Gállego, Amanda Duarte, Jordi Torres, and Xavier Giró-i Nieto. 2023. Sign language translation from instructional videos. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, pages 5625–5635. Dave Uthus, Garrett Tanzer, and Manfred Georg. 2023. YouTube-ASL: A Large-Scale, Open-Domain American Sign Language-English Parallel Corpus. In Advances in Neural Information Processing Systems, volume 36, pages 29029–29047. Curran Associates, Inc. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2019. Huggingface’s transformers: State-ofthe-art natural language processing. arXiv preprint arXiv:1910.03771. Saining Xie, Chen Sun, Jonathan Huang, Zhuowen Tu, and Kevin Murphy. 2018. Rethinking spatiotemporal feature learning: Speed-accuracy trade-offs in video classification. In ECCV (15), volume 11219 of Lecture Notes in Computer Science, pages 318–335. Springer. A. Yin, T. Zhong, L. Tang, W. Jin, T. Jin, and Z. Zhao. 2023. Gloss attention for gloss-free sign language translation. In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 2551–2562, Los Alamitos, CA, USA. IEEE Computer Society. Kayo Yin and Jesse Read. 2020. Better sign language translation with STMC-transformer. In Proceedings of the 28th International Conference on Computational Linguistics, pages 5975–5989, Barcelona, Spain (Online). International Committee on Computational Linguistics. Jian Zhao, Weizhen Qi, Wengang Zhou, Nan Duan, Ming Zhou, and Houqiang Li. 2022. Conditional sentence generation and cross-modal reranking for sign language translation. IEEE Transactions on Multimedia, 24:2662–2672. Benjia Zhou, Zhigang Chen, Albert Clapés, Jun Wan, Yanyan Liang, Sergio Escalera, Zhen Lei, and Du Zhang. 2023. Gloss-free sign language translation: Improving from visual-language pretraining. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 20871– 20881. 431 Hao Zhou, Wen gang Zhou, Weizhen Qi, Junfu Pu, and Houqiang Li. 2021. Improving sign language translation with monolingual data by sign back-translation. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 1316–1325. Hao Zhou, Wengang Zhou, Yun Zhou, and Houqiang Li. 2020. Spatial-temporal multi-cue network for continuous sign language recognition. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 13009–13016. 432 A Datasets Statistics Table 3 summarises the statistics for the corpora used in the experiments. Phoenix-2014T How2Sign Src. Lang. German Am. English Tgt. Lang DGS ASL Hours 11 80 Signers 9 11 Sentences 7000 35191 Val. Size 540 1741 Test Size 629 2322 Table 3: Statistics of the corpora used in the experiments. Source (Src.Lang.) and target (Tgt.Lang.) refer to the direction in which the corpora were created; all our experiments involve sign2text. B Infrastructure and Network Hyperparameters We implement our SLT framework using PyTorch, and libraries from sBERT (Reimers and Gurevych, 2019) and Huggingface (Wolf et al., 2019). Our code is publicly available at Github.5 Tables 4, 5 and 6 show the hyperparameters and training times for the sign2sem and sem2text with SLTr and mBART transformers respectively. We run our experiments using 8 A100-80GB GPUs. For sign2sem2text and sign2(sem+text), each experiment runs for 72 hours and the configurations are inherited from the standalone modules sign2sem and sem2text. Parameter Value model all-MiniLM-L12-v2 batch_size_per_device 16 learning_rate 1e-5 input_projection_dim 1024 scheduler warmuplinear Training time 72 hours (5 GPU) Table 4: Hyperparameters for the sign2sem module, we use the defaults of sBERT trainer for the rest. C Automatic Evaluation Following Müller et al. (2022) and Müller et al. (2023), we evaluate the models using three common automatic metrics in machine translation: BLEU (Papineni et al., 2002), chrF (Popovi´c, 2015) and BLEURT (Sellam et al., 2020). Notice that even though other semantic metrics based on embeddings might correlate better with human judge5https://github.com/yhamidullah/sem-slt Parameter Value num_encoder_layers 3 num_decoder_layers 3 d_model 512 ff_size 2048 input_projection_dim 1024 batch_size_per_device_train 32 batch_size_per_device_val 32 learning_rate 1e-5 lr_scheduler reduceLROnPlateau freeze_word_embeddings True Training time 1 hour (1GPU) Table 5: Hyperparameters for the sem2text module with SLTr transformer, the rest are inherited from Camgöz et al. (2020). Parameter Value input_projection_dim 1024 batch_size_per_device_train 4 batch_size_per_device_val 4 learning_rate 1e-5 fp16 True freeze_word_embeddings True Training time 156 hours (8 GPUs) Table 6: Hyperparameters for the sem2text module with mBART decoder, the rest are inherited from the Huggingface trainer default values. ments (Kocmi et al., 2021; Freitag et al., 2022), they cannot be used for sign language translation because the source is video and not text. We use sacreBLEU (Post, 2018) for BLEU6 and chrF7 and the python library for BLEURT.8 Previous work starting with Camgöz et al. (2018) does mainly report only BLEU scores, but they do not specify the BLEU variant used or the signature in sacreBLEU. Therefore, comparisons among systems might not be strictly fair. D Gloss-based vs SEM-based Systems’ Training Performance Figure 3 shows the training evolution for a simple SLT system with no additional supervision (top), additional gloss supervision (middle) and SEM supervision (bottom) implemented in the Signjoey framework (Camgöz et al., 2020) and trained on PHOENIX-2014T. We use the best hypeparameters in Camgöz et al. (2020) and add our SEM supervision as a replacement of their recognition loss. The three plots in Figure 3 include a red line at 6BLEU|nrefs:1|bs:1000|seed:16|case: mixed|eff:no|tok:13a|smooth:exp|version: 2.4.0 7chrF2|nrefs:1|bs:1000|seed:16|case: mixed|eff:yes|nc:6|nw:0|space:no|version: 2.4.0 8BLEURT v0.0.2 using checkpoint BLEURT-20. 433 Figure 3: Validation BLEU on PHOENIX without supervision (top plot), with gloss supervision (middle plot) and with SEM supervision (bottom plot). translation quality BLEU=20 for reference. The first thing to notice is that both supervision methods reach the red line, but the one lacking any additional supervision lays behind. Second, we observe that the system with the additional SEM supervision reaches BLEU=20 earlier than the system with glosses: the gloss system needs 12k to finish and only 5k iterations are needed in the case of SEM. In both cases, we use early stopping with BLEU patience 7. Finally, notice that the gloss and SEM systems achieve the same translation quality but one does not need any data annotation with SEM. E Ablation Study on Sentence Length Following the analysis of Section 5, we include the translation quality scores BLEU and chrF per sentence length. Figure 4: Variation of the average BLEU score on different token length intervals on PHOENIX-2014T (top) and How2Sign (bottom) test sets. Figure 5: Variation of the average chrF score on different token length intervals on PHOENIX-2014T (top) and How2Sign (bottom) test sets. 434
2024_short
40
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 435–444 August 11-16, 2024 ©2024 Association for Computational Linguistics STREAM: Simplified Topic Retrieval, Exploration, and Analysis Module Anton Frederik Thielmann and Arik Reuter and Benjamin Säfken Institute of Mathematics Clausthal University of Technology Christoph Weisser and Manish Kumar BASF Ludwigshafen, Germany Gillian Kant Centre for Statistics University of Göttingen Abstract Topic modeling is a widely used technique to analyze large document corpora. With the evergrowing emergence of scientific contributions in the field, non-technical users may often use the simplest available software module, independent of whether there are potentially better models available. We present a Simplified Topic Retrieval, Exploration, and Analysis Module (STREAM) for user-friendly topic modelling and especially subsequent interactive topic visualization and analysis. For better topic analysis, we implement multiple intruderword based topic evaluation metrics. Additionally, we publicize multiple new datasets that can extend the so far very limited number of publicly available benchmark datasets in topic modeling. We integrate downstream interpretable analysis modules to enable users to easily analyse the created topics in downstream tasks together with additional tabular information. The code is available at the following link: https://github.com/AnFreTh/ STREAM 1 Introduction Identifying latent topics within extensive text corpora is a fundamental task in the field of Natural Language Processing (NLP) and has been of larger scientific interest since the early 2000s (Hofmann, 2001; Blei et al., 2003). Especially with the emergence of contextualized embeddings, extraction algorithms and topic models continue to evolve and achieve increasingly impressive results in terms of topic coherence (Larochelle and Lauly, 2012; Srivastava and Sutton, 2017; Chien et al., 2018; Wang et al., 2019; Dieng et al., 2020). Even, methodologically simpler methods achieve state-ofthe-art results by leveraging document and wordembeddings (Sia et al., 2020; Grootendorst, 2022; Angelov, 2020). The publication of open source software like Gensim ( ˇReh˚uˇrek and Sojka, 2010), the Natural Language Tool Kit (nltk) (Bird et al., 2009) or SpaCy (Vasiliev, 2020) have enabled researchers to apply such models in various fields, including education (Grani´c and Maranguni´c, 2019), offsite construction (Liu et al., 2019), bioinformatics (Liu et al., 2016), communication sciences (Maier et al., 2018), finance (Thormann et al., 2021) and numerous other applications (e.g., (Hall et al., 2008; Daud et al., 2010; Boyd-Graber et al., 2017; Kant et al., 2022; Thielmann et al., 2021; Hannigan et al., 2019; Tillmann et al., 2022)). The OCTIS (optimizing and comparing topic models is simple) (Terragni et al., 2021a) framework in particular has found favor in the scientific community and made fitting and evaluating sophisticated topic models easy and efficient. However, OCTIS lacks the methodologically simpler yet very performant models such as clustering based topic extraction (Sia et al., 2020; Angelov, 2020) and the user-centric implementation of BERTopic (Grootendorst, 2022). Especially the user-friendly implementation and visualization possibilities of BERTopic allow non-technical users to easily analyze their document corpora and visualize their results which has led to a variety of use cases especially in the social sciences (e.g. (Falkenberg et al., 2022; Jeon et al., 2023; Zankadi et al., 2023)). We thus contribute the STREAM (Simplified Topic Retrieval Exploration and Analysis Module) software package. It gets its acronym not only from the easy to use, user-centric topic modelling, evaluation and exploration implementation but also from the integration of downSTREAM models to analyze topic contributions to regression or classification problems. The core of the STREAM package is built on top of the OCTIS framework and allows seamless integration of all of OCTIS’ multitude of models, datasets, evaluation metrics and hyperparameter optimization techniques. 435 1.1 Contributions The contributions of STREAM can be summarized as follows: • STREAM integrates multiple clustering based topic models into the OCTIS framework (see the Appendix for a full list of all available models). • Through interactive visualization methods, STREAM allows easy exploration and analysis of all models. • We publicize multiple multi-modal datasets to enable researchers to compare their models beyond the standard topic modeling datasets, such as 20NewsGroups and Reuters (Mitchell, 1999; Lewis, 1997). • STREAM integrates interpretable downstream modeling by introducing a Neural Additive Topic Model (NAM) (Agarwal et al., 2021) that incorporates the documents topicprevalences along further structural variables into an interpretable downstream regression or classification model. 2 Model Fitting and OCTIS Integration STREAM is effectively built upon the core concepts of the OCTIS package and inherits from the AbstractModel, AbstractMetric and OctisDataset classes. Thus, all models, evaluation metrics, visualization functions, datasets and downstream models are perfectly integrable with all of OCTIS’ models and metrics. Datasets Creating custom datasets including tabular data is as simple as running the following few lines of code:   from stream . data_utils import TMDataset df = pd . read_csv("your_data.csv") dataset = TMDataset ( ) dataset . create_load_save_dataset( data=df , dataset_name="your_name" , save_dir="save directory" , doc_column="text" , #column name where documents are stored label_column="popularity" )   All textual data is preprocessed according to the users specifications of the preprocessing pipeline and therefore, e.g., lower cased, stopwords removed and lemmatized. In the specified directory, the necessary files and a .csv file storing the tabular data are saved. Model fitting Fitting a model (here e.g. a simple Kmeans clustering topic model) can subsequently be done simply by running the following code:   from stream . models import KmeansTM model = KmeansTM(num_topics=20) model_output = model . train_model(dataset)   Depending on the model, the hyperparameters can easily be adjusted. Note, that all STREAM datasets are fully usable with all OCTIS models and users can thus easily fit e.g. a LDA (Blei et al., 2003) or ETM (Dieng et al., 2020) on the TMDataset class. Evaluation STREAM offers multiple new, intruder-word based topic evaluation metrics (Thielmann et al., 2024b) alongside classical NPMI coherence scores (Lau et al., 2014), computed over the complete documents and not over sliding windows, and also Embedding based Coherence metrics (Terragni et al., 2021b). See the Appendix for an overview over all available metrics. The evaluation of a model can thus be done by simply running:   from stream . metrics import ISIM metric = ISIM(dataset) metric . score(model_output)   2.1 Available Datasets In addition to the implemented models, metrics and downstream tasks, we publicize multiple datasets suited for topic model comparison. • Multiple Spotify datasets comprised of the songs’ lyrics and various tabular features, such as the popularity, danceability or acousticness of the songs. • A new Reddit dataset, which is filtered for "Gamestop" (GME) from the Subreddit "r/wallstreetbets". The data is taken from the thread "What are your moves tomorrow?". It is covering the time around the GME short squeeze of 2021. • A new Stocktwits dataset also filtered for "Gamestop" (GME). It is covering the time around the GME short squeeze of 2021. • In addition, we upload the preprocessed Reuters and Poliblogs (Roberts et al., 2018) datasets that are well suited for comparing topic model outputs. 436 x1 f1(x1) x2(tab) f2(x2) xK(top) fK(xK) Visual Interpretability Topic Analysis β + PJ j=1 fj(xj(tab)) + PK k=1 fk(xk(top)) β Loss x1(tab) Normalization · · ·· · · · · · · · · xJ(tab) Normalization · · ·· · · · · · x1 x2 · · · xJ 1 2 · · · C ... ... ... ... Tabular Features x1(top) Normalization · · ·· · · · · · · · · xK(top) Normalization · · ·· · · · · · Topical Feature Networks Topical Prevalence xtop Topic Model doc1 doc2 docN Documents Figure 1: STREAM model architecture. After fitting a topic model, a downstream NAM can be fit and analyzed. Table 1: Overview over preprocessed datasets that are available in STREAM. Additionally, the OCTIS datasets, BBC-News, 20 Newsgroups, M10, DBLP are available. Name # Docs # Words Reuters 8,929 24,803 Reddit_GME 21,549 21,309 Poliblogs 13,246 70,726 Spotify_most_popular 4,538 53,181 Spotify_least_popular 4,374 111,738 Spotify_random 4,185 80,619 Stocktwits_GME 11,114 19,383 Stocktwits_GME_large 136,138 80,435 2.2 Topic Analysis One of the core concepts of topic modelling is the subsequent qualitative and visual analysis of the created topics. In addition to the available topicword-lists and matrices, STREAM implements multiple visualization methods to easily analyze the created topics. Besides classical wordclouds, the created topic clusters, topical distances, or top word distributions can be interactively visualized.   from stream . visuals import visualize_topic_model visualize_topic_model(model , port=8050)   437 Figure 2: Topical distances of all topics towards an interactively selected topic. The distances are calculated based on topical centroids and cosine similarities in the embedding space. 3 Downstream Tasks While the visual analysis of topics is often very helpful in analyzing a large corpus, the contents of documents often also have effects on other variables. Roberts et al. (2018) e.g. introduced a model that captures the effects of additional tabular variables on topics. STREAM offers the possibility to analyze the effects of topics and additional tabular variables on any given target variable, via implementing a downstream NAM1. The general form of a NAM can be written as: E(y) = h  β + J X j=1 fj(xj)  , (1) where h(·) is the activation function used in the output layer, e.g. linear activation for a simple regression task or softmax activation for a classification task. x ∈Rj are the input features, β describes the intercept. The shape-functions are expressed as fj : R →R and represent the MultiLayer Perceptron (MLP) corresponding to the j-th feature. The model structure of a simple NAM is given in Figure 3. 1see an example in the appendix f1 fJ ......... x1 PJ j=1 ˆy h ............... xJ Figure 3: Architecture of a classical NAM. All features are fit independently through a Multi-Layer Perceptron and summed before the activation function and final output layer Further, let x ≡(xtab, xdoc) denote the categorical and numerical (continuous) structural features xtab and xdoc denote the documents. After fitting a topic model (see section 2), STREAM extracts the documents topical prevalences and thus "creates" z ≡(xtab, xtop), a probability vector over the documents and topics. Note, that x(i) j(tab) denotes the jth tabular feature of the i-th observation and x(i) k(top) denotes document i-th topical prevalence for topic k. In order to preserve interpretability the available downstream model is given by: h(E [y]) = β + J X j=1 fj(xj(tab)) + K X k=1 fk(xk(top)), (2) Thus, the visualization of shape-function fk shows 438 the impact topic k has on a target variable y and the visualization of fj shows the impact of tabular feature j. With the given datasets and examples available in STREAM, this could represent the effect a topic created from the Spotify dataset and a songs duration have on a songs popularity. With a fitted topic model (see section 2), fitting a downstream model is straight forward leveraging the pytorch trainer class. Subsequently, all shape functions can easily be visualized similar to the plots introduced by Agarwal et al. (2021).   from pytorch_lightning import Trainer from stream . NAM import DownstreamModel # Instantiate the DownstreamModel downstreammodel = DownstreamModel( trained_topic_model=topic_model , #your trained topic model target_column='day' , #specify your target column task='regression' , #or 'classification' dataset=dataset , batch_size=128 , lr=0.0005 )    # Use PyTorch Lightning's Trainer to train and validate the model trainer = Trainer(max_epochs=10) trainer . fit(downstreammodel) # Plotting from stream . visuals import plot_downstream_model plot_downstream_model(downstream_model)   4 Conclusion In this paper, we present the STREAM framework. A user-friendly topic modeling module for creating datasets, training and evaluating topic models, visualizing results and fitting interpretable downstream models. The proposed framework is a python library and closely interacts with the existing OCTIS framework from Terragni et al. (2021a). Future adaptations could include the integration of further more performant or e.g. distributional downstream models (Chang et al., 2022; Luber et al., 2023; Thielmann et al., 2024a) to further allow researchers to analyze the effect a topic has on a regression or classification task. 5 Limitations We present a python package for topic modeling. While all implemented models, visualizations and the downstream models are straightforward, the actual interpretation of the results and figures is still done by the user. Given that especially textual data might include a lot of noise or harmful language, we must therefore stress the users to be careful in their final assessment of their created results. Additionally, while NAMs (Agarwal et al., 2021) offer visual interpretability, they do not allow for statistical significance as the more theoretical Generalized Additive Models (Wood, 2017) or direct causal inference. 439 References Suman Adhya, Avishek Lahiri, Debarshi Kumar Sanyal, and Partha Pratim Das. 2022. Improving contextualized topic models with negative sampling. In Proceedings of the 19th International Conference on Natural Language Processing (ICON), pages 128–138, New Delhi, India. Association for Computational Linguistics. Rishabh Agarwal, Levi Melnick, Nicholas Frosst, Xuezhou Zhang, Ben Lengerich, Rich Caruana, and Geoffrey E Hinton. 2021. Neural additive models: Interpretable machine learning with neural nets. Advances in neural information processing systems, 34:4699–4711. Dimo Angelov. 2020. Top2vec: Distributed representations of topics. arXiv preprint arXiv:2008.09470. Federico Bianchi, Silvia Terragni, and Dirk Hovy. 2021. Pre-training is a hot topic: Contextualized document embeddings improve topic coherence. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 759–766, Online. Association for Computational Linguistics. Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural language processing with Python: analyzing text with the natural language toolkit. O’Reilly Media, Inc. David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent dirichlet allocation. Journal of machine Learning research, 3(Jan):993–1022. Jordan Boyd-Graber, Yuening Hu, David Mimno, et al. 2017. Applications of topic models. Foundations and Trends® in Information Retrieval, 11(2-3):143– 296. Chun-Hao Chang, Rich Caruana, and Anna Goldenberg. 2022. NODE-GAM: Neural generalized additive model for interpretable deep learning. In International Conference on Learning Representations. Jen-Tzung Chien, Chao-Hsi Lee, and Zheng-Hua Tan. 2018. Latent dirichlet mixture model. Neurocomputing, 278:12–22. Ali Daud, Juanzi Li, Lizhu Zhou, and Faqir Muhammad. 2010. Knowledge discovery through directed probabilistic topic models: a survey. Frontiers of computer science in China, 4(2):280–301. Adji B Dieng, Francisco JR Ruiz, and David M Blei. 2020. Topic modeling in embedding spaces. Transactions of the Association for Computational Linguistics, 8:439–453. Max Falkenberg, Alessandro Galeazzi, Maddalena Torricelli, Niccolò Di Marco, Francesca Larosa, Madalina Sas, Amin Mekacher, Warren Pearce, Fabiana Zollo, Walter Quattrociocchi, et al. 2022. Growing polarization around climate change on social media. Nature Climate Change, 12(12):1114–1121. Andrina Grani´c and Nikola Maranguni´c. 2019. Technology acceptance model in educational context: A systematic literature review. British Journal of Educational Technology, 50(5):2572–2593. Maarten Grootendorst. 2022. Bertopic: Neural topic modeling with a class-based tf-idf procedure. arXiv preprint arXiv:2203.05794. David Hall, Dan Jurafsky, and Christopher D Manning. 2008. Studying the history of ideas using topic models. In Proceedings of the 2008 conference on empirical methods in natural language processing, pages 363–371. Timothy R Hannigan, Richard FJ Haans, Keyvan Vakili, Hovig Tchalian, Vern L Glaser, Milo Shaoqing Wang, Sarah Kaplan, and P Devereaux Jennings. 2019. Topic modeling in management research: Rendering new theory from textual data. Academy of Management Annals, 13(2):586–632. Thomas Hofmann. 2001. Unsupervised learning by probabilistic latent semantic analysis. Machine learning, 42(1):177–196. Timo Honkela. 1997. Self-organizing maps in natural language processing. Ph.D. thesis, Citeseer. Eunji Jeon, Naeun Yoon, and So Young Sohn. 2023. Exploring new digital therapeutics technologies for psychiatric disorders using bertopic and patentsberta. Technological Forecasting and Social Change, 186:122130. Gillian Kant, Levin Wiebelt, Christoph Weisser, Krisztina Kis-Katos, Mattias Luber, and Benjamin Säfken. 2022. An iterative topic model filtering framework for short and noisy user-generated data: analyzing conspiracy theories on twitter. International Journal of Data Science and Analytics, pages 1–21. Thomas K Landauer, Peter W Foltz, and Darrell Laham. 1998. An introduction to latent semantic analysis. Discourse processes, 25(2-3):259–284. Hugo Larochelle and Stanislas Lauly. 2012. A neural autoregressive topic model. Advances in Neural Information Processing Systems, 25. Jey Han Lau, David Newman, and Timothy Baldwin. 2014. Machine reading tea leaves: Automatically evaluating topic coherence and topic model quality. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 530–539. Daniel Lee and H Sebastian Seung. 2000. Algorithms for non-negative matrix factorization. Advances in neural information processing systems, 13. David D Lewis. 1997. Reuters-21578 text categorization collection data set. 440 Guiwen Liu, Juma Hamisi Nzige, and Kaijian Li. 2019. Trending topics and themes in offsite construction (osc) research: The application of topic modelling. Construction innovation. Lin Liu, Lin Tang, Wen Dong, Shaowen Yao, and Wei Zhou. 2016. An overview of topic modeling and its current applications in bioinformatics. SpringerPlus, 5(1):1–22. Mattias Luber, Anton Thielmann, and Benjamin Säfken. 2023. Structural neural additive models: Enhanced interpretable machine learning. arXiv preprint arXiv:2302.09275. Mattias Luber, Anton Thielmann, Christoph Weisser, and Benjamin Säfken. 2021. Community-detection via hashtag-graphs for semi-supervised nmf topic models. arXiv preprint arXiv:2111.10401. Daniel Maier, Annie Waldherr, Peter Miltner, Gregor Wiedemann, Andreas Niekler, Alexa Keinert, Barbara Pfetsch, Gerhard Heyer, Ueli Reber, Thomas Häussler, et al. 2018. Applying lda topic modeling in communication research: Toward a valid and reliable methodology. Communication Methods and Measures, 12(2-3):93–118. Tom Mitchell. 1999. Twenty Newsgroups. UCI Machine Learning Repository. DOI: https://doi.org/10.24432/C5C323. Hamed Rahimi, David Mimno, Jacob Hoover, Hubert Naacke, Camelia Constantin, and Bernd Amann. 2024. Contextualized topic coherence metrics. In Findings of the Association for Computational Linguistics: EACL 2024, pages 1760–1773, St. Julian’s, Malta. Association for Computational Linguistics. Radim ˇReh˚uˇrek and Petr Sojka. 2010. Software Framework for Topic Modelling with Large Corpora. In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pages 45–50, Valletta, Malta. ELRA. http://is.muni.cz/ publication/884893/en. Nils Reimers and Iryna Gurevych. 2019. Sentence-bert: Sentence embeddings using siamese bert-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Arik Reuter, Anton Thielmann, Christoph Weisser, Benjamin Säfken, and Thomas Kneib. 2024. Probabilistic topic modelling with transformer representations. arXiv preprint arXiv:2403.03737. Margaret Roberts, Brandon Stewart, Dustin Tingley, Kenneth Benoit, Maintainer Brandon Stewart, LinkingTo Rcpp, et al. 2018. Package ‘stm’. R Package Version, 1(3):3. Suzanna Sia, Ayush Dalmia, and Sabrina J. Mielke. 2020. Tired of topic models? clusters of pretrained word embeddings make for fast and good topics too! In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1728–1736, Online. Association for Computational Linguistics. Akash Srivastava and Charles Sutton. 2017. Autoencoding variational inference for topic models. In International Conference on Learning Representations. Yee Teh, Michael Jordan, Matthew Beal, and David Blei. 2004. Sharing clusters among related groups: Hierarchical dirichlet processes. Advances in neural information processing systems, 17. Silvia Terragni, Elisabetta Fersini, Bruno Giovanni Galuzzi, Pietro Tropeano, and Antonio Candelieri. 2021a. Octis: Comparing and optimizing topic models is simple! In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations, pages 263–270. Silvia Terragni, Elisabetta Fersini, and Enza Messina. 2021b. Word embedding-based topic similarity measures. In International Conference on Applications of Natural Language to Information Systems, pages 33–45. Springer. Anton Thielmann, René-Marcel Kruse, Thomas Kneib, and Benjamin Säfken. 2024a. Neural additive models for location scale and shape: A framework for interpretable neural regression beyond the mean. In Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, volume 238 of Proceedings of Machine Learning Research, pages 1783–1791. Anton Thielmann, Arik Reuter, Quentin Seifert, Elisabeth Bergherr, and Benjamin Säfken. 2024b. Topics in the haystack: Enhancing topic quality through corpus expansion. Computational Linguistics, pages 1–36. Anton Thielmann, Christoph Weisser, Thomas Kneib, and Benjamin Säfken. 2023. Coherence based document clustering. In 2023 IEEE 17th International Conference on Semantic Computing (ICSC), pages 9–16. IEEE. Anton Thielmann, Christoph Weisser, and Astrid Krenz. 2021. One-class support vector machine and lda topic model integration—evidence for ai patents. In Soft computing: Biomedical and related applications, pages 263–272. Springer. Anton Thielmann, Christoph Weisser, and Benjamin Säfken. 2024c. Human in the loop: How to effectively create coherent topics by manually labeling only a few documents per class. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 8395–8405. 441 Marah-Lisanne Thormann, Jan Farchmin, Christoph Weisser, René-Marcel Kruse, Benjamin Säfken, and Alexander Silbersdorff. 2021. Stock price predictions with lstm neural networks and twitter sentiment. Statistics, Optimization & Information Computing, 9(2):268–287. Arne Tillmann, Lindrit Kqiku, Delphine Reinhardt, Christoph Weisser, Benjamin Säfken, and Thomas Kneib. 2022. Privacy estimation on twitter: Modelling the effect of latent topics on privacy by integrating xgboost, topic and generalized additive models. In 2022 IEEE Smartworld, Ubiquitous Intelligence Computing, Scalable Computing Communications, Digital Twin, Privacy Computing, Metaverse, Autonomous Trusted Vehicles, pages 2325–2332. Yuli Vasiliev. 2020. Natural language processing with Python and spaCy: A practical introduction. No Starch Press. Rui Wang, Deyu Zhou, and Yulan He. 2019. Atm: Adversarial-neural topic model. Information Processing & Management, 56(6):102098. Christoph Weisser, Christoph Gerloff, Anton Thielmann, Andre Python, Arik Reuter, Thomas Kneib, and Benjamin Säfken. 2023. Pseudo-document simulation for comparing lda, gsdmm and gpm topic models on short and sparse text using twitter data. Computational Statistics, 38(2):647–674. Simon N Wood. 2017. Generalized additive models: an introduction with R. CRC press. Hajar Zankadi, Abdellah Idrissi, Najima Daoudi, and Imane Hilal. 2023. Identifying learners’ topical interests from social media content to enrich their course preferences in moocs using topic modeling and nlp techniques. Education and Information Technologies, 28(5):5567–5584. A Appendix A.1 Available Models Multiple topic model/document clustering and subsequent topic extraction models are available in STREAM. Additionally, STREAM inherits from all models available in OCTIS. Thus, the following models are available: Table 3: Available Models Name Implementation WordCluTM STREAM CEDC STREAM DCTE STREAM KMeansTM STREAM SomTM STREAM CBC STREAM CTMneg STREAM TNTM STREAM CTM OCTIS ETM OCTIS HDP OCTIS LDA OCTIS LSI OCTIS NMF OCTIS NeuralLDA OCTIS ProdLDA OCTIS The SomTM is described in Honkela (1997). WordCluTM follows the word clustering approach introduced by Sia et al. (2020). CEDC is described in Thielmann et al. (2024b). The KMeansTM is similar to Grootendorst (2022) and often used as a fast-compute benchmark model. DCTE is a semisupervised few-shot model introduced in Thielmann et al. (2024c). TNTM is introduced in Reuter et al. (2024). CTMneg is based on CTM (Bianchi et al., 2021) and introduced by Adhya et al. (2022). CBC is the only model of the STREAM models not based on document embeddings and focuses on coherence scores between documents, described in Thielmann et al. (2023) with adaptations from Luber et al. (2021). The neural topic models implemented in OCTIS and thus also available in STREAM are the CTM introduced by Bianchi et al. (2021), the ETM (Dieng et al., 2020), NeuralLDA and ProdLDA introduced by Srivastava and Sutton (2017). Further models are LDA (Blei et al., 2003), HDP (Teh et al., 2004), LSI (Landauer et al., 1998) and classical NMF (Lee and Seung, 2000). A.2 Available Datasets The available datasets are described in the paper in section 2.1. Since most of STREAMs models are centered around Document embeddings (Reimers and Gurevych, 2019), STREAM comes along with 442 Table 2: Comparison between STREAM and the most well-known topic modeling libraries Features STREAM OCTIS Gensim STTM PyCARET MALLET TOMODAPI Pre-processing tools ✓ ✓ ✓ ✓ ✓ ✓ Pre-processed datasets ✓ ✓ ✓ ✓ ✓ ✓ ✓ Pre-embedded datasets ✓ Classical topic models ✓ ✓ ✓ ✓ ✓ ✓ ✓ Neural topic models ✓ ✓ ✓ Clustering topic models ✓ Coherence metrics ✓ ✓ ✓ ✓ ✓ ✓ Diversity metrics ✓ ✓ Significance metrics ✓ ✓ Classification metrics ✓ ✓ ✓ ✓ ✓ ✓ Intruder word metrics ✓ Downstream Model ✓ Visualization ✓ Hyper-parameters tuning BO BO MLE grid-search MLE a set of pre-embedded datasets. Once a user fits a model that leverages document embeddings, the embeddings are saved and automatically loaded the next time the user wants to fit any model with the same set of embeddings, thus enabling very fast model fitting and comparison. Table 4: Dataset Overview Name # Docs # Words # Features Reuters 8,929 24,803 Reddit_GME 21,549 21,309 6 Poliblogs 13,246 70,726 4 Spotify_most_popular 4,538 53,181 14 Spotify_least_popular 4,374 111,738 14 Spotify 4,185 80,619 14 Stocktwits_GME 11,114 19,383 3 Stocktwits_GME_large 136,138 80,435 3 A.3 Available Metrics In addition to the metrics from OCTIS, STREAM offers the following available topic evaluation metrics: ISIM, INT and ISH are all intruder based metrics proposed by Thielmann et al. (2024b). Embedding Coherence is similarly implemented as by Terragni et al. (2021b) without the normalization of the embeddings. NPMI describes classical NPMI scores proposed by Lau et al. (2014) and Embedding Coherence is similar to the Coherence metrics from Terragni et al. (2021b). Expressivity and Embedding Topic Diversity are both diversity metrics calculated in the embedding space. Future developments could include e.g. metrics proposed by Rahimi et al. (2024) or Weisser et al. (2023). • Intruder Metrics – ISIM: Average cosine similarity of top words of a topic to an intruder word. – INT: For a given topic and a given intruder word, Intruder Accuracy is the fraction of top words to which the intruder has the least similar embedding among all top words. – ISH: Calculates the shift in the centroid of a topic when an intruder word is replaced. • Diversity Metrics – Expressivity: Cosine Distance of topics to meaningless (stopword) embedding centroid. – Embedding Topic Diversity: Topic diversity in the embedding space. • Coherence Metrics – Embedding Coherence: Cosine similarity between the centroid of the embeddings of the stopwords and the centroid of the topic. – NPMI: Classical NPMi coherence computed on the source corpus. A.4 Downstream task As a demonstration of the downstream task, we have simulated some simple data. We have created three data generating topics, consisting of fruits, vehicles and animals. The documents are generated by having a random draw with 60% out of one specified topic and the remaining 40% out random topics. Additionally, we have generated two continuous variables and made the target variable a function of two effects of the continuous variables as well as an effect of the number of words 443 Figure 4: The animal topic as detected by the CDEC model and visualized via a wordcloud generating function available in STREAM. Figure 5: The effect of the animal topic on the target variable. Put simple: The "more" a document is about animals, the larger y gets. taken from each generated topic. We subsequently fit a CDEC (Thielmann et al., 2024b) model and extracted the topics. The animal topic is depicted in Figure 4. Figure 6: The numerical effect of feature x1 on y visualized with a function available in STREAM. The visualizations closely follow the ones created by Agarwal et al. (2021). The downstream model is then simply specified as defined in equation 2. The continuous feature effects are accurately detected and visualized in figures 6 and 7. The topic effects, one continuous 5 and one more complicated 8 are also accurately depicted. It is clearly recognizable, that the animal topic from figure 4 has a continuous positive effect on the target variable whereas the effect of the second topic is more refined and roughly follows a squared function. Figure 7: The numerical effect of feature x2 on y visualized with a function available in STREAM. The visualizations closely follow the ones created by Agarwal et al. (2021). Figure 8: A more complicated topical effect of topic 1, vehicles on the target variable. 444
2024_short
41
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 445–458 August 11-16, 2024 ©2024 Association for Computational Linguistics DocFinQA: A Long-Context Financial Reasoning Dataset Varshini Reddy, Rik Koncel-Kedziorski, Viet Dac Lai Michael Krumdick, Charles Lovering, Chris Tanner Kensho Technologies [email protected] Abstract For large language models (LLMs) to be effective in the financial domain – where each decision can have a significant impact – it is necessary to investigate realistic tasks and data. Financial professionals often interact with documents spanning hundreds of pages, but most financial research datasets only deal with short excerpts from these documents. To address this, we introduce a long-document financial QA task. We augment 7,437 questions from the existing FinQA dataset with full-document context, extending the average context length from under 700 words in FinQA to 123k words in DocFinQA. We conduct extensive experiments over retrieval-based QA pipelines and long-context language models. Based on our experiments, DocFinQA proves a significant challenge for even state-of-the-art systems. We also provide a case study on a subset of the longest documents in DocFinQA and find that models particularly struggle with these documents. Addressing these challenges may have a wide-reaching impact across applications where specificity and long-range contexts are critical, like gene sequences and legal document contract analysis. DocFinQA dataset is publicly accessible1. 1 Introduction The frequent need to reason over large volumes of textual and tabular data makes financial analysis particularly challenging for LLMs (Azzi et al., 2019). Existing work on automating financial numerical reasoning focuses on unrealistically specific document snippets (Chen et al., 2021; Zhu et al., 2021). Datasets are often limited to preselected document sections, failing to reflect the broader and more realistic scenarios faced by analysts (Masson and Montariol, 2020). Financial professionals usually sift through hundreds of pages per document, requiring a deep understanding of 1https://huggingface.co/datasets/kensho/DocFinQA Table of Contents Issuer Purchases of Equity Securities During the three months ended December 31, 2012, we repurchased 619,314 shares of our common stock including commissions and fees, pursuant to our publicly announced stock repurchase program, as follows: Period Total Number of Shares Purchased(1) Average Price Paid per Share(2) October 2012 27,524 $ 72.62 November 2012 489,390 $ 74.22 December 2012 102,400 $ 74.83 Total Fourth Quarter 619,314 $ 74.25 (1) Repurchases made pursuant to the $1.5 billion stock repurchase program approved by our Board of Direc program, our management is authorized to purchase shares from time to time through open market purch prices as permitted by securities laws and other legal requirements, and subject to market conditions and purchases pursuant to trading plans under Rule 10b5-1 of the Exchange Act, which allows us to repurcha prevented from doing so under insider trading laws or because of self-imposed trading blackout periods. (2) Average price per share is calculated using the aggregate price, excluding commissions and fees. We continued to repurchase shares of our common stock pursuant to our 2011 Buyback subsequent to De January 21, 2013, we repurchased an additional 15,790 shares of our common stock for an aggregate of $1.2 mi 2011 Buyback. As a result, as of January 21, 2013, we had repurchased a total of approximately 4.3 million sha aggregate of $245.2 million, including commissions and fees. We expect to continue to manage the pacing of th response to general market conditions and other relevant factors. 26 30 Table of Contents Issuer Purchases of Equity Securities During the three months ended December 31, 2012, we repurchased 6 including commissions and fees, pursuant to our publicly announced stock re Period Total Number of Shares Purchased(1) October 2012 27,524 November 2012 489,390 December 2012 102,400 Total Fourth Quarter 619,314 (1) Repurchases made pursuant to the $1.5 billion stock repurchase progr program, our management is authorized to purchase shares from time prices as permitted by securities laws and other legal requirements, an purchases pursuant to trading plans under Rule 10b5-1 of the Exchang prevented from doing so under insider trading laws or because of self (2) Average price per share is calculated using the aggregate price, exclud We continued to repurchase shares of our common stock pursuant to o January 21, 2013, we repurchased an additional 15,790 shares of our commo 2011 Buyback. As a result, as of January 21, 2013, we had repurchased a tot aggregate of $245.2 million, including commissions and fees. We expect to c response to general market conditions and other relevant factors. Table of Contents Issuer Purchases of Equity Securities During the three months ended December 31, 2012, we repurch including commissions and fees, pursuant to our publicly announced s Period Total of S Purch October 2012 November 2012 4 December 2012 1 Total Fourth Quarter 6 (1) Repurchases made pursuant to the $1.5 billion stock repurchase program, our management is authorized to purchase shares from prices as permitted by securities laws and other legal requireme purchases pursuant to trading plans under Rule 10b5-1 of the Ex prevented from doing so under insider trading laws or because o (2) Average price per share is calculated using the aggregate price, We continued to repurchase shares of our common stock pursua January 21, 2013, we repurchased an additional 15,790 shares of our c 2011 Buyback. As a result, as of January 21, 2013, we had repurchase aggregate of $245.2 million, including commissions and fees. We expe response to general market conditions and other relevant factors. Table of Contents Issuer Purchases of Equity Securities During the three months ended December 31, 2012, we repurchased 619,314 shares of our common stock for an aggregate of approximately $46.0 million, including commissions and fees, pursuant to our publicly announced stock repurchase program, as follows: Period Total Number of Shares Purchased(1) Average Price Paid per Share(2) Total Number of Shares Purchased as Part of Publicly Announced Plans or Programs Approximate Dollar Value of Shares that May Yet be Purchased Under the Plans or Programs (in millions) October 2012 27,524 $ 72.62 27,524 $ 1,300.1 November 2012 489,390 $ 74.22 489,390 $ 1,263.7 December 2012 102,400 $ 74.83 102,400 $ 1,256.1 Total Fourth Quarter 619,314 $ 74.25 619,314 $ 1,256.1 (1) Repurchases made pursuant to the $1.5 billion stock repurchase program approved by our Board of Directors in March 2011 (the “2011 Buyback”). Under this program, our management is authorized to purchase shares from time to time through open market purchases or privately negotiated transactions at prevailing prices as permitted by securities laws and other legal requirements, and subject to market conditions and other factors. To facilitate repurchases, we make purchases pursuant to trading plans under Rule 10b5-1 of the Exchange Act, which allows us to repurchase shares during periods when we otherwise might be prevented from doing so under insider trading laws or because of self-imposed trading blackout periods. This program may be discontinued at any time. (2) Average price per share is calculated using the aggregate price, excluding commissions and fees. We continued to repurchase shares of our common stock pursuant to our 2011 Buyback subsequent to December 31, 2012. Between January 1, 2013 and January 21, 2013, we repurchased an additional 15,790 shares of our common stock for an aggregate of $1.2 million, including commissions and fees, pursuant to the 2011 Buyback. As a result, as of January 21, 2013, we had repurchased a total of approximately 4.3 million shares of our common stock under the 2011 Buyback for an aggregate of $245.2 million, including commissions and fees. We expect to continue to manage the pacing of the remaining $1.3 billion under the 2011 Buyback in response to general market conditions and other relevant factors. 26 30 dec_2012 = 102_400 q_4 = 619_314 ans = dec_2012 / q_4 print(ans) Figure 1: DocFinQA extends FinQA to documents often over 150 pages long (100K+ tokens), so it is difficult to find the pertinent information. The question for the example above is: “For the quarter December 31, 2012 what was the percent of the total number of shares purchased in December?” The correct answer is 16.5%. both content and structure to navigate and extract pertinent information effectively. Current long-document QA datasets such as NarrativeQA Koˇciský et al. (2018) do not test the quantitative reasoning skills needed in the financial domain. In this work, we introduce DocFinQA, a longdocument financial question-answering task. We extend the FinQA dataset of expert annotated questions and answers (Chen et al., 2021) with full Securities and Exchange Commission (SEC) reports. This results in a significantly longer context in the DocFinQA dataset – by a factor of 175 – than the FinQA dataset. Additionally, we manually verified and annotated questions of the test set. The resulting long-document QA task offers a more realistic evaluation of a model’s reasoning capabilities over financial documents. In line with recent work on program synthesis for financial QA (Koncel-Kedziorski et al., 2023), the questions in DocFinQA are appended with Python programs to generate the answers, allowing for training and evaluating program synthesis models for use in 445 realistic financial workflows. Using this setup, we evaluate retrieval-based and long-context LLM systems. We study a typical retrieval pipeline that chunks and encodes the document, searching for the best chunks given a question, and passing the question and top-k chunks to a generative QA model (Hsu et al., 2021). We also evaluate retrieval-free approaches using long-context LLMs (Weston and Sukhbaatar, 2023). Our results show that the successful employment of LLMs in financial settings requires further study of the specific nuances of the financial domain, such as context disambiguation. Our dataset represents a step towards better capturing these nuances. 2 Related Work Prior studies in financial question answering focus on non-numerical reasoning (Day and Lee, 2016; Jørgensen et al., 2023; Maia et al., 2018). Short-context grounded numerical reasoning tasks were introduced with datasets such as FinQA (Chen et al., 2021) and TAT-QA (Zhu et al., 2021). Recently, understanding long documents has attracted more attention for tasks involving events (Yang et al., 2018), table of contents (Bentabet et al., 2020), and causal relations (Mariko et al., 2022). However, to the best of our knowledge, this is the first attempt to address financial numerical QA grounded in long documents with upwards of hundreds of pages of context for each question. Long-document QA has been studied in NLP with the introduction of datasets such as SearchQA (Dunn et al., 2017), NarrativeQA (Koˇciský et al., 2018), QuALITY (Pang et al., 2022), and PDFTriage (Saad-Falcon et al., 2023). Due to the limited context size of LLMs, retrieval-based models are commonly used to filter irrelevant text (Izacard et al., 2022; Lewis et al., 2020). Recently, advances in attention mechanisms (Beltagy et al., 2020; Dao et al., 2022) and positional embeddings (Press et al., 2021; Su et al., 2023) allow for end-toend grounded QA with context windows of more than 100k tokens. However, these methods suffer from loss of important context (Zhang et al., 2023) and often fail to make full use of longer inputs (Liu et al., 2023). Our work studies the intersection of numerical reasoning and long-document processing, and our results demonstrate that there is still ample room for improvement in this domain. 3 DocFinQA Dataset Dataset Representation: Each question in FinQA is a triplet (cgolden, q, a) composed of a golden context cgolden, a question q, and an answer a written in human language. An example of FinQA is shown in Table 5 (See Appendix A). We extend the dataset in two ways: (1) context cgolden is extended to the full document context D, and (2) we added a Python program p that produces the answer a. Each final sample in DocFinQA is a quartet (D, q, p, a). An example of DocFinQA is shown in Table 6 (See Appendix A). Filings Collection: For each question of the FinQA dataset, we identify the corresponding SEC filing from which it was created. We retrieve the filing in HTML/XML format from SEC’s EDGAR service and parse the text and table into clean markdown format (Wang et al., 2023). The collection and parsing processes are presented in more detail in Appendix A and Appendix B, respectively. Figure 2 shows the distribution of document lengths in DocFinQA. Figure 2: Histogram of document length (#words) in DocFinQA dataset with dash line representing the average length of the documents. The purple line depicts the proportion of documents where the question context is within the current number of words. Chunking and Alignment: To study retrievalbased QA systems, we split each document D into a set of chunks C = {c1, · · · , cn}. Each chunk consists of 2,750 characters (∼509 tokens) with a 20% overlap to avoid missing context at the edges. To compute the performance, we identify the best context chunk, c⋆, from the chunk set C associated with each document D that includes the information to answer question 446 Dataset #Docs #QAs #Words Multi-page Numeric Tabular NarrativeQA 1,572 46,765 63,000 ✓ QuALITY 381 6,737 5,159 ✓ PDFTriage 82 908 12,000 ✓ ✓ ✓ TAT-QA 2,757 16,552 260 ✓ ✓ FinQA 2,789 8,281 687 ✓ ✓ DocFinQA 801 7,437 123,453 ✓ ✓ ✓ Table 1: Comparison of DocFinQA and existing Finance QA and Long Document QA dataset. DocFinQA includes multi-page documents with both numeric and tabular data. q. Since FinQA already provides cgolden, we compute a pair-wise score (ci, cgolden), for all chunks, ci ∈C, including the golden chunk. We find that four-gram-based similarity score offers the sharpest matching signal among tri-gram, four-gram, and fuzzy matching. The chunk with the highest score is selected as the target context chunk for retrieval. We verify that this process results in good c⋆chunks through manual inspection and by substituting c⋆for cgolden in a few-shot QA evaluation with GPT-3.5. Code Generation: The FinQA dataset provides solutions in a “program” syntax that, when executed, yields the answer (e.g., in Figure 1 the solution is divide(102400, 619314). However, this derivation does not provide meaningful context of what is being calculated. In our running example, 102400 is not semantically grounded to the document. Koncel-Kedziorski et al. (2023) augments FinQA with readable Python code (including named variables like, dec_shares = 102_400) that can be executed to derive the answer, providing a layer of interpretability. Thus, we use the code-enhanced version of DocFinQA (See Appendix C). Statistics: The resultant DocFinQA dataset comprises of 5,735 training, 780 development, and 922 test samples, derived from 801 unique SEC filings. Table 1 shows the statistics and characteristics of DocFinQA in comparison with other finance numerical reasoning and long-document QA datasets. Impact of Data Selection - DocFinQA vs FinQA: Due to the limited availability of complete SEC filings (refer Appendix A) and imperfections in the code generation process, DocFinQA encompasses 7,437 out of 8,281 of FinQA questions. This process may filter out a collection of question types that the LLM did not answer due to its limited capability. We investigates the impact of this process by comparing the distribution of the question types in FinQA and DocFinQA. To do this, we show the distribution of questions grouped by their first 2 non-stop words in Figure 6 (Appendix E). The most important observation is that, overall, the distribution of the question set in DocFinQA and FinQA are very similar. No major groups are being filtered out by our data selection process. The dominant questions (above 1% in FinQA) remain dominant and no major impact on the percentages of those questions is observed. The mid-group (above 0.2% in FinQA) question sets see a mixed effect. A large portion of these questions are increased in percentage while some experience significant loss (e.g., “what percentual” and “what decrease”). Lastly, the long tail group ( under 0.2% in FinQA) either remains the same (e.g., “percent total” and “what greatest”) or is completely wiped out due to a small population (e.g., “was average”, and “what return”). 4 Retrieval-based QA Evaluation Retrieval Task: We test three models for context retrieval: ColBERT (ColB)(Khattab and Zaharia, 2020), Sentence-BERT (SentB) (Reimers and Gurevych, 2019), and OpenAI’s Ada (Greene et al., 2022). Further, we finetune the ColBERT model (FT ColB) on the training set of DocFinQA to evaluate an in-domain model. More details on the fine-tuning process are given in Appendix G. We also test a matching-based model, BM25 (Robertson et al., 1995), but observe poor performance (See Appendix F for details). To retrieve context for a question q over chunk set C, we encode both q and C with the encoding models mentioned above. This results in an embedding, vq, for the question and chunk embeddings VC = {vci|ci ∈C}. We compute the 447 Model Size Upper Bound Original ColBERT Finetuned ColBERT Sentence-BERT OpenAI ADA * * Top 1 Top 3 Top 3 Top 1 Top 3 Top 3 Top 1 Top 3 Top 3 Top 1 Top 3 Top 3 1 shot 3 shot 3 shot 1 shot 3 shot 3 shot 1 shot 3 shot 3 shot 1 shot 3 shot 3 shot 1 shot 3 shot Falcon 7B 2.0 2.0 1.9 0.0 0.0 1.9 1.3 0.0 1.2 0.1 1.3 2.0 0.1 0.0 MPT 7B 6.8 6.6 4.5 0.8 0.2 4.9 1.0 1.2 3.9 0.6 0.8 4.3 1.6 2.0 MPT 30B 27.1 31.0 15.3 2.2 1.7 16.8 3.2 3.8 1.1 3.8 2.7 15.7 10.4 5.1 Llama 2 7B 17.3 22.0 12.8 5.8 8.0 14.0 6.0 10.3 8.9 2.7 6.5 11.2 4.0 11.0 Llama 2 + SFT 7B 67.1 69.7 30.0 32.6 31.3 32.2 35.3 33.9 19.9 24.1 24.3 28.7 29.4 27.7 Llama 2 13B 30.0 33.4 14.4 10.4 14.1 19.1 11.9 14.5 14.9 7.9 10.2 18.3 9.8 13.7 CodeLlama 7B 26.9 34.0 12.6 11.4 16.1 15.7 12.3 16.8 11.9 8.9 13.2 15.4 14.2 17.5 CodeLlama 13B 32.1 39.0 19.5 14.8 21.5 21.2 15.7 22.5 13.2 8.5 16.0 18.3 14.4 20.9 Mistral 7B 39.7 48.8 23.0 18.8 21.3 25.9 16.8 25.2 19.0 13.6 17.6 20.9 18.8 22.1 GPT 3.5 67.3 67.5 36.0 39.0 38.8 38.8 40.7 40.2 24.8 30.1 36.3 35.0 36.5 36.9 Table 2: Performance of the models on DocFinQA in one-shot and few-shot in-context learning settings for the top 1 and top 3 retrieved chunk contexts on the development set. For each model, the best performance among all configurations is in bold. For each model, the best performance among different configurations for the same retrieval model is underlined. Top 1 and Top 3 indicate the number of retrieved chunks used as context for a configuration. *The single original context chunk from the FinQA test set is used to estimate the upper bound. Figure 3: Hit rate of retrieval models. cosine similarity between vq and each vector in VC to retrieve the top-k most similar chunks. We evaluate these models using HR@k on the test set of DocFinQA using the target c⋆. Results are shown in Figure 3. The FT ColB yields the highest HR, followed by ColB. FT ColB yields an average improvement of 91% HR over SentB and obtains a 0.35 (HR@1) and 0.55 (HR@3). 4.1 Question Answering Task We formulate the QA task as a few-shot in-context learning task (Brown et al., 2020). For each incontext example, we only provide the relevant chunk and the answer. For the actual query, we provide k chunks. More details of the few-shot settings are provided in Appendix H. We evaluate Falcon (Penedo et al., 2023), MPT (MosaicML, 2023), LLaMa 2 and CodeLlaMa (Touvron et al., 2023), Mistral (Jiang et al., 2023), GPT3.5 (Brown et al., 2020; OpenAI, 2023) models. We weren’t able to evaluate proprietary models such as GPT3 (Brown et al., 2020), BloombergGPT (Wu et al., 2023) due to their inaccessibility. We skipped models that were not finetuned for code generation such as PIXIU (Xie et al., 2023) and FinGPT (Yang et al., 2023) due to their poor performance. We also skipped models trained for other languages such as BBT-Fin (Lu et al., 2023) and XuanYuan 2.0 (Zhang and Yang, 2023). Performance on the development set: Table 2 reports the full performance of the development set with four retrieval models and three few-shot settings. This results in a total of twelve unique configurations. For both fine-tuned and pre-trained models, we use greedy decoding whenever applicable. One trend noted was that all generic LLMs showed higher accuracy with shorter context and more few-shot examples i.e. top chunk with 3 shots. While code-based LLMs such as Starcoder and CodeLLama showed higher accuracy with longer context i.e. top 3 chunks with 3 shots. This trend is also depicted in Figure 10. Performance on the test set: Table 3 reports the performance of the same 10 state-of-the-art models on the test set of DocFinQA. The few-shot setting and retrieval model configuration for each LLM are treated as hyperparameters and are picked based on the performance of the development set. We observe that larger models outperform smaller models (e.g., MPT 30B vs MPT 7B). Models trained on code yield higher accuracy than non448 Model/Size ColB SentB ADA FT ColB Falcon/7B 2.3 0.3 1.2 1.8 MPT/7B 4.6 2.7 3.8 4.8 MPT/30B 17.3 11.1 12.1 18.1 Llama 2/7B 13.5 8.9 11.1 13.5 Llama 2/13B 18.7 12.7 14.9 19.1 CodeLlama/7B 15.6 12.2 15.2 16.8 CodeLlama/13B 19.1 13.8 18.8 21.0 Mistral/7B 23.2 14.9 21.5 25.0 Llama 2/7B+SFT 32.9 24.8 34.3 36.1 GPT-3.5/41.6 33.8 36.4 42.6 Table 3: Performance on DocFinQA test set. For each row, the best performance among all retrieval models is in bold. The fewshot setting is selected based on the best performance on the development set (See Table 2). code models (e.g., CodeLlama vs Llama). Models with additional supervised finetuning (e.g., LLama 2/7B+SFT) and instruction tuning (e.g., GPT-3.5) are among the best examined. Notably, Mistral 7B outperforms several larger models, although it lags behind Llama 2/7B+SFT and GPT-3.5. The FT ColB model is the best retrieval model in all but one setting. It yields a marginal but consistent improvement over the ColB, and a large improvement over SentB and Ada. 5 Case Study w/ 100K+ Token Documents Recent LLMs can handle context lengths of 128K tokens, but more than 40% of the documents in DocFinQA remain unanswerable even at this content length (see Figure 2). Here, we evaluate performance on a test subsample of 200 randomly selected documents, each of which has 100K or more tokens due to the monetary and temporal costs of human evaluation and GPT4. We explore two retrieval-free options - System 2 Attention (S2A) and Iterative method. S2A extracts relevant information from each 100K-token chunk of a document before answering the question using the combined extracted information as context (Weston and Sukhbaatar, 2023). The Iterative method produces the output program iteratively as the LLM processes each 100k section of the document. A temporary answer program (initially “None”) is input with each section to the LLM. We also report the performance of the best retrievalbased model (Retrieval) based on the experiment in Section 4. We conducted human evaluations on these 200 Model/Size + Method w/ Retrieval Test Subsample Human No 41.0 Mistral/7B + Iterative No 11.5 Mistral/7B + S2A No 15.5 Mistral/7B + Retrieval Yes 20.0 GPT-4 + Iterative No 20.0 GPT-4 + S2A No 23.0 GPT-4 + Retrieval Yes 47.5 Table 4: Retrieval-free performance on a case-study of 100K+ token documents. questions highlighting the challenging nature of this dataset with experienced but non-expert human participants (See Appendix J for details). Nonexpert human performance on DocFinQA is lower than human performance reported in FinQA (Chen et al., 2021) (41% versus 50.7%). This can be attributed to the difficulty of finding the golden page, compared to the golden page being given in FinQA. Notably, the expert performance reported in FinQA is 91.2%. Nonetheless, the non-expert human performance is double that of retrieval-free GPT-4 on these long documents, and roughly triple that of retrieval-free Mistral models. The performance of the iterative method was worse than S2A for both GPT-4 and Mistral with a reduced accuracy of 3% and 4%, respectively. With retrieval, both Mistral and GPT-4 outperform their retrieval-free counterparts, with the assisted GPT-4 now on par with the human cohort. Together, these results highlight that DocFinQA is a difficult test for long-document QA and that there is still room for significant improvement in this domain. For instance, further exploration into methods that combine information across multiple calls to a document-processing LLM is warranted. 6 Conclusion This paper introduces a realistic document-level question-answering dataset over financial reports. Each question includes a full financial report (averaging 123K words), a far greater challenge than previous work that hones in on pre-specified content. Our findings reveal that this more realistic setting presents a significantly more difficult challenge, thereby opening new avenues for research in quantitative financial question answering. 449 Acknowledgement We express our gratitude to our colleagues at Kensho Technologies and S&P Global for their invaluable contributions to data annotation, which greatly enhanced the completion of this project. We extend our appreciation to the anonymous reviewers for their supportive input. Limitation This work introduced an extension of the existing FinQA dataset. Due to limited human resources, we only validated the test set while the training and the development set were not fully validated. As a result, we can not make any claim of bias and question quality in the not-yet-validated data points offered in this paper. Additionally, as discussed in section 3, the code provided in this work was generated by WizardCoder LLMs. We assume that the code is correct if it produces correct or approximately close to the golden answer. This method may generate both false positive codes (the code that generates the correct answer with incorrect rationales) and false negative codes (the correct code that fails the approximation test). Broader Impact and Ethical Considerations We do not foresee any considerable risks associated with our work given that it is an extension of an open-source dataset and uses publicly available documents. To uphold transparency, the paper provides detailed documentation of the dataset creation process, including the sources of data and annotation details. Our dataset serves as a resource to underscore the need for longer context-oriented benchmarks both within and outside the financial domain and does not intend to criticize one or more LLMs. The annotation in this work is done automatically or in-house, so no crowd-sourced or contract annotators were hired throughout the process. The human evaluation in this study was done by fulltime paid coworkers known to the authors. References Abderrahim Ait Azzi, Houda Bouamor, and Sira Ferradans. 2019. The FinSBD-2019 shared task: Sentence boundary detection in PDF noisy text in the financial domain. In Proceedings of the First Workshop on Financial Technology and Natural Language Processing, pages 74–80, Macao, China. Iz Beltagy, Matthew E Peters, and Arman Cohan. 2020. LongFormer: The long-document transformer. arXiv preprint arXiv:2004.05150. Najah-Imane Bentabet, Rémi Juge, Ismail El Maarouf, Virginie Mouilleron, Dialekti Valsamou-Stanislawski, and Mahmoud El-Haj. 2020. The financial document structure extraction shared task (FinToc 2020). In Proceedings of the 1st Joint Workshop on Financial Narrative Processing and MultiLing Financial Summarisation, pages 13–22, Barcelona, Spain (Online). COLING. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901. Zhiyu Chen, Wenhu Chen, Charese Smiley, Sameena Shah, Iana Borova, Dylan Langdon, Reema Moussa, Matt Beane, Ting-Hao Huang, Bryan Routledge, and William Yang Wang. 2021. FinQA: A dataset of numerical reasoning over financial data. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3697–3711, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Tri Dao, Dan Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. 2022. FlashAttention: Fast and memory-efficient exact attention with io-awareness. Advances in Neural Information Processing Systems, 35:16344–16359. Min-Yuh Day and Chia-Chou Lee. 2016. Deep learning for financial sentiment analysis on finance news providers. In 2016 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), pages 1127–1134. IEEE. Matthew Dunn, Levent Sagun, Mike Higgins, V Ugur Guney, Volkan Cirik, and Kyunghyun Cho. 2017. SearchQA: A new Q&A dataset augmented with context from a search engine. arXiv preprint arXiv:1704.05179. Ryan Greene, Ted Sanders, Lilian Weng, and Arvind Neelakantan. 2022. New and improved embedding model. Chao-Chun Hsu, Eric Lind, Luca Soldaini, and Alessandro Moschitti. 2021. Answer generation for retrievalbased question answering systems. In Findings of the Association for Computational Linguistics: ACLIJCNLP 2021, pages 4276–4282, Online. Association for Computational Linguistics. Gautier Izacard, Patrick Lewis, Maria Lomeli, Lucas Hosseini, Fabio Petroni, Timo Schick, Jane DwivediYu, Armand Joulin, Sebastian Riedel, and Edouard Grave. 2022. Atlas: Few-shot learning with retrieval augmented language models. arXiv preprint arXiv:2208.03299. 450 Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al. 2023. Mistral 7b. arXiv preprint arXiv:2310.06825. Rasmus Jørgensen, Oliver Brandt, Mareike Hartmann, Xiang Dai, Christian Igel, and Desmond Elliott. 2023. MultiFin: A dataset for multilingual financial NLP. In Findings of the Association for Computational Linguistics: EACL 2023, pages 894–909, Dubrovnik, Croatia. Association for Computational Linguistics. Omar Khattab and Matei Zaharia. 2020. Colbert: Efficient and effective passage search via contextualized late interaction over bert. In Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval, pages 39– 48. Tomáš Koˇciský, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, Gábor Melis, and Edward Grefenstette. 2018. The NarrativeQA reading comprehension challenge. Transactions of the Association for Computational Linguistics, 6:317–328. Rik Koncel-Kedziorski, Michael Krumdick, Viet Lai, Varshini Reddy, Charles Lovering, and Chris Tanner. 2023. Bizbench: A quantitative reasoning benchmark for business and finance. arXiv preprint arXiv:2311.06602. Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, et al. 2020. Retrieval-augmented generation for knowledge-intensive NLP tasks. Advances in Neural Information Processing Systems, 33:9459– 9474. Nelson F Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, and Percy Liang. 2023. Lost in the middle: How language models use long contexts. Transactions of the Association for Computational Linguistics. Dakuan Lu, Jiaqing Liang, Yipei Xu, Qianyu He, Yipeng Geng, Mengkun Han, Yingsi Xin, Hengkui Wu, and Yanghua Xiao. 2023. Bbt-fin: Comprehensive construction of chinese financial domain pre-trained language model, corpus and benchmark. arXiv preprint arXiv:2302.09432. Macedo Maia, Siegfried Handschuh, André Freitas, Brian Davis, Ross McDermott, Manel Zarrouk, and Alexandra Balahur. 2018. WWW’18 open challenge: financial opinion mining and question answering. In Companion proceedings of the the web conference 2018, pages 1941–1942. Dominique Mariko, Hanna Abi-Akl, Kim Trottier, and Mahmoud El-Haj. 2022. The financial causality extraction shared task (FinCausal 2022). In Proceedings of the 4th Financial Narrative Processing Workshop @LREC2022, pages 105–107, Marseille, France. European Language Resources Association. Corentin Masson and Syrielle Montariol. 2020. Detecting omissions of risk factors in company annual reports. In Proceedings of the Second Workshop on Financial Technology and Natural Language Processing, pages 15–21, Kyoto, Japan. -. MosaicML. 2023. MPT-30B: Raising the bar for opensource foundation models. OpenAI. 2023. GPT-4 technical report. Richard Yuanzhe Pang, Alicia Parrish, Nitish Joshi, Nikita Nangia, Jason Phang, Angelica Chen, Vishakh Padmakumar, Johnny Ma, Jana Thompson, He He, and Samuel Bowman. 2022. QuALITY: Question answering with long input texts, yes! In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5336–5358, Seattle, United States. Association for Computational Linguistics. Guilherme Penedo, Quentin Malartic, Daniel Hesslow, Ruxandra Cojocaru, Alessandro Cappelli, Hamza Alobeidli, Baptiste Pannier, Ebtesam Almazrouei, and Julien Launay. 2023. The refinedweb dataset for ccon llm: outperforming curated corpora with web data, and web data only. arXiv preprint arXiv:2306.01116. Ofir Press, Noah A Smith, and Mike Lewis. 2021. Train short, test long: Attention with linear biases enables input length extrapolation. In Proceedings of the 2022 International Conference on Learning Representations (ICLR). Nils Reimers and Iryna Gurevych. 2019. SentenceBERT: Sentence embeddings using Siamese BERTnetworks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992, Hong Kong, China. Association for Computational Linguistics. Stephen E Robertson, Steve Walker, Susan Jones, Micheline M Hancock-Beaulieu, Mike Gatford, et al. 1995. Okapi at TREC-3. Nist Special Publication Sp, 109:109. Jon Saad-Falcon, Joe Barrow, Alexa Siu, Ani Nenkova, Ryan A Rossi, and Franck Dernoncourt. 2023. Pdftriage: Question answering over long, structured documents. arXiv preprint arXiv:2309.08872. Jianlin Su, Murtadha Ahmed, Yu Lu, Shengfeng Pan, Wen Bo, and Yunfeng Liu. 2023. Roformer: Enhanced transformer with rotary position embedding. Neurocomputing, page 127063. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023. Llama 2: Open foundation and fine-tuned chat models. ArXiv. 451 Jilin Wang, Michael Krumdick, Baojia Tong, Hamima Halim, Maxim Sokolov, Vadym Barda, Delphine Vendryes, and Chris Tanner. 2023. A graphical approach to document layout analysis. In International Conference on Document Analysis and Recognition, pages 53–69. Springer. Jason Weston and Sainbayar Sukhbaatar. 2023. System 2 attention (is something you might need too). arXiv preprint arXiv:2311.11829. Shijie Wu, Ozan Irsoy, Steven Lu, Vadim Dabravolski, Mark Dredze, Sebastian Gehrmann, Prabhanjan Kambadur, David Rosenberg, and Gideon Mann. 2023. Bloomberggpt: A large language model for finance. arXiv preprint arXiv:2303.17564. Qianqian Xie, Weiguang Han, Xiao Zhang, Yanzhao Lai, Min Peng, Alejandro Lopez-Lira, and Jimin Huang. 2023. Pixiu: A comprehensive benchmark, instruction dataset and large language model for finance. In Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track. Hang Yang, Yubo Chen, Kang Liu, Yang Xiao, and Jun Zhao. 2018. DCFEE: A document-level Chinese financial event extraction system based on automatically labeled training data. In Proceedings of ACL 2018, System Demonstrations, pages 50–55, Melbourne, Australia. Association for Computational Linguistics. Hongyang Yang, Xiao-Yang Liu, and Christina Dan Wang. 2023. Fingpt: Open-source financial large language models. arXiv preprint arXiv:2306.06031. Shiyue Zhang, David Wan, and Mohit Bansal. 2023. Extractive is not faithful: An investigation of broad unfaithfulness problems in extractive summarization. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2153–2174, Toronto, Canada. Association for Computational Linguistics. Xuanyu Zhang and Qing Yang. 2023. Xuanyuan 2.0: A large chinese financial chat model with hundreds of billions parameters. In Proceedings of the 32nd ACM International Conference on Information and Knowledge Management, pages 4435–4439. Fengbin Zhu, Wenqiang Lei, Youcheng Huang, Chao Wang, Shuo Zhang, Jiancheng Lv, Fuli Feng, and TatSeng Chua. 2021. TAT-QA: A question answering benchmark on a hybrid of tabular and textual content in finance. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3277–3287, Online. Association for Computational Linguistics. 452 A SEC Filing Collection Each data point in the FinQA dataset consists of a document identification field as shown in Table 5. This field is made up of 3 sections separated by a forward slash. The first is a string called company ticker symbol, the second refers to the year in which this document was filed and the third is the page number in the document where the answer can be found. Downloading the right 10-K filing from the SEC begins with identifying the company code from the company ticker symbol. For example, C/2017/page_328.pdf-1 in FinQA maps to the CITIGROUP INC with company code 831001. This mapping is obtained from the official file released by SEC which can be found here https://www.sec.gov/file/ company-tickers. We automatically generate a URL using the company code obtained. From the SEC website, either filings are downloaded as TXT, HTML, or XBRL using the generated URL. At this stage, approximately 6.5% (or 543) data points corresponding to approximately 9.4% (or 17) documents were dropped, either due to lack of mapping or non-availability of older documents. Further, the conversion of the downloaded files to PDF caused a loss of 117 data points (19 unique documents) due to formatting issues. ID: C/2017/page_328.pdf-1 Context: Performance graph comparison of five-year cumulative total return the following graph and table compare the cumulative total return on Citi 2019s common stock, which is listed on the NYSE under the ticker symbol 201cc 201d and held by 65691 common stockholders of record as of January 31, 2018, with the cumulative total return of the S&P 500 index and the S&P financial index over the five-year period through December 31, 2017. The graph and table assume that $ 100 was invested on December 31, 2012 in Citi 2019s common stock, the S&P 500 index and the S&P financial index, and that all dividends were reinvested . comparison of five-year cumulative total return for the years ended date Citi S&P 500 financials. | DATE | CITI | S&P 500 | S&P FINANCIALS | | :— | :— | :— | :— | | 31-Dec-2012 | 100.0 | 100.0 | 100.0 | | 31-Dec-2013 | 131.8 | 132.4 | 135.6 | | 31-Dec-2014 | 137.0 | 150.5 | 156.2 | | 31-Dec-2015 | 131.4 | 152.6 | 153.9 | | 31-Dec-2016 | 152.3 | 170.8 | 188.9 | | 31-Dec-2017 | 193.5 | 208.1 | 230.9 | Question: What was the percentage cumulative total return for the five year period ended 31-dec-2017 of citi common stock? Answer: 93.5% Table 5: Example from FinQA dataset. The context provided here has been formatted from the original dataset values. 453 Context: Table of Contents UNITED STATES SECURITIES AND EXCHANGE COMMISSION # ANNUAL REPORT PURSUANT TO SECTION 13 OR 15(d) OF THE SECURITIES EXCHANGE ACT of 1934 For the Fiscal Year Ended December 30, 2006 Commission file number 1-4171 # Kellogg Company (Exact Name of Registrant as Specified in its Charter) Delaware (State of Incorporation) (I.R.S. Employer Identification No.) One Kellogg Square (Address of Principal Executive Offices) Securities registered pursuant to Section 12(b) of the Securities Act: Title of each class: Name of each exchange on which registered: · · · The Consolidated Financial Statements and related Notes, together with Management’s Report on Internal Control over Financial Reporting, and the Report thereon of Pricewaterhouse Coopers LLP dated February 23, 2007, are included herein in Part II, Item 8. # (a) 1. Consolidated Financial Statements Consolidated Statement of Earnings for the years ended December 30, 2006, December 31, 2005 and January 1, 2005. Consolidated Statement of Shareholders’ Equity for the years ended December 30, 2006, December 31, 2005 and January 1, 2005. Notes to Consolidated Financial Statements. # (a) 2. Consolidated Financial Statement Schedule All financial statement schedules are omitted because they are not applicable or the required information is shown in the financial statements or the notes thereto. # (a) 3. Exhibits required to be filed by Item 601 of Regulation S-K The information called for by this Item is incorporated herein by reference from the Exhibit Index on pages 61 through 64 of this Report. Pursuant to the requirements of Section 13 or 15(d) of the Securities Exchange Act of 1934, the Registrant has duly caused this Report to be signed on its behalf by the undersigned, thereunto duly authorized, this 23rd day of February, 2007. Pursuant to the requirements of the Securities Exchange Act of 1934, this Report has been signed below by the following persons on behalf of the Registrant and in the capacities and on the dates indicated. Electronic(E), | 10.48 | | IBRF | | :— | :— | :— | | | Commission file number 1-4171.* | | | 21.01 | Domestic and Foreign Subsidiaries of Kellogg. | E | | 23.01 | Consent of Independent Registered Public Accounting Firm. | E | | 24.01 | Powers of Attorney authorizing Gary H. Pilnick to execute our Annual Report on Form 10-K for the fiscal year ended December 30, 2006, on behalf of the Board of Directors, and each of them. | E | | 31.1 | Rule 13a-14(a)/15d-14(a) Certification by A.D. David Mackay. | E | | 31.2 | Rule 13a-14(a)/15d-14(a) Certification by John A. Bryant. | E | | 32.1 | Section 1350 Certification by A.D. David Mackay. | E | | 32.2 | Section 1350 Certification by John A. Bryant. | E | Question: What was the average cash flow from 2004 to 2006? Program: net_cash_2006 = 957.4 net_cash_2005 = 769.1 net_cash_2004 = 950.4 t o t a l _ n e t _ c a s h = net_cash_2006 + net_cash_2005 + net_cash_2004 average_net_cash = t o t a l _ n e t _ c a s h / 3 answer = average_net_cash Answer: 892.3 Table 6: Examples from DocFinQA dataset with text and tables from entire SEC document as context (truncated for legibility), question, associated program and answer. A full report can be founded here https://www. annualreports.com/HostedData/AnnualReportArchive/k/NYSE_K_2006.pdf 454 B Parsing SEC Filings Since each filing contains many tables, maintaining the structure and order during extraction is critical for numerical reasoning. We convert each HTMLformatted filing to PDF format and use a financespecific PDF extractor to parse the filing into markdown format. This process ensures that: (i) our dataset is grounded in the relevant financial documentation and (ii) all the tables in the filings are parsed with high precision into a consistent format without any HTML-tag noise. We explore different methods for parsing SEC filings consisting of HTML and XML markup into text and markdown tables for use in our QA systems. To evaluate parsing strategies, we measure HR@k (Hit Rate @ k) when searching for the gold chunk among all document chunks for a single document using the FinQA question as the search query. Queries and document chunks are encoded with OpenAI’s ADA model. We compare BeautifulSoup, a standard library for manipulating HTML and XML formatted data, and Kensho Extract, a finance-specific text and table extraction model.2 Figure 4 shows the performance of these two methods. Additionally, we note a better downstream performance of finance-specific models with Kensho Extract retrieved-context compared to that of Beautiful Soup. Qualitative analysis of the different parsers reveals that Kensho Extract is better at structuring the tables used in financial documents, resulting in better readability which seems to extend to the encodings. Figure 4: Accuracy for varying HR@ for two context extraction methods. 2Passing the raw HTML/XML to the language model produces near-zero performance. C Code Conversion Figure 5 shows the steps of converting (a) derivation of the result in FinQA into (b) dummy Python code with dummy variable names, and finally transforming it to (c) a meaningful Python program in DocFinQA following the work by KoncelKedziorski et al. (2023). (a) subtract(34.8, 1.2), divide(#0, 34.8) (b) a = 34.8 −1.2 b = a/34.8 c = b ∗100 (c) payments_decrease = 34.8 −1.2 change = payments_decrease/34.8 answer = change ∗100 Figure 5: Example of code conversion. (a) Original FinQA’s derivation. (b) Dummy Python Program (c) Meaningful Python Code in DocFinQA. D Model Details In this work, we used the base models of Falcon, MPT, Llama 2, CodeLlama, and Mistral throughout our work. These models were not trained with supervised finetuning or reinforcement learning human feedback. The GPT-3.5 model employed in this study is gpt-3.5-turbo-0613 while the GPT-4 model used is gpt-4-1106-preview. We also included the Llama 2/7B + SFT that was finetuned on the training set of DocFinQA with golden chunk from FinQA (cgolden). The finetuning process takes 3 epochs with a batch size of 32. We use the context provided by the FinQA dataset as the input due to the limited maximum token length of the model. The maximum token length is set to 2048. The model is finetuned on 8 x Nvidia A100-80GB GPUs. We use AdamW optimizer with learning rate of 2e-6. The training process takes 4 hours to complete. 455 E Distribution of question by question types Figure 6 shows the distribution by question types of the dataset before (FinQA, green) and after (DocFinQA, purple) the automatic data selection. 10 1 100 101 Percentage (log) what interest what as what impact what unrealized what return what effective did jpmorgan what market was average what minimum what balance what anticipated what debt what gross what implied what company what largest percent total what greatest what combined what current what number what profit what maximum what range how cash what mathematical what lowest what tax what rate what year what sum what yearly what cumulative what estimated what decrease what highest what approximate what annual what variation what percentual what expected what operating what amount what value what roi what be how many what increase what difference how much what growth what net what change what portion what ratio what average what total what percent what percentage FinQA DocFinQA Figure 6: Distribution of questions grouped by question types in the original FinQA and DocFinQA. The x-axis (percentage) is presented in log scale to magnify the differences between the two sets. F Performance of retrieval methods Figure 7 shows a pilot study comparing dense retrieval with OpenAI ADA and Sentence BERT versus sparse retrieval (BM 25) on the development set. We can see that the dense retrieval model offers a much higher hit ratio. Figure 7: Accuracy for varying HR@ for three search methods on the development set Figure 8 shows the prompt template with incontext learning that we used. Context: {golden chunk} Question: {question} Python Program: {program} Answer: {answer} Context: {golden chunk} Question: {question} Python Program: {program} Answer: {answer} Context: {golden chunk} Question: {question} Python Program: {program} Answer: {answer} Context: {first chunk} {second chunk} {third chunk} Question: {question} Python Program: Figure 8: Prompt template with Top-3 context and 3shot In-Context Learning. 456 G ColBERT Finetuning We finetune the original ColBERT v1 model on the train set of DocFinQA. For each data point, we perform chunking and alignment to generate one golden chunk and n −1 negative chunks. For training, we generate a list of tuples (qid, pid+, pid-), where qid refers to the question, pid+ refers to the golden chunk and pid- refers to each of the negative chunks in that document. We train the model for a total of 3 epochs and store the checkpoints at the end of each epoch. The hit rate of the Finetuned ColBERT model after each epoch on the development set is shown in Figure 9. We observe that after the first epoch, additional finetuning does not show any performance improvement. The Finetuned ColBERT model referred to in this study thus uses the weights after the first epoch of training. Figure 9: Hit rate of different ColBERT variants on the development set of DocFinQA. H Few-shot Settings Due to the limited context length of the LLMs, the number of few-shot demonstrations and the number of chunks fed into the In-Context Learning must be optimized. We explore 3 settings of the number of few-shot examples and 4 settings of the number of chunks used as context in the query. Figure 10 shows the performance of these settings in retrieval and answered by LLama 13B and CodeLLaMa 13B on the development set. We see that a higher number of few-shot examples (numshot=3) yield consistently better performance compared to a lower one (numshot=1). Figure 10: A QA performance plot on the development set of DocFinQA for the Llama 2 13B and CodeLlama 2 13B models for each of the 12 configurations 457 I Golden Chunk Position Figure 11 shows the distribution of the position golden chunk with the documents. We see that most of the golden chunks appear within the first 250 chunks (approximately 125K tokens which can be fed into the newest generative models). Nonetheless, there are a substantial number of questions that the golden chunk appears beyond this threshold. Figure 11: Histogram of the position of the FinQA context in the original SEC filing that is split into chunks of size 2750. J Human Evaluation Setting We recruited three data professionals with 4-5 years of experience working with financial documents, including but not limited to 10-K filings, to estimate human evaluation. The professionals were provided with the entire document in PDF format, maintaining the SEC’s original format for ease of reading. They were allowed to use the keywordsearch feature of PDF reader applications and a simple calculator for basic arithmetic operations required for this task. On average, the professionals spent 25 minutes per question. 458
2024_short
42
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 459–469 August 11-16, 2024 ©2024 Association for Computational Linguistics MaskLID: Code-Switching Language Identification through Iterative Masking Amir Hossein Kargaran♣, François Yvon♠and Hinrich Schütze♣ ♣LMU Munich & Munich Center for Machine Learning, Munich, Germany ♠Sorbonne Université & CNRS, ISIR, Paris, France [email protected] Abstract We present MaskLID, a simple, yet effective, code-switching (CS) language identification (LID) method. MaskLID does not require any training and is designed to complement current high-performance sentence-level LIDs. Sentence-level LIDs are classifiers trained on monolingual texts to provide single labels, typically using a softmax layer to turn scores into probabilities. However, in cases where a sentence is composed in both L1 and L2 languages, the LID classifier often only returns the dominant label L1. To address this limitation, MaskLID employs a strategy to mask text features associated with L1, allowing the LID to classify the text as L2 in the next round. This method uses the LID itself to identify the features that require masking and does not rely on any external resource. In this work, we explore the use of MaskLID for two open-source LIDs (GlotLID and OpenLID), that are both based on the FastText architecture. Code and demo are available at github.com/cisnlp/MaskLID. 1 Introduction Code-switching (CS), the juxtaposition of two or more languages within a single discourse (Gumperz, 1982), is prevalent in both written and spoken communication (Sitaram et al., 2019; Do˘gruöz et al., 2021). While CS has traditionally been explored as a speech phenomenon (Milroy and Muysken, 1995; Auer, 2013), the increasing prevalence of CS in digital communication, such as SMS and social media platforms (Das and Gambäck, 2013; Bali et al., 2014), requires the development of techniques to also analyze CS in written texts. There is however a lack of CS data for researchers, making it difficult to study CS and to effectively train CS-aware models. This shortage affects many NLP applications dealing with CS scenarios (Solorio et al., 2021; Winata et al., 2023). A first step towards the collection of high-quality corpora of CS texts is thus to identify samples of CS in running texts. Previous works on CS language identification (LID) have mainly focused on building word-level LIDs for code-switching between specific pairs of languages, and are often limited to recognize only two languages (Solorio et al., 2014; Nguyen and Do˘gruöz, 2013; Elfardy et al., 2013; Barman et al., 2014). However, such approaches are not realistic on a larger scale, especially considering that texts on the web typically lack prior information about the languages that are actually being used. More recently, Burchell et al. (2024) have investigated the use of high-quality LID at the sentencelevel to detect instances of CS. They propose to reformulate CS LID as a sentence-level task and to associate each segment with a set of language labels. Their investigation reveals the difficulty of achieving effective CS LID with existing LID models. Furthermore, their findings indicate that such LIDs predominantly predict only one of the languages occurring in CS sentences. In this work, we continue this line of reserach and introduce MaskLID, a method that also uses high-quality sentence-level LID to identify CS segments. By masking the presence of the text features associated with the dominant language, MaskLID improves the ability to recognize additional language(s) as well. We explain in detail how MaskLID works in cooperation with two existing LIDs that are based on the FastText (Bojanowski et al., 2017) architecture in Section 3. As we discuss, our method can identify arbitrary pairs of languages, and is also able to detect mixtures of more than two languages in the same segment. Being based on FastText, it is also extremely fast. This two properties make MaskLID well suited to mine large web corpora for examples of real-world CS segments, that can then serve as valuable training data for applications designed to handle CS inputs. We evaluate MaskLID on two test datasets contain459 ing both CS and monolingual data, showing the benefits of using MaskLID (see Section 4). 2 One Sentence, Multiple Languages 2.1 Code-switching, Code-mixing Code-switching (CS) can be defined as the alternate use of two languages within the same utterance and can happen either between sentences (intersentential CS) or within a sentence (intra-sentential CS or code-mixing) (Gumperz, 1982). While loanwords are often seen as a simple form of CS, their assimilation into a foreign linguistic system sometimes yields a mixed use of languages within a single word. For the purpose of this work, we mostly focus on inter-sentential CS and use the terms codeswitching and code-mixing interchangeably, even though our approach could in fact apply to longer chunks of texts. From an abstract perspective, the main trait of CS is thus the juxtaposition of two (or more) languages within a single segments, a view that is also adopted in e.g. from Bali et al. (2014). From this perspective, CS ID can be formulated as identifying more than one language ID in a given text segment. We also use the fact that mixing does not take place randomly (Myers-Scotton, 1997), and that one language plays a dominant role and provides the linguistic structure into which inserts from other languages can take place. In the next paragraph, we discuss two previous approaches that share this view and which serve as the foundation of MaskLID. For other related works, refer to Appendix A. 2.2 Detecting CS with Lexical Anchors Our work is most closely related to the research of Mendels et al. (2018). They propose a method to identify CS data in sentences written in two languages L1 and L2. Their approach first requires a language identifier that is able to label the majority language of a document as language L1, even when the document also contains words that belong to L2. This aligns with our setup, as sentence-level LID models trained on monolingual texts often demonstrate similar performance on CS data, primarily predicting the dominant language L1 (Burchell et al., 2024). Mendels et al. (2018) also introduce the concept of anchors for each language, defining an anchor as a word belonging to only one language within a language pool L. The set of anchors in their work is computed based on the analysis of monolingual corpora, and constitutes an external resource to their CS LID system. To relax the definition of anchors, they also introduce the notion of weak anchor for a language L2 relative to some other language L1: an anchor is considered a weak anchor’ if it is observed in monolingual L2 corpora but not in monolingual L1 corpora. In their definition of CS for L1+L2 sentences, a sentence is then considered CS if and only if it is predicted to be in language L1 by the LID model and contains at least one weak anchor from the L2 anchor set (relative to L1). Our method shares similarity with this work in that, for L1+L2 sentences, the initial step consists in the identification of L1. However, while their approach requires the identification of sets of weak anchors for each language pair, we identify the minority language(s) L2 using only features that are internal to the main LID model, dispensing from the need to compile external resources. 2.3 CS Detection as Set Prediction Problem Another work that is closely related to ours is the research conducted by Burchell et al. (2024). They use three different sentence-level LID models for CS LID: 1) OpenLID (Burchell et al., 2023), a highquality LID model operating at the sentence level; 2) Multi-label OpenLID, which is similar to OpenLID but is trained with a binary cross-entropy loss instead of the conventional cross-entropy, and delivers Yes-No decisions for each possible language;1 and 3) Franc (Wormer, 2014), which uses trigram distributions in the input text and a language model to compute languages and their scores. However, the result of these models on CS LID are not very promising especially for the TurkishEnglish CS dataset (see Section 4). One reason is that the occurrence of one single English word in a Turkish sentence is tagged in the gold reference as an instance of CS. Yet, one single word may not be enough to yield large logit values for the English label in these difficult predictions. But this is not the only reason these models fail. Scaling the baseline LID to support more languages, which is a strong motivation behind models such as GlotLID (Kargaran et al., 2023) and OpenLID, makes CS LID predictions more challenging. For instance, when the model encounters a TurkishEnglish sentence and predicts Turkish as the top 1See FastText documentation: fasttext.cc/docs/en/supervised-tutorial.html# multi-label-classification. 460 language, the second best prediction may not be English, but a language closest to Turkish instead, such as North Azerbaijani or Turkmen, which have more active ngram features in the CS sentence than English. Consider, for instance, the example sentence from Burchell et al. (2024, Table 9): bir kahve dükkanında geçen film tadında güzel bir şarkıya ayrılsın gece falling in love at a coffee shop OpenLID’s top 5 predictions for this sentence are: 1) Turkish, 2) North Azerbaijani, 3) Crimean Tatar, 4) Turkmen, 5) Tosk Albanian, with English predicted as the 15th most likely language. Yet, for a speaker of either Turkish or English, it is obvious that this sentence is a mixture of just these two languages. To solve this, MaskLID suggests to mask the Turkish part of the sentence: <MASK> film <MASK> falling in love at a coffee shop. If we now ask OpenLID to predict this masked sentence (without the token <MASK>), the top prediction would be English with 0.9999 confidence. MaskLID makes models such as OpenLID much more suitable for this task. Details on how MaskLID computes the masked parts are in Section 3. 3 MaskLID 3.1 FastText-based LIDs In this paper, we explore the use of MaskLID for LIDs based on the FastText (Bojanowski et al., 2017) architecture. However, it is also possible to apply MaskLID to other LIDs, as long as they enable to determine how much each feature (e.g., word) contributes to each supported language. FastText is one of the most popular LID architectures due to its open-source nature, high performance, ease of use, and efficiency. FastText classifier is a multinomial logistic classifier that represents the input sentence as a set of feature embeddings, making it easy to assess each feature’s contribution to the final prediction. Given a sentence s, let f1, f2, . . . , fT represent the features extracted from s. Note that these features are linearly ordered, i.e., fi precedes fi+1 in s. FastText maps these features onto vectors in Rd via feature embeddings x1, x2, . . . , xT . The dimensionality of these embeddings, denoted d, is a hyperparameter. A base LID using FastText architecture computes the posterior probability for a language c ∈[1 : N] by applying the softmax function over logits as: P(c|s) = exp(bc · 1 T PT t=1 xt) PN c′=1 exp(bc′ · 1 T PT t=1 xt) . (1) P(c|s) is the base LID probability of the input text s belonging to language c, bc is the weight vector for language c, and N is the total number of classes supported by the base LID. To evaluate how much each feature contributes to each supported language, we need to compute logits separately for each feature. For simplicity and alignment with the FastText tokenizer (which considers white-spaces as token boundaries), we set the level of granularity of features to be the word level. The word-level feature embedding is obtained as the summation of all feature embeddings that build each word. Noting W the number of words in a sentence s, we define the N × W matrix V(s), where each element Vc,t(s) represents the logits for language c and word-level feature xt: Vc,t(s) = bc · xt. (2) 3.2 The MaskLID Method We define the MaskLID algorithm in alignment with Burchell et al. (2024): given an input text, the objective is to return a set of codes corresponding to the language(s) it contains. However, MaskLID is more explainable and provides insights into which parts of the sentence contributed to its decision. The MaskLID algorithm works as follows: Input: 1) sentence s. 2) α, an integer parameter used to define strong associations between words and languages: having a language appear in the top-α logit values for a word is a strong cue that this word belongs to that language. 3) β, an integer parameter used to define weak associations between words and languages: languages appearing in the top-β logit values for a word are weakly associated with that word. β is always greater than α. 4) τ, a threshold representing the minimum size of a sentence (in bytes) for which the LID makes reliable decisions. 5) λ, a parameter defining the number of times the algorithm should be repeated. 461 Output: 1) List of predicted languages, along with their associated word-level features. Procedure: 0) Take sentence s and compute V(s) using Eq. (2). Assign s to variable u. 1) Compute the posterior probability for each possible language using Eq. (1). Find the most likely class (L1 = arg maxc P(c|u)) along with its corresponding probability P(L1|u). Assign L1 to variable Lu. 2) Process column V:,t(s) for each unmasked word t in u. If the value of VLu,t(s) is in the top-β values for that column, then assign word t to language Lu. If the value of VLu,t is among the top-α values for that column, mask word t from sentence u. Masked words play here a role similar to the anchors used in (Mendels et al., 2018): recall that for these authors, anchor words are selected to uniquely identify one language – there removal is likely to decrease the recognition of L1, without impacting the ability to recognize L2. In our approach, we identify these pseudo-anchors on the spot, relying on the LID internal scoring procedure. 3) check if length of u (in bytes, ignoring masked words) is greater than τ. If not, then terminate. This is one termination condition (for additional considerations, refer to Appendix B). Setting τ = 0 will just check that the masked sentence is not empty, but it is better to use a non-zero threshold, as most sentence-level LIDs do not reliably predict short sentences (Jauhiainen et al., 2019). 4) if the number of iterations is lower than λ then go to back to step 1, else stop. The complexity of this greedy procedure is O(λ × T × N log β). 4 Experiments and Results Here, we provide an overview of our baselines and test data. We assess the performance of the baselines by testing them both with and without MaskLID. Our setting of hyperparameters is explained in Appendix C.2. 4.1 Baselines Our baseline LID models are OpenLID2 (supporting ≈200 languages) and GlotLID v3.03 (supporting ≈2100 languages), two LIDs based on the FastText architecture. For a fair comparison between these models, we limit the languages that GlotLID supports to the same set as OpenLID (see details in Appendix C.1). Two exceptions are romanized Nepali (nep_Latn) and Hindi (hin_Latn), which are not supported by OpenLID, but for which we also have test data that is also used to evaluate MaskLID with GlotLID. 4.2 Test Data We choose Turkish-English (Yirmibe¸so˘glu and Eryi˘git, 2018), Hindi-English (Aguilar et al., 2020), Nepali-English (Aguilar et al., 2020) and BasqueSpanish (Aguirre et al., 2022), as our test datasets. We have data for four CS labels and six single labels (see Table 1). Details regarding these test sets, preprocessing, their descriptions, and information on access are in Appendix D. 4.3 Metrics We use the number of exact (#EM) and partial matches (#PM), along with the count of false positives (#FP) as the main metrics in our evaluation. To ensure clarity and prevent misinterpretation of the results, we report the absolute number of instances rather than percentages. 1) #EM: This metric counts a prediction as a match when it exactly matches the true 2) #PM: This metric counts a prediction as a match when only part of the information is correct: for a single label, if it is part of the prediction; for a CS label, if part of the label exactly matches the prediction. 3) #FP: If any label other than X is misclassified as X, it counts as an FP for X. We do not consider the #FP for single labels, as partial matches of CS are counted as FP for single labels. Therefore, we only report the FP for CS sentences. 4.4 Results Table 1 presents the results on the test data for two baseline LIDs and two settings, with and without MaskLID. The best exact match (#EM) for CS labels is in boldface in the table, demonstrating that 2https://huggingface.co/laurievb/openlid 3https://huggingface.co/cis-lmu/glotlid 462 Baseline + MaskLID Baseline #EM/#PM ↑ #FP ↓ #EM/#PM ↑ # FP ↓ #S GlotLID OpenLID GlotLID OpenLID GlotLID OpenLID GlotLID OpenLID CS Turkish–English 333 91/328 68/327 0 0 4/327 4/326 0 0 CS Basque–Spanish 440 43/430 47/426 0 0 9/426 9/424 0 3 (from Spanish) CS Hindi–English 253 29/219 0 5/211 0 CS Nepali–English 712 22/444 0 0/420 0 Single Basque 357 354/354 355/355 353/353 355/355 Single Spanish 347 335/337 297/300 337/340 287/311 Single Turkish 340 333/337 329/334 335/337 329/335 Single Hindi 29 18/19 17/18 Single Nepali 197 63/75 68/72 Single English 508 459/490 428/469 486/490 455/462 Table 1: Number of exact (#EM) and partial matches (#PM) and count of false positives (#FP) calculated over CS and single label test instances. The best exact match for CS instances is in bold, and the second is underlined. #S reports the number of sentences for each test set. the baseline with MaskLID achieves better performance compared to the baseline without it. Partial matches (#PM) in both settings (with and without MaskLID) are quite similar. For CS Turkish-English, MaskLID detects 91 CS at best, compared to 4 without it. For BasqueSpanish, MaskLID detects 47 CS, versus 9 without it. For Hindi-English, MaskLID detects 29 CS, compared to 5 without it. For Nepali-English, MaskLID detects 22 CS, while none are detected without it. In all single-language test instances, GlotLID outperforms OpenLID. This is also the case for CS language instances, except for Basque-Spanish. Considering the relatively poorer performance of OpenLID in both single Basque and single Spanish, overall, GlotLID proves to be the better model for these tasks. Additional Considerations. For CS instances: 1) The difference between #PM and #EM corresponds to the number of times only one of two mixed languages in a CS instance is predicted. 2) The difference between number of sentences (#S) and #PM corresponds to the number of times none of the languages in the CS instance is predicted. In all CS setups, the #EM and #PM value in the baseline with MaskLID are always greater than without. Additionally, the difference between #PM and #EM is also smaller, which indicates a higher precision in CS LID. For single language instances: 1) The difference between #PM and #EM corresponds to the number of times the single label instance is classified as part of a multi-label instance. 2) The difference between #S and #PM corresponds to the number of times a single label is never predicted, even as part of a multi-label instance. For all single language instances, the results are quite similar except for single English, where the number of incorrect CS in baseline with MaskLID (#PM - #EM) is greater than with baseline alone. To address this, using a larger minimum length τ helps decrease the number of CS false positives. For single English, in GlotLID with MaskLID setting, increasing τ from 20 to 25 raises the #EM from 459 to 473; however, it reduces the #EM in GlotLID with MaskLID setting for CS Turkish-English from 91 to 67, CS Hindi-English from 29 to 26, and CS Nepali-English from 22 to 18. Examples of successes and failures of MaskLID are provided in Appendix E. 5 Conclusion We present MaskLID, a simple, yet effective, method for scalable code-switching (CS) language identification (LID). MaskLID is designed to complement existing high-performance sentence-level LID models and does not require any training. In our experiments, MaskLID increases CS LID by a factor of 22 in Turkish-English, by 22 in Nepali-English, by 6 in Hindi-English and by 5 in Basque–Spanish. In future work, we aim to explore the use of subword-level, instead of word-level features, extending the applicability of the method to languages that do not use spaces for word separation. Additionally, we plan to generalize this method to other LID models using techniques like LIME (Ribeiro et al., 2016) to map features to languages. Last, we intend to apply MaskLID on the web data, in the hope that it will help build larger high-quality web corpora for CS. 463 Limitations The CS testsets we use in this study only represent a small subset of potential uses of CS languages. Creating additional CS datasets for more languages would definitely be an extension of this work. MaskLID uses hyperparameters, and changing the model and the set of languages it supports may require adjustments to these parameters. Although MaskLID detects more CS than the standalone baseline LID models, it still has a long way to go to predict the majority of them. One important source of remaining errors is loan words, where the L2 insert is just one word long: these cannot be detected with out current hyperparameter settings. The performance of MaskLID is also bound by the LID it uses; it might not have good performance for some languages, resulting e.g. in a large number of false positives. Ethics Statement MaskLID uses openly available open-source LID models and does not require any additional resources except for hyperparameters. Concerning the evaluation data, these datasets have undergone anonymization to safeguard the privacy of all parties involved. We provide links to the data and do not host it ourselves. We provide detailed descriptions of our method and evaluation process. Additionally, we make our code openly available to foster collaboration and reproducibility. Acknowledgements The authors thank the anonymous reviewers and editors for their comments of the previous version of this work. This research was supported by DFG (grant SCHU 2246/14-1). References Ife Adebara, AbdelRahim Elmadany, Muhammad Abdul-Mageed, and Alcides Inciarte. 2022. AfroLID: A neural language identification tool for African languages. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 1958–1981, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Gustavo Aguilar, Sudipta Kar, and Thamar Solorio. 2020. LinCE: A centralized benchmark for linguistic code-switching evaluation. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 1803–1813, Marseille, France. European Language Resources Association. Maia Aguirre, Laura García-Sardiña, Manex Serras, Ariane Méndez, and Jacobo López. 2022. BaSCo: An annotated Basque-Spanish code-switching corpus for natural language understanding. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 3158–3163, Marseille, France. European Language Resources Association. Mohamed Al-Badrashiny and Mona Diab. 2016. LILI: A simple language independent approach for language identification. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 1211– 1219, Osaka, Japan. The COLING 2016 Organizing Committee. Peter Auer. 2013. Code-switching in conversation: Language, interaction and identity. Routledge. Kalika Bali, Jatin Sharma, Monojit Choudhury, and Yogarshi Vyas. 2014. “I am borrowing ya mixing ?” an analysis of English-Hindi code mixing in Facebook. In Proceedings of the First Workshop on Computational Approaches to Code Switching, pages 116–126, Doha, Qatar. Association for Computational Linguistics. Utsab Barman, Amitava Das, Joachim Wagner, and Jennifer Foster. 2014. Code mixing: A challenge for language identification in the language of social media. In Proceedings of the First Workshop on Computational Approaches to Code Switching, pages 13–23, Doha, Qatar. Association for Computational Linguistics. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146. Ralf D Brown. 2012. Finding and identifying text in 900+ languages. Digital Investigation, 9:S34–S43. Laurie Burchell, Alexandra Birch, Nikolay Bogoychev, and Kenneth Heafield. 2023. An open dataset and model for language identification. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 865–879, Toronto, Canada. Association for Computational Linguistics. Laurie Burchell, Alexandra Birch, Robert Thompson, and Kenneth Heafield. 2024. Code-switched language identification is harder than you think. In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), pages 646–658, St. Julian’s, Malta. Association for Computational Linguistics. Amitava Das and Björn Gambäck. 2013. Code-mixing in social media text. Traitement Automatique des Langues, 54(3):41–64. Amitava Das and Björn Gambäck. 2014. Identifying languages at the word level in code-mixed Indian 464 social media text. In Proceedings of the 11th International Conference on Natural Language Processing, pages 378–387, Goa, India. NLP Association of India. A. Seza Do˘gruöz, Sunayana Sitaram, Barbara E. Bullock, and Almeida Jacqueline Toribio. 2021. A survey of code-switching: Linguistic and social perspectives for language technologies. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1654–1666, Online. Association for Computational Linguistics. Jonathan Dunn. 2020. Mapping languages: The corpus of global language use. Language Resources and Evaluation, 54:999–1018. Jonathan Dunn and Lane Edwards-Brown. 2024. Geographically-informed language identification. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 7672–7682, Torino, Italia. ELRA and ICCL. Heba Elfardy, Mohamed Al-Badrashiny, and Mona Diab. 2013. Code switch point detection in Arabic. In Natural Language Processing and Information Systems: 18th International Conference on Applications of Natural Language to Information Systems, NLDB 2013, Salford, UK, June 19-21, 2013. Proceedings 18, pages 412–416. Springer. John J Gumperz. 1982. Discourse strategies. 1. Cambridge University Press. Tommi Jauhiainen, Heidi Jauhiainen, and Krister Lindén. 2022. HeLI-OTS, off-the-shelf language identifier for text. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 3912–3922, Marseille, France. European Language Resources Association. Tommi Jauhiainen, Marco Lui, Marcos Zampieri, Timothy Baldwin, and Krister Lindén. 2019. Automatic language identification in texts: A survey. Journal of Artificial Intelligence Research, 65:675–782. Navya Jose, Bharathi Raja Chakravarthi, Shardul Suryawanshi, Elizabeth Sherly, and John P McCrae. 2020. A survey of current datasets for codeswitching research. In 2020 6th international conference on advanced computing and communication systems (ICACCS), pages 136–141. IEEE. Amir Kargaran, Ayyoob Imani, François Yvon, and Hinrich Schuetze. 2023. GlotLID: Language identification for low-resource languages. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 6155–6218, Singapore. Association for Computational Linguistics. Amir Hossein Kargaran, François Yvon, and Hinrich Schütze. 2024. GlotScript: A resource and tool for low resource writing system identification. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 7774– 7784, Torino, Italia. ELRA and ICCL. Laurent Kevers. 2022. CoSwID, a code switching identification method suitable for under-resourced languages. In Proceedings of the 1st Annual Meeting of the ELRA/ISCA Special Interest Group on UnderResourced Languages, pages 112–121, Marseille, France. European Language Resources Association. Ben King and Steven Abney. 2013. Labeling the languages of words in mixed-language documents using weakly supervised methods. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1110–1119, Atlanta, Georgia. Association for Computational Linguistics. Tom Kocmi and Ondˇrej Bojar. 2017. LanideNN: Multilingual language identification on character window. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 927–936, Valencia, Spain. Association for Computational Linguistics. Thomas Lavergne, Gilles Adda, Martine Adda-Decker, and Lori Lamel. 2014. Automatic language identity tagging on word and sentence-level in multilingual text sources: a case-study on Luxembourgish. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC’14), pages 3300–3304, Reykjavik, Iceland. European Language Resources Association (ELRA). Holy Lovenia, Samuel Cahyawijaya, Genta Winata, Peng Xu, Yan Xu, Zihan Liu, Rita Frieske, Tiezheng Yu, Wenliang Dai, Elham J. Barezi, Qifeng Chen, Xiaojuan Ma, Bertram Shi, and Pascale Fung. 2022. ASCEND: A spontaneous Chinese-English dataset for code-switching in multi-turn conversation. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 7259–7268, Marseille, France. European Language Resources Association. Manuel Mager, Özlem Çetino˘glu, and Katharina Kann. 2019. Subword-level language identification for intra-word code-switching. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2005–2011, Minneapolis, Minnesota. Association for Computational Linguistics. Gideon Mendels, Victor Soto, Aaron Jaech, and Julia Hirschberg. 2018. Collecting code-switched data from social media. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA). 465 Lesley Milroy and Pieter Muysken. 1995. One speaker, two languages: Cross-disciplinary perspectives on code-switching, volume 10. Cambridge University Press. Giovanni Molina, Fahad AlGhamdi, Mahmoud Ghoneim, Abdelati Hawwari, Nicolas ReyVillamizar, Mona Diab, and Thamar Solorio. 2016. Overview for the second shared task on language identification in code-switched data. In Proceedings of the Second Workshop on Computational Approaches to Code Switching, pages 40–49, Austin, Texas. Association for Computational Linguistics. Carol Myers-Scotton. 1997. Duelling languages: Grammatical structure in codeswitching. Oxford University Press. Dong Nguyen and A. Seza Do˘gruöz. 2013. Word level language identification in online multilingual communication. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 857–862, Seattle, Washington, USA. Association for Computational Linguistics. NLLB Team, Marta R. Costa-jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, Anna Sun, Skyler Wang, Guillaume Wenzek, Al Youngblood, Bapi Akula, Loic Barrault, Gabriel Mejia Gonzalez, Prangthip Hansanti, John Hoffman, Semarley Jarrett, Kaushik Ram Sadagopan, Dirk Rowe, Shannon Spruit, Chau Tran, Pierre Andrews, Necip Fazil Ayan, Shruti Bhosale, Sergey Edunov, Angela Fan, Cynthia Gao, Vedanuj Goswami, Francisco Guzmán, Philipp Koehn, Alexandre Mourachko, Christophe Ropers, Safiyyah Saleem, Holger Schwenk, and Jeff Wang. 2022. No language left behind: Scaling human-centered machine translation. arXiv preprint arXiv:2207.04672. Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135– 1144. Shruti Rijhwani, Royal Sequiera, Monojit Choudhury, Kalika Bali, and Chandra Shekhar Maddila. 2017. Estimating code-switching on Twitter with a novel generalized word-level language detection technique. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1971–1982, Vancouver, Canada. Association for Computational Linguistics. Sunayana Sitaram, Khyathi Raghavi Chandu, Sai Krishna Rallabandi, and Alan W. Black. 2019. A survey of code-switched speech and language processing. arXiv preprint arXiv:1904.00784. Thamar Solorio, Elizabeth Blair, Suraj Maharjan, Steven Bethard, Mona Diab, Mahmoud Ghoneim, Abdelati Hawwari, Fahad AlGhamdi, Julia Hirschberg, Alison Chang, and Pascale Fung. 2014. Overview for the first shared task on language identification in code-switched data. In Proceedings of the First Workshop on Computational Approaches to Code Switching, pages 62–72, Doha, Qatar. Association for Computational Linguistics. Thamar Solorio, Shuguang Chen, Alan W. Black, Mona Diab, Sunayana Sitaram, Victor Soto, Emre Yilmaz, and Anirudh Srinivasan, editors. 2021. Proceedings of the Fifth Workshop on Computational Approaches to Linguistic Code-Switching. Association for Computational Linguistics, Online. Aleksander Stensby, B John Oommen, and OleChristoffer Granmo. 2010. Language detection and tracking in multilingual documents using weak estimators. In Joint IAPR International Workshops on Statistical Techniques in Pattern Recognition (SPR) and Structural and Syntactic Pattern Recognition (SSPR), pages 600–609. Springer. Genta Winata, Sudipta Kar, Marina Zhukova, Thamar Solorio, Mona Diab, Sunayana Sitaram, Monojit Choudhury, and Kalika Bali, editors. 2023. Proceedings of the 6th Workshop on Computational Approaches to Linguistic Code-Switching. Association for Computational Linguistics, Singapore. Titus Wormer. 2014. Franc library. Zeynep Yirmibe¸so˘glu and Gül¸sen Eryi˘git. 2018. Detecting code-switching between Turkish-English language pair. In Proceedings of the 2018 EMNLP Workshop W-NUT: The 4th Workshop on Noisy Usergenerated Text, pages 110–115, Brussels, Belgium. Association for Computational Linguistics. A Related Work LID has been a longstanding and active research area in NLP (Jauhiainen et al., 2019). Past research in LID can be classified into two primary subcategories: 1) monolingual LID; 2) CS LID. The first category is designed under the assumption that the text is entirely monolingual, or the text contains discrete monolingual chunks (e.g., sentences) in different languages. The aim of these works is to identify the language of the whole text or each chunk. The majority of research on this topic has been focused on covering more languages, with recent work claiming to cover over a thousand (Kargaran et al., 2023; Adebara et al., 2022; NLLB Team et al., 2022; Burchell et al., 2023; Dunn, 2020; Dunn and Edwards-Brown, 2024; Jauhiainen et al., 2022; Brown, 2012). The second category has received less attention than the first category. LID at either the document or sentence level is not effective in accurately identifying CS, which may occur within a sentence. 466 LIDs that identify languages at the word level are proposed to address this issue. The majority of studies have focused on scenarios where two predefined languages are looked for in the input, specifically concentrating on binary language detection at the word level (Nguyen and Do˘gruöz, 2013; Das and Gambäck, 2014; Elfardy et al., 2013; King and Abney, 2013; Al-Badrashiny and Diab, 2016). While some attempts choose sentence-level granularity (Stensby et al., 2010; Lavergne et al., 2014), most CS LIDs prefer operating at the word or token level. Nevertheless, certain approaches broaden the analysis to the character level (Kocmi and Bojar, 2017). Among the most recent works on CS LID, Kevers (2022) propose a method to locate CS, primarily in multilingual documents when language diversity is unstructured. It uses a sliding window and determines the local language of each token. This method requires linguistic resources such as word lists and monolingual corpora. Rijhwani et al. (2017) acknowledge the challenges in building word-level LID for CS LID. They propose an unsupervised word-level LID approach and apply it to estimate language pairs code-switched on Twitter. Their findings indicate that approximately 3.5% of tweets were code-switched. Mager et al. (2019) extend the LID task from the word level to the subword level, involving the splitting of mixed words and tagging each part with an LID. However, training such LID models at the subword level requires CS training data, which is not practical on a larger scale. B Confidence in MaskLID We discuss here additional considerations regarding the design MaskLID, notably aimed the keeping a good balance between over and under detection of labels, which is a key aspect to reliably detect instances of CS. A first comment is that in our approach, the value of parameter α is kept constant. An extension would vary this value during iterations, depending on the desired level of CS-sensitive results. However, selecting a smaller α increases the likelihood of a language being chosen again in the next round(s). In such cases, the α value for the next round should be increased so that more words belonging to L1 are masked. To ensure that MaskLID yields a low false positive rate (FPR), the feature set assigned to language Lu in step 2 should have a minimum length (in byte) τ. If not, we should increase the β value and repeat the process again to obtain a larger feature set, and evaluate whether the confidence probability prediction for this set is high. If not, terminate the procedure. It is important to note that β does not play a role in masking, as only α affects this process. The reason for defining both α and β instead of relying solely on α is to ensure a minimum byte size so that the probability prediction for this feature set can be trusted and to guarantee its high confidence. Typical α values should thus be lower than β and only target the features that strongly cue language and should accordingly be masked. Maintaining high confidence in steps 1 and 4 is more tricky; the reason for the low confidence probability in these steps could be the presence of another language. However, it could also be because the text is not among the languages supported by the LID (Kargaran et al., 2023). We suggest using a low confidence threshold for these steps or not using one at all. Finally, our algorithm uses two termination conditions, one based on the minimum sentence length (τ) , one based on the maximum number of languages in a given sentence (λ): 2 or 3 is recommended. In our test dataset, we know in advance that the number of languages is at most 2. C Experimental Settings C.1 The Label Sets of LIDs Following the labeling proposed by NLLB Team et al. (2022), our two baseline LIDs use languagescripts as labels. They define a language-script as a combination of a ISO 639-3 language code and a ISO 15924 script code. We constrain GlotLID to the set of languages supported by OpenLID. Most of the labels supported by OpenLID are supported by GlotLID. The total number of labels is 201 for OpenLID, and we select 200 labels for the constrained version of GlotLID. The only difference is due to the fact that OpenLID uses two labels for the Chinese language (zho), written in Hans and Hant scripts, whereas GlotLID combines both under the label Hani. Also, GlotLID does not support acq_Arab, nor does it not support labels pes_Arab and prs_Arab individually (as OpenLID does) but as the merged macrolanguage fas_Arab. To compensate for the lack of these two labels and to also perform experiments for Hindi and Nepali in romanized script, we add hin_Latn and npi_Latn to the set of labels for con467 strained GlotLID. To restrict a FastText-based LID model to a specific subset of languages, as indicated by Eq. (1), we only need to consider the bc values for languages c that are members of the chosen set of languages. This implies that languages not included in this set will be excluded from the softmax computation. Additionally, the rows belonging to these languages are also deleted from the matrix V(s) (Eq. (2)). C.2 Hyperparameters We here explain the hyperparameters specific to each method. MaskLID. We generated 12 small synthetic code-switch corpora by combining sentence parts from French, English, Arabic, and Persian languages, ensuring a presence of at least 30% from each of the two languages participating in the final sentence. Subsequently, we applied MaskLID with different hyperparameters to achieve the best results. The hyperparameters derived from this method, which we used for the experiments in this paper, are as follows: α = 3, β = 15, λ = 2, and τ = 20. Additionally, we employed a high-confidence threshold of 0.9 for OpenLID and GlotLID to evaluate the probability predictions for the feature set in step 2 of the algorithm, as further detailed in Section B. Baseline. Following Burchell et al. (2024), we use a threshold of 0.3 to select languages (i.e., among all languages supported by the model, the languages with confidence probability greater than 0.3 are selected). However, for a fairer comparison (since λ = 2), we only consider the top two that pass this threshold. D Data Selection The CS test sets available for consideration cover a small potential language set (Jose et al., 2020; Aguilar et al., 2020). Accessing suitable CS test sets for evaluating our method poses several challenges: 1) Arabic dialects, such as Standard ArabicEgyptian Arabic, are represented in some CS datasets (Elfardy et al., 2013; Aguilar et al., 2020). However, none of the baseline LID models yield impressive results for Arabic dialects. For instance, according to Burchell et al. (2024, Table 3), OpenLID exhibits the worst FPR for Standard Arabic and Egyptian Arabic among all the languages it supports. 2) Certain datasets present unrealistic scenarios for testing our method. For example, MandarinEnglish datasets with Mandarin written in Hani script and English in Latin script (Lovenia et al., 2022). Methods employing script detection can separate perfectly Hani from Latin, and perform two separate LID predictions.4 This does not showcase the advantages of MaskLID and the performance only is dependent to the LID performance. 3) Many accessible datasets involve CS between one language and English. Given these challenges, we decided to use datasets involving English in three sets (TurkishEnglish, Hindi-English, Nepali-English) and another set with CS between languages without English (Basque-Spanish). The Turkish-English and Basque-Spanish datasets are also used by Burchell et al. (2024). We use the code provided by these authors to label them into sentence-level tags. Turkish-English. Yirmibe¸so˘glu and Eryi˘git (2018) developed a Turkish–English dataset for CS as part of their work on CS LID for this langauge pair. The dataset is sourced from Twitter and the Ek¸si Sözlük online forum. Labels in this dataset are assigned at the token level, indicating whether each token is Turkish or English. The dataset comprises 376 lines of data, and 372 of these sentences are labeled as CS. However, for our purposes, we also require monolingual datasets in these languages, not just CS data. To address this, we created a monolingual version of the CS data for the Turkish language by removing tokens labeled as English. A similar approach cannot be applied to create an English monolingual dataset, as the English parts of the data are short sentences and would adversely impact the quality of the English monolingual data. The original dataset can be found here: github. com/zeynepyirmibes/code-switching-tr-en. Basque-Spanish. The Basque–Spanish corpus (Aguirre et al., 2022) comprises Spanish and Basque sentences sourced from a collection of text samples used in training bilingual chatbots. Volunteers were presented with these sentences and tasked with providing a realistic alternative text with the same meaning in Basque–Spanish CS. The dataset consists of 2304 lines of data, with 1377 sentences labeled as CS, 449 as Basque, and 478 as Spanish. The original dataset is available at: 4For example, GlotScript (Kargaran et al., 2024) provides a separate_script function that divides text based on different scripts: github.com/cisnlp/GlotScript. 468 github.com/Vicomtech/BaSCo-Corpus. Hindi-English & Nepali-English. Aguilar et al. (2020) provide a benchmark for linguistic CS evaluation, used in previous shared tasks on CS LID (Solorio et al., 2014; Molina et al., 2016). We test on two of its language pairs, Hindi– English and Nepali-English, using the validation sets since the test sets are private. These datasets are both sourced from Twitter and are annotated at the word level. The Hindi-English dataset has 739 lines: 322 CS, 31 Hindi, and 386 English sentences. The NepaliEnglish dataset has 1332 lines: 943 CS, 217 Nepali, and 172 English sentences. We consider both CS and monolingual data for experiments. Preprocessing Sentence-level LIDs may not perform well on very short sentences. In the corpus creation pipelines using these LIDs, shorter sentences are typically discarded. Therefore, we filter sentences with a length of 20 byte or fewer for monolingual sentences and sentences with a length of 40 byte or fewer for CS sentences. The remaining number of sentences (#S) for each portion of the data is detailed in Table 1. In addition, we clean user tags and emojis from the datasets before applying LIDs. E Examples We showcase below some failed and successful examples of MaskLID. Failed Example. In this example, the only English word is “status”. yarın bir status yapıp işlerin üstünden geçelim As we define the minimum length for each selected language to be at least τ = 20 byte, this sentence gets classified as Turkish, which is acceptable. If, otherwise, “status” would be evaluated alone, OpenLID would predict “Norwegian Nynorsk” language, and GlotLID “Kinyarwanda”. This is the reason why τ is important to be set because otherwise the result of LID cannot be trusted. The average length of the English part of sentences in the CS Turkish-English getting classified solely as Turkish by GlotLID + MaskLID is 17.858 bytes and by OpenLID + MaskLID is 19.877 bytes. So the main reason for failing these models here is the English part of this sentences is short and often does not pass the minimum length condition. Successful Example. In this example, “deadline crash walking I heard it at study” are the English words inserted in the Turkish sentence. These words are not next one to the other, so methods that only consider sliding windows might fail. MaskLID does not depend on the position of words in a sentence and correctly classify this example as Turkish-English CS. ya deadline gelmişti çok büyük bir crash olmuş arkadaşlarla walking yaparken I heard it at boğaziçi sesli study However, predicting it using solely based on OpenLID results in the top 3 labels being “Turkish”, “Turkmen”, and “North Azerbaijani”. The average length of the English part of sentences from CS Turkish-English getting classified correctly as CS Turkish-English by GlotLID + MaskLID is 42.121 bytes and by OpenLID + MaskLID is 45.294 bytes. 469
2024_short
43
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 470–487 August 11-16, 2024 ©2024 Association for Computational Linguistics An Empirical Analysis on Large Language Models in Debate Evaluation Xinyi Liu∗ Pinxin Liu∗ Hangfeng He University of Rochester [email protected] [email protected] [email protected] Abstract In this study, we investigate the capabilities and inherent biases of advanced large language models (LLMs) such as GPT-3.5 and GPT-4 in the context of debate evaluation. We discover that LLM’s performance exceeds humans and surpasses the performance of state-of-theart methods fine-tuned on extensive datasets in debate evaluation. We additionally explore and analyze biases present in LLMs, including positional bias, lexical bias, order bias, which may affect their evaluative judgments. Our findings reveal a consistent bias in both GPT-3.5 and GPT-4 towards the second candidate response presented, attributed to prompt design. We also uncover lexical biases in both GPT-3.5 and GPT-4, especially when label sets carry connotations such as numerical or sequential, highlighting the critical need for careful label verbalizer selection in prompt design. Additionally, our analysis indicates a tendency of both models to favor the debate’s concluding side as the winner, suggesting an end-of-discussion bias.1 1 Introduction Prior research in automatic debate evaluation has predominantly relied on pre-trained encoders and the modeling of argument relations and structures (Hsiao et al., 2022; Li et al., 2020; Ruiz-Dolz et al., 2022; Zhang et al., 2023). A significant drawback of these approaches is their dependency on feature engineering and extensive data training, limiting their generalizability across diverse datasets. The advent of advanced large language models (LLMs) such as GPT-3.5 and GPT-4 (Achiam et al., 2023) has marked the beginning of a new era in automating a wide spectrum of complex tasks (Wei et al., 2022; Thirunavukarasu et al., 2023; Lin* et al., 2023; Wang et al., 2023a; Tang et al., 2023; ∗Equal contribution 1Our code is publicly available at https://github.com/ XinyiLiu0227/LLM_Debate_Bias/ Evaluation: Pro side is the winner LLMs Positional biases, Lexical biases, Order biases, etc. Figure 1: Large Language Models presents various biases during the evaluation of long debates. Zhang et al., 2024; Jiang et al., 2023). These models have been increasingly utilized as automatic evaluators (Chiang and Lee, 2023a,b; Lin and Chen, 2023; Chan et al., 2023; Zeng et al., 2023; He et al., 2023). Leveraging LLMs for debate evaluation presents more challenges, including the extended duration of debates, evolving argument dynamics, and the necessity for evaluators to rely on comprehensive knowledge and reasoning that extend beyond the immediate scope of the debate. Our research delves into the utilization of LLMs for debate evaluation, uncovering their zero-shot capabilities that parallel human evaluators and surpass all existing state-of-the-art (SOTA) methods finetuned on ample data (Li et al., 2020; Hsiao et al., 2022). We further investigate potential biases in GPT3.5 and GPT-4 within the context of debate evaluation. While previous research has identified various biases in LLMs, such as persona bias (Wan et al., 2023), political bias (Feng et al., 2023), and positional bias (Wang et al., 2023b), our investigation uniquely concentrates on biases affecting debate evaluation performance, a relatively unexplored 470 domain. Specifically, upon comparing outcomes between scenarios where the positions of candidate responses are switched, persistent bias has been observed in both GPT-3.5 and GPT-4 toward the second candidate response presented, a positional bias induced by the prompt design. Beyond this, both models also display significant lexical biases, particularly when label sets carry connotations such as sequential or magnitude, underscoring the importance of careful selection of label verbalizers in prompt design to mitigate unintended biases (Liu et al., 2023). Moreover, our study reveals that both GPT-3.5 and GPT-4 exhibit a tendency to favor the concluding side of a debate as the winner, pointing to a potential end-of-discussion order bias. Interestingly, after all the identified biases are eliminated, GPT-3.5 still demonstrates a consistent bias, while this residual bias is less obvious for GPT-4. These insights highlight the nuanced nature of biases in LLMs and the complexity of designing fair and unbiased evaluation methodologies for debate evaluation. 2 Methodology LLMs’ capability for debate evaluation. As illustrated in Table 1, we utilize an evaluation template T with two placeholders, T(Side1_label, Side2_label), to solicit debate assessments from LLMs. For each debate topic, two labels are assigned to represent the Pro and Con sides, respectively. These labels are then inserted into the designated slots of the evaluation template, creating a prompt T(Side1_label = P, Side2_label = C). This tailored prompt is subsequently employed to query the LLM for its judgment on the debate. Experiments with other templates are presented in the Appendix A.4. LLMs’ biases in debate evaluation. Analyzing errors and biases in LLMs is complex due to their multifaceted decision-making processes, which are influenced by inherent orders, stances, and the nuances of prompt design. To separate LLM-induced biases from dataset biases, we use a balanced dataset, ensuring class representation equality and mitigating skewness from overrepresentation. This method focuses on LLM-specific biases within a controlled environment, minimizing dataset composition effects. We explore LLM sensitivity to label choice from alphabetic (A/B, Content The content of the whole debate: The current speech in the debate is from the user {Side1_label}: [The content of the side1] The current speech in the debate is from the user {Side2_label}: [The content of the side2] The current speech in the debate is from the user {Side1_label}: [The content of the side1] . . . Prompt Assume you are a debate evaluator, there are two participants in this debate. Given the above context of the whole debate. Please give the decision on which participant is the winner, you only need to give the character(number) of either {Side1_label}, or {Side2_label}. {Side1_label} means user {Side1_label} wins. {Side2_label} means user {Side2_label} wins. Please only give the result without any other words. Table 1: Our evaluation template for determining debate outcomes for Large Language Models. We provide a specific 3-round debate example in Appendix A.5 P/C), numerical (1/-1), to textual format (Pro/Con), providing a comprehensive bias analysis. Our study also examines the applicability of these findings in real-world, original distribution settings, extending the relevance of our insights. 3 Experiments Dataset. We utilize DDO dataset (Durmus and Cardie, 2019), which comprises 77,655 debates from 23 topics on debate.org, structured into rounds with a single utterance from each of the Pro and Con side. We focus on debates of 3 to 5 rounds, defining winners by audience vote differences exceeding two, and exclude debates with forfeits to maintain analysis integrity, following previous works’ setting (Li et al., 2020; Hsiao et al., 2022). The length of these debates aligns well with the input length capacities of current LLMs, making it more suitable than other datasets derived from transcribed debate videos. We present experiments on an additional dataset in the Appendix A.6, which demonstrate consistent findings. The dataset exhibits a win bias towards the Con side across 3 to 5-round debates (36.9% vs. 63.1%, 44.9% vs. 55.1%, 37.9% vs. 62.1%, respectively), likely due to a concluding side bias with Con frequently concluding debates. To evaluate LLMs in debate assessment, we propose two settings: balanced and unbalanced. The unbalanced setting 471 (a) A/B label set (b) P/C label set (c) 1/-1 label set (d) Pro/Con label set Figure 2: The observed positional bias in GPT-3.5 is evident through the alteration in the proportion of Predicted Con outcomes, which increases when Con is positioned as the second candidate response compared to its placement as the first. This consistent preference across all label configurations suggests a systematic positional bias favoring the second candidate, underscoring the model’s sensitivity to the order in which options are presented. replicates the original dataset’s distribution, sampling 500 debates for each round count (totaling 1500). Conversely, the balanced setting aims to examine LLMs’ inherent bias by ensuring equal representation of four scenarios—Pro or Con initiating and winning or losing—with 125 debates each for 3 and 4 rounds, and due to data constraints, 75 debates each for 5 rounds, resulting in 500 debates for 3 and 4 rounds and 300 for 5 rounds. Evaluation metrics. In addition to the accuracy reported by previous works, we measure weighted F-1 score to accommodate the imbalance between Pro win and Con win in the original data distribution, aiming for a more comprehensive understanding. Models. For open-source model, we select LLaMA2-70B (Touvron et al., 2023) as it has been demonstrated as the most powerful model in the LLaMA family. For close-source models, we select the latest stable versions of OpenAI’s GPT-3.5 and GPT-4 models at the time to conduct our experiments, namely gpt-3.5-turbo-1106 and gpt-4-1106preview. Human annotation. To assess the effectiveness of LLMs, two authors manually annotated the “win/lose” outcomes of randomly selected debates independently for 75 debates. Unlike the collective voting in multi-audience settings, this annotation was independently completed by a single annotator. 4 Results and Analysis 4.1 LLMs’ Performance Table 2 reveals that GPT-3.5 and GPT-4 match human evaluators in assessing debates, highlighting their effectiveness. Using 75 debates labeled by two of the authors enables a direct comparison with GPT-3.5 and GPT-4. They achieve significant accuracy and F1 scores—82.04% and 81.85% for GPT-3.5, and 86.22% and 86.01% for GPT-4, respectively, outperforming previous SOTA models. LLaMA2-70B, on the other hand, performs significantly worse than existing methods, being only comparable to the ruble-based method. Thus, it is less likely for LLaMA2 to be adopted as the automatic debate evaluator. Our further experiments for bias analysis therefore mainly focus on GPT-3.5 and GPT-4. Notably, the word choice in the prompt can have a profound impact on the performance of LLMs, as shown in Table 3. Within our study, employing the label set 1/-1 results in a marked decline in the performance of GPT-3.5, and using the label set Pro/Con leads to the lowest observed outcomes in GPT-4. GPT-3.5 is particularly sensitive to negative phrasing; its performance degrades below that of random selection when prompted to identify the debate’s loser rather than the winner. In contrast, GPT-4 demonstrates much less sensitivity to such changes, showing only a minor decrease in performance. 4.2 Biases Analysis Our study explores biases present in GPT-3.5 using a balanced setting of DDO dataset. Additional analyses of the GPT-3.5 on the original unbalanced DDO data and analysis of GPT-4 are in Appendix A.3 and A.2, respectively. The experiments with an extra dataset that confirm our findings are presented in Appendix A.6. Positional Bias. Figure 2 compares the proportion of predictions labeled as "Con" between in472 (a) Shuffled A/B vs. Shuffled B/A (b) Shuffled P/C vs. Shuffled C/P (c) Shuffled 1/-1 vs. Shuffled -1/1 (d) Shuffled Pro/Con vs. Shuffled Con/Pro (e) A/B vs. B/A (f) P/C vs. C/P (g) 1/-1 vs. -1/1 (h) Pro/Con vs. Con/Pro Figure 3: Each subfigure’s legend delineates the Pro/Con label set across different verbalizer configurations. GPT-3.5 demonstrates a consistent lexical bias, which persists across shuffled positions aimed at counteracting positional bias, and in settings where Pro consistently precedes Con, except for the insignificant bias within P/C. Evaluators Size Acc F1 Rule-based 6058 67.53 46.68 LLaMA2-70B 1500 65.69 56.07 BERT + Structure 78.89 – BERT + Relation 1964 80.04 – GPT-3.5 1500 82.04 81.85 GPT-4 1500 86.22 86.01 Human 1 75 77.33 77.39 Human 2 75 78.67 78.15 Table 2: GPT-3.5 and GPT-4’s performance are on par with human performance and outperform the existing state-of-the-art BERT-based methods with fine-tuning (Li et al., 2020; Hsiao et al., 2022). The rule-based model predicts the winner as the side that concludes the debate. LLaMA2-70B has similar performance to the rule-based model. stances where the Con is positioned at the first candidate response and the instances where Con is placed as the second candidate response. It shows that GPT-3.5 systematically favors the candidate response in the second position across all tested verbalizer settings. The two-sided P-values of the two-proportion z-test consistently suggest the positional bias is significant. This finding confirms the second position preference of GPT-3.5 as reported by Wang et al. (2023b). On the unbalanced data that reflects the original distribution, we also Evaluators Verbalizer Outcome Acc F1 GPT-3.5 A/B Winner 82.04 81.85 P/C Winner 81.39 81.02 1/-1 Winner 72.08 68.24 Pro/Con Winner 81.86 81.60 A/B Loser 37.72 24.74 GPT-4 A/B Winner 84.49 84.49 P/C Winner 85.11 84.78 1/-1 Winner 86.22 86.01 Pro/Con Winner 79.72 78.16 A/B Loser 80.94 81.11 Table 3: The “Verbalizer" column lists Pro_label and Con_label sets, and the “Outcome" column shows whether GPT-3.5 and GPT-4 are tasked with identifying debate winners or losers. Bold formatting indicates the top-performing verbalizer choice, while italics highlight the least effective choice. investigate the changes in the counts of predicted Pros and predicted Cons between the settings with shuffled candidate response positions and fixed positions. The details are shown in Appendix A.2, and A.3, suggesting a consistent trend. Lexical Bias. GPT-3.5 is affected by the lexical choice of labels representing the two sides of a debate, as demonstrated by Figure 3. These differences highlight the inherent lexical bias of GPT-3.5 within the selected label set. GPT-3.5 prefers the label ‘B’(‘-1’) over ‘A’(‘1’), predicting Con as the 473 winner significantly more frequently when ‘B’(‘1’) represents Con as opposed to when ‘A’(‘1’) does, as shown in Figure 3. There is no significant lexical bias found within the P/C label set for GPT3.5. The Con/Pro label configuration, which swaps the position names of the two sides, could confuse LLMs about each label’s corresponding side, as the content of the debate usually reveals the actual position of each side. This ambiguity might contribute to the poorer performance observed in the Con/Pro label setting and raises questions about the inferred preference for the ’Con’ label. The analysis of lexical bias is further detailed in Appendix A.2 and A.3. Order Bias. GPT-3.5 exhibits a significant order bias, favoring the side that concludes the debate, as shown in experiments where the Pro_label consistently ranked as the primary response (Table 4). This bias is statistically significant across all verbalizer options. The results suggest an inherent tendency in LLMs to give more weight to the final arguments. Verbalizer End-Side # P-Pro # P-Con P-Value A/B Pro 389 253 < 0.001 Con 215 427 P/C Pro 408 245 < 0.001 Con 184 460 1/-1 Pro 238 409* < 0.001 Con 70 575 Pro/Con Pro 399 218 < 0.001 Con 248 426 Table 4: Analysis of GPT-3.5 predictions correlating with debate orders, using Chi-square tests for significance. "# P-Pro" and "# P-Con" indicate the counts of Pro and Con sides predicted as winners, respectively. The results reveal a significant association with order for all verbalizer choices. * here highlights the strong lexical bias for ‘-1’ that dominates the others. 5 Discussion Our research demonstrates that LLMs outperform current SOTA models in evaluating debates but are influenced by specific word choices, affecting their efficacy. We highlight LLMs’ embedded biases—positional, lexical, and order—offering insights for future LLM training enhancements. Despite attempts to neutralize positional bias by shuffling labels in Figs 3a and 3d, GPT-3.5 still exhibits a Pro bias, contradicting its lexical preference for ’B’(’Con’). This might suggest a confirmation bias-like tendency in GPT-3.5, favoring agreement with the debate topic. We further conduct experiments shuffling A/B with B/A and 1/-1 with -1/1 label sets, where each label randomly represents Pro or Con in 50% of cases, with positions also shuffled. Despite eliminating lexical and positional biases, results indicate a persistent Pro bias, detailed in Appendix Figure 6, pointing to an underlying tendency warranting further investigation. 6 Limitations The insights from our investigation, based on the examination of GPT-3.5 and GPT-4, indicate that the discerned behavioral patterns might be unique to these specific models and not necessarily extend to other language models with divergent architectures or training approaches. With the relentless advancement in language model technology and the anticipation of updated versions, the biases detected in GPT-3.5 and GPT-4 could become obsolete in subsequent iterations. Highlighting the significance of prompt types and training techniques on the efficacy of models, our research underlines the imperative for continued research to identify the optimal prompt types for various scenarios and the optimal training methods for reducing bias. Although various biases may interact and potentially counterbalance each other, leading to improvements, the intensity of distinct bias types can vary significantly across different contexts. Consequently, a prompt that appears to exhibit balanced bias in one scenario may manifest more pronounced bias under slightly altered conditions. References Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. Gpt-4 technical report. arXiv preprint arXiv:2303.08774. Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, and Zhiyuan Liu. 2023. Chateval: Towards better llm-based evaluators through multi-agent debate. Cheng-Han Chiang and Hung-yi Lee. 2023a. Can large language models be an alternative to human evaluations? arXiv preprint arXiv:2305.01937. Cheng-Han Chiang and Hung-yi Lee. 2023b. A closer look into using large language models for automatic evaluation. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 8928– 8942. 474 Esin Durmus and Claire Cardie. 2019. A corpus for modeling user and language effects in argumentation on online debating. arXiv preprint arXiv:1906.11310. Shangbin Feng, Chan Young Park, Yuhan Liu, and Yulia Tsvetkov. 2023. From pretraining data to language models to downstream tasks: Tracking the trails of political biases leading to unfair nlp models. arXiv preprint arXiv:2305.08283. Hangfeng He, Hongming Zhang, and Dan Roth. 2023. Socreval: Large language models with the socratic method for reference-free reasoning evaluation. Fa-Hsuan Hsiao, An-Zi Yen, Hen-Hsen Huang, and Hsin-Hsi Chen. 2022. Modeling inter round attack of online debaters for winner prediction. In Proceedings of the ACM Web Conference 2022, pages 2860–2869. Dongfu Jiang, Xiang Ren, and Bill Yuchen Lin. 2023. Llm-blender: Ensembling large language models with pairwise ranking and generative fusion. arXiv preprint arXiv:2306.02561. Jialu Li, Esin Durmus, and Claire Cardie. 2020. Exploring the role of argument structure in online debate persuasion. arXiv preprint arXiv:2010.03538. Jingyang Lin*, Hang Hua*, Ming Chen, Yikang Li, Jenhao Hsiao, Chiuman Ho, , and Jiebo Luo. 2023. Videoxum: Cross-modal visual and textural summarization of videos. Yen-Ting Lin and Yun-Nung Chen. 2023. Llm-eval: Unified multi-dimensional automatic evaluation for open-domain conversations with large language models. arXiv preprint arXiv:2305.13711. Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2023. Pretrain, prompt, and predict: A systematic survey of prompting methods in natural language processing. ACM Computing Surveys, 55(9):1–35. Quinn McNemar. 1947. Note on the sampling error of the difference between correlated proportions or percentages. Psychometrika, 12(2):153–157. Ramon Ruiz-Dolz, Stella Heras, and Ana García-Fornes. 2022. Automatic debate evaluation with argumentation semantics and natural language argument graph networks. arXiv preprint arXiv:2203.14647. Yunlong Tang, Jinrui Zhang, Xiangchen Wang, Teng Wang, and Feng Zheng. 2023. Llmva-gebc: Large language model with video adapter for generic event boundary captioning. arXiv preprint arXiv:2306.10354. Arun James Thirunavukarasu, Darren Shu Jeng Ting, Kabilan Elangovan, Laura Gutierrez, Ting Fang Tan, and Daniel Shu Wei Ting. 2023. Large language models in medicine. Nature medicine, 29(8):1930– 1940. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023. Llama 2: Open foundation and finetuned chat models. Yixin Wan, Jieyu Zhao, Aman Chadha, Nanyun Peng, and Kai-Wei Chang. 2023. Are personalized stochastic parrots more dangerous? evaluating persona biases in dialogue systems. Longyue Wang, Chenyang Lyu, Tianbo Ji, Zhirui Zhang, Dian Yu, Shuming Shi, and Zhaopeng Tu. 2023a. Document-level machine translation with large language models. arXiv preprint arXiv:2304.02210. Peiyi Wang, Lei Li, Liang Chen, Dawei Zhu, Binghuai Lin, Yunbo Cao, Qi Liu, Tianyu Liu, and Zhifang Sui. 2023b. Large language models are not fair evaluators. arXiv preprint arXiv:2305.17926. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837. Zhiyuan Zeng, Jiatong Yu, Tianyu Gao, Yu Meng, Tanya Goyal, and Danqi Chen. 2023. Evaluating large language models at evaluating instruction following. arXiv preprint arXiv:2310.07641. Daoan Zhang, Junming Yang, Hanjia Lyu, Zijian Jin, Yuan Yao, Mingkai Chen, and Jiebo Luo. 2024. Cocot: Contrastive chain-of-thought prompting for large multimodal models with multiple image inputs. arXiv preprint arXiv:2401.02582. Gechuan Zhang, Paul Nulty, and David Lillis. 2023. Argument mining with graph representation learning. In Proceedings of the Nineteenth International Conference on Artificial Intelligence and Law, pages 371–380. 475 A Appendix A.1 More details of the Dataset The dataset extends beyond textual debate content to audience votes across four evaluation criteria: making more convincing arguments, better conduct, use of reliable sources, and spelling and grammar proficiency. Consistent with prior research (Li et al., 2020; Hsiao et al., 2022), our analysis utilizes the criterion of "making more convincing arguments" for assessing debate outcomes. To ensure alignment with these studies and enhance comparability, we narrow our focus to debates with a definitive margin of victory—requiring a vote difference exceeding two—and limit our analysis to debates spanning three to five rounds, which represent the bulk of the dataset. Debates compromised by forfeits, identified either through explicit forfeit labels or instances of one side forfeiting a round, are omitted from consideration. The debates within the dataset have an average length of 1574.93 words, with the majority fit within the input length constraints of contemporary LLMs. Regarding audience engagement, the average vote counts for 3-round, 4-round, and 5-round debates stand at 10.05, 7.02, and 7.03, respectively. Furthermore, the average vote differences for these debate formats are 5.52, 4.69, and 4.79, indicating a clear preference in outcomes that facilitate our focused analysis on convincing arguments. The percentages of Con conclude the debates are 77.84%, 78.24%, and 78.13% for 3-round, 4-round and 5-round debates respectively. A.2 Additional Results of DDO Dataset in the Balanced Setting The detailed confusion matrices with various settings we experiment on balanced datasets can be found in Figure 7 for GPT-3.5 and in Figure 9 for GPT-4. Performance. We also test GPT-3.5 and GPT-4 on the same subset of human-annotated data. The accuracies achieved by GPT-3.5 and GPT-4 are 79.73% and 84.00% respectively. Positional Bias. For GPT-3.5, McNemar’s tests (McNemar, 1947) are also conducted for the settings with shuffled candidate response positions and fixed positions based on Table 5, and the results are all significant. Verbalizers ffixed_shuffled fshuffled_fixed χ2 P-Value A/B 25 86 33.52 < 0.001 P/C 16 205 161.63 < 0.001 1/-1 38 124 45.65 < 0.001 Pro/Con 34 79 17.92 < 0.001 Table 5: McNemar’s test demonstrates that all positional biases are significant within GPT-3.5. ffixed_shuffled indicates the number of debates predicted as Pro winning by the first verbalizer set but Con winning by the second verbalizer set. fshuffled_fixed indicates the number of debates predicted as Pro winning with shuffled positions but Con winning by GPT-3.5 with fixing Pro as the first candidate response. The direction of the positional bias presented by GPT-4 is also shown towards the second position, contradicting the finding of the first position favorite illustrated by Wang et al. (2023b). The two-sided p-value from the two-portion z-tests demonstrates that the positional bias in GPT-4 as shown in Figure 4 is also statistically significant. Lexical Bias. The difference in the significance of lexical bias within the A/B label set and P/C label set could be due to the alphabetical distance they have or due to their common usage. To discern the underlying cause, we further experiment with the M/N label set for they are alphabetically adjacent but not typically associated with sequential interpretation. The results, detailed in Figure 8, reveal minimal lexical bias within the M/N group, suggesting that the bias originates from conventional usage rather than alphabetic proximity. To further quantitatively assess the lexical bias in GPT-3.5, we employ McNemar’s test to analyze instances of concordances (both predict Pro or Con), instances of discordances (one predicts Pro and the other predict Con) of each flipping group as shown in Table 6. All results are statistically significant. 476 (a) A/B label set (b) P/C label set (c) 1/-1 label set (d) Pro/Con label set Figure 4: This figure illustrates the impact of positional bias on GPT-4 through the changes in the proportion of Predicted Con, shifting from when Con is fixed as the first candidate response to when it is positioned as the second. GPT-4 exhibits a positional bias towards the second candidate presented across all label set configurations. Verbalizers f12 f21 χ2 P-Value A/B vs B/A 59 178 59.751 < 0.001 P/C vs C/P 166 99 16.94 < 0.001 1/-1 vs -1/1 33 556 464.40 < 0.001 Pro/Con vs Con/Pro 147 298 51.24 < 0.001 Table 6: McNemar’s test demonstrates that all lexical biases are significant within GPT-3.5. f12 indicates the number of debates predicted as Pro winning by the first verbalizer set but Con winning by the second verbalizer set. f21 indicates the number of debates predicted as Pro winning by the second verbalizer set but Con winning by the first verbalizer set. The positions of verbalizers in the prompt are shuffled. Similar to GPT-3.5, GPT-4 also exhibits lexical bias towards ’B’, ’-1’ and potentially ’Con’ within the A/B, 1/-1, and Pro/Con label set. However, GPT-4 favors ’C’ over ’P’ significantly. McNemar’s tests of lexical bias for GPT-4 are shown in Table 8 for GPT-4. Verbalizers ffixed_shuffled fshuffled_fixed χ2 P-Value A/B 6 54 36.82 < 0.001 P/C 9 39 17.52 < 0.001 1/-1 15 45 14.02 0.002 Pro/Con 11 63 35.15 < 0.001 Table 7: McNemar’s test demonstrates that all positional biases are significant within GPT-4. ffixed_shuffled indicates the number of debates predicted as Pro winning by the first verbalizer set but Con winning by the second verbalizer set. fshuffled_fixed indicates the number of debates predicted as Pro winning with shuffled positions but Con winning by GPT-3.5 with fixing Pro as the first candidate response. Order Bias. The Chi-squared test to show the association between the GPT-4’s predictions and the sides that conclude the debates are shown in Table 9. Same as GPT-3.5, across all verbalizer choices, the order biases presented by GPT-4 are also statistically significant. In addition, the magnitude of the order bias within GPT-3.5 is much stronger than GPT-4, as measured by the Phi Coefficient. Stance Bias. Our hypothesis regarding stance bias is less evident in GPT-4, as it becomes overshadowed by lexical bias after positional bias is mitigated through shuffled positions. We conduct two experiments, employing shuffled label sets and positions under the A/B and 1/-1 configurations, as depicted in Figure 6. The findings reveal a contrasting residual bias in GPT-4 compared to GPT-3.5, after addressing positional, lexical, and order biases. A.3 Extension to Unbalanced Setting of DDO Dataset Positional Bias. We additionally explore variations in "Pro" and "Con" predictions when alternating between shuffled and fixed candidate response placements in unbalanced data that reflects the original distribution. These observations, detailed in Table 10, highlight a consistent pattern. 477 (a) Shuffled A/B vs. Shuffled B/A (b) Shuffled P/C vs. Shuffled C/P (c) Shuffled 1/-1 vs. Shuffled -1/1 (d) Shuffled Pro/Con vs. Shuffled Con/Pro Figure 5: This figure illustrates the impact of lexical bias on GPT-4 through the changes in the portion of Predicted Con from switching the verbalizers for Pro and Con. Verbalizers f12 f21 χ2 P-Value A/B vs B/A 4 36 24.03 < 0.001 P/C vs C/P 6 134 115.21 < 0.001 1/-1 vs -1/1 11 166 133.99 < 0.001 Pro/Con vs Con/Pro 6 581 561.29 < 0.001 Table 8: McNemar’s test demonstrates that all lexical biases are significant within GPT-4. f12 indicates the number of debates predicted as Pro winning by the first verbalizer set but Con winning by the second verbalizer set. f21 indicates the number of debates predicted as Pro winning by the second verbalizer set but Con winning by the first verbalizer set. The positions of verbalizers in the prompt are shuffled. Lexical Bias. The same experiments applied to the unbalanced dataset with the original distribution yield consistent results for the direction of lexical bias in GPT-3.5 (see Table 11), except for the non-significance P/C set. A.4 Enhancing Bias Reduction through Prompt Engineering Winning Definition We find no significant difference in the models’ performance between giving a definition and not giving a definition in the prompt in our preliminary experiments. Therefore, we stick with the more concise version that we illustrate in the main body of the paper. We spectacle it is because our definition of ‘winning’ is consistent with the common understanding of the term. LLM-Eval In a further step, we direct the LLMs to provide reasons for their judgments before they generate the outcomes using the prompt template shown in Table 12. Such a method is reported by Wang Verbalizer End-Side # P-Pro # P-Con Phi Coeff. P-Value A/B Pro 359 291 0.099 < 0.001 Con 293 356 P/C Pro 277 372 0.076 0.006 Con 227 419 1/-1 Pro 286 362 0.074 0.007 Con 238 411 1/-1 Pro 203 445 0.069 0.001 Con 162 486 Table 9: GPT-4 predictions and debate conclusions association analysis with significance determined by Chi-square tests. # P-Pro and # P-Con denote the number of predicted Pro sides and Con sides as the winner by the model, respectively. 478 (a) Double shuffled A/B in GPT-3.5 (b) Double shuffled 1/-1 in GPT-3.5 (c) Double shuffled A/B in GPT-4 (d) Double shuffled 1/-1 in GPT-4 Figure 6: Both the assignment of labels within each label set and the positions of labels are shuffled. These matrcies demonstrate that after eliminating the influence of the order bias, positional bias, and lexical bias, GPT-3.5 shows a stance bias towards the Pro stance, while GPT-4 shows a stance bias towards the Con stance. et al. (2023b) to be able to reduce the positional bias. We do a pilot experiment using GPT-3.5 with a single A/B label set to see if the effect comes from ’reducing’ the bias or from providing a bias in the opposite direction and thus counteract it. As the results shown in Figure 10, GPT-3.5 exhibits a greater bias towards Pro when generating analysis compared to when positions are shuffled to eliminate the positional bias. Therefore, it is more likely that prompting GPT-3.5 to generate the analysis first introduces a new bias towards Pro, which is in the opposite direction of the positional bias, since Con is consistently positioned as the second candidate response. However, arriving at a definitive answer necessitates further experimentation, which we defer to future research. A.5 Debate Example The debate example can be found in Table 13, 14 and 15. A.6 Extension to IQ2 Dataset in the Balanced Setting with GPT-4 There are 108 debates in the IQ2 dataset. The average number of words contained in each debate, including all contexts, is 17579, exceeding the current maximum length constraint of GPT-3.5 (16k tokens). Only 24 debates in IQ2 have a word count below this limit, which would result in a sample size too small to derive meaningful results. While excluding the context from the host or audience involved could reduce the average length of each debate to 12801 words, it could also lead to a lack of context in some parts of the debaters’ conversation. Therefore, we only analyze IQ2 dataset on GPT-4 with a 32k token limit. We again use the balanced setting as we explained in the Methodology section. Based on the smallest 479 (a) A/B label set (b) P/C label set (c) 1/-1 label set (d) Pro/Con label set (e) Shuffled A/B label set (f) Shuffled P/C label set (g) Shuffled 1/-1 label set (h) Shuffled Pro/Con label set (i) Shuffled B/A label set (j) Shuffled C/P label set (k) Shuffled -1/1 label set (l) Shuffled Con/Pro label set Figure 7: This figure displays confusion matrices for GPT-3.5 with various Pro_label/Con_label sets. The matrices in the first row correspond to scenarios where the Pro_label consistently occupies the leading position in the instruction prompt, potentially introducing a positional bias. In contrast, the second and third rows present matrices from experiments where the positions of Pro_label and Con_label are shuffled, aiming to mitigate this bias for pure comparisons between switching corresponding label verbalizers of Pro and Con. category (Con end with Con win) among the four conditions, we sampled IQ2 to be 13 for pro/con side end with pro/con win, a total of 52 samples. Positional Bias. GPT-4 exhibits consistent positional bias on the IQ2 dataset, as shown in Table 16. The second position is preferred over the first position, proved by the higher proportion of Predicted Con when Con is positioned as the second candidate response. Lexical Bias. We find consistent lexical biases in the IQ2 dataset with GPT-4, as shown in Table 17. ‘B’(‘-1’) is preferred over ‘A’(‘1’), indicated by the higher proportion of Predicted Con when ‘B’(‘-1’) represents Con compared to when ‘A’(‘1’) represents Con. Order Bias. GPT-4 exhibits order bias on the IQ2 dataset, which is also consistent with our finding on DDO dataset, as demonstrated by Table 18. The ending side of a debate is more likely to be predicted as the winner. 480 (a) M/N label set (b) Shuffled M/N label set (c) Shuffled N/M label set Figure 8: Lexical bias of M/N label set in GPT-3.5 Verbalizer Position # Pred Pro # Pred Con A/B Fixed 533 954 Shuffled 610 881 P/C Fixed 494 1000 Shuffled 727 769 1/-1 Fixed 230 1267 Shuffled 340 1155 Pro/Con Fixed 517 977 Shuffled 590 904 Table 10: Upon fixing and shuffling the positions of labels set as candidate responses in an unbalanced dataset that replicates the original data distribution, the analysis systematically reveals a positional bias towards the second position in GPT-3.5. Verbalizer # P-Pro # P-Con A/B 610 882 B/A 755 734 P/C 727 769 C/P 717 835 1/-1 340 1155 -1/1 943 553 Pro/Con 590 904 Con/Pro 877 619 Table 11: Upon flipping label sets and shuffling their positions in an unbalanced dataset which replicates the original data distribution, the analysis systematically reveals lexical biases in GPT-3.5 that align directionally with those identified in a balanced dataset. # P-Pro and # P-Con denote the number of predicted Pro sides and Con sides as the winner by the model, respectively. 481 (a) A/B label set (b) P/C label set (c) 1/-1 label set (d) Pro/Con label set (e) Shuffled A/B label set (f) Shuffled P/C label set (g) Shuffled 1/-1 label set (h) Shuffled Pro/Con label set (i) Shuffled B/A label set (j) Shuffled C/P label set (k) Shuffled -1/1 label set (l) Shuffled Con/Pro label set Figure 9: This figure illustrates the impact of lexical bias on GPT-4 through confusion matrices for various Pro_label/Con_label sets. The matrices in the first row correspond to scenarios where the Pro_label consistently occupies the leading position in the instruction prompt, potentially introducing a positional bias. In contrast, the second and third rows present matrices from experiments where the positions of Pro_label and Con_label are shuffled, aiming to mitigate this bias. (a) A/B label set (b) Shuffled A/B label set (c) A/B label set with Evaluation Figure 10: Analysis of the effect of generating analysis on reducing positional bias. 482 Content Prompt The content of the whole debate: The current speech in the debate is from the user {Side1_label}: [The content of the side1] The current speech in the debate is from the user {Side2_label}: [The content of the side2] The current speech in the debate is from the user {Side1_label}: [The content of the side1] . .. Vanilla Prompt Assume you are a debate evaluator, there are two participants in this debate. Given the above context of the whole debate. Please give the decision on which participant is the winner, you only need to give the character(number) of either {Side1_label}, or {Side2_label}. {Side1_label} means user {Side1_label} wins. {Side2_label} means user {Side2_label} wins. Please only give the result without any other words. Eval Prompt Assume you are a debate evaluator, there are two participants in this debate. Given the above context of the whole debate, please provide a comprehensive explanation of your evaluation, avoiding any potential bias and ensuring that the order in which the responses were presented does not affect your judgment. Finally, decide who wins the debate. Output with the following format: Evaluation: <your comprehensive evaluation explanation here> <winner ({Side1_label} or {Side2_label})> The final line of your output should contain only one word: {Side1_label} if you conclude that user {Side1_label} wins, or {Side2_label} if you conclude that user {Side2_label} wins. No tie or inconclusive results are allowed. Table 12: The "Vanilla Evaluation" prompts the model to predict results directly based on the content prompt. The "Eval Prompt" mandates the model to evaluate arguments for both sides and provide a holistic assessment based on the “Content prompt”. 483 The current speech in the debate is from the user {Side1_label}: Thank you, to whoever accepts this challenge, I look forward to this debate. Now, to start off, I will go over some definitions. Morality: [conformity to the rules of right conduct] Evil: [morally wrong or bad; immoral; wicked] Right: [in accordance with what is good, proper, or just] Atheist: [a person who denies or disbelieves the existence of a supreme being or beings.] Theist: [the belief in one God (in this debate I am referring to the Christian God) as the creator and ruler of the universe, without rejection of revelation] http://dictionary.reference.com... To begin with, what makes something wrong or right? The law of a specific nation? Yourself? This question was simple when we were kids, for instance if John hit Sue then John was wrong and then gets in trouble. But as we get older this topic becomes more complicated. For instance, who said it was wrong for John to hit Sue? Who said it was wrong for someone to steal, cheat, lie, murder, torture, rape? The point is, in some cultures its acceptable and even encouraged to do these things. Just look at Hitler, Stalin or any other evil dictator/government. Anyone can read about the trattorias acts that have been accurately recorded through out history. But here’s the thing, all these men committed terrible acts without believing that they themselves were ’wrong.’ For example, Hitler murdered 10 million people for ethnic cleansing reasons, and through out his entire life as ruler over Germany, never once thought he was doing an immoral act. In fact, he believed he was doing just the opposite, Hitler thought, that through killing 10 million people he was "glorifying the Father Land" and doing the world a huge favor. Plus, Hitler not only was evil himself, but he had a whole nation behind him. Millions swore true allegiance to him, and his ideas. Now, given the above paragraph, it is impossible to say that Hitler’s actions were immoral under an Atheistic world view. Why? Because in an Atheistic world view there is no God to judge such acts. The only thing that can judge Hitler in an Atheistic world are other people, but what if every single person on the planet became a Nazi. So there must be an ultimate judge, or over seer, in order for Hitler’s actions to be held accountable. So, if one wants to debate that morality is defined by the law of a specific nation, or ones ability to justify there own actions, then the voters and my oppenent should be able to see clearly that an Atheistic world view can not account for morality. Please answer the following questions in your next argument. How can Atheism account for morality? And what will you base you morality off of, if not God? The current speech in the debate is from the user {Side2_label}: This should be an interesting debate.... I love this sort of debate, ie. what are morals and why do we have them sort of thing "Because in an Atheistic world view there is no God to judge such acts. The only thing that can judge Hitler in an Atheistic world are other people, but what if every single person on the planet became a Nazi." First If everyone was a Nazi there would be no problem with Nazism because they wouldn’t have a WW2 Repeat due to the fact that everyone would agree.... Second the people/self being the judge is what I am arguing. The Ultimate judge is humanity. The concept of the Other best applies here. When we look at another acting, we judge them. When we look at ourselves acting the same way, we remember that judgment. We don’t have to actually see someone else, but imagine that there is that Other judging us. Also if we look to the roots of morality we don’t find God, but humanity. Why is it immoral to kill? Because if it was allowed then people would freely kill us. If we look at what we would think had we seen the event happen, or been the recipient, we will agree that the event is bad. From All this we can take morality to really be a golden rule of sorts. Judge ourselves as we would judge others. Do to others as we would have done to ourselves. Neither of these concepts require God, in fact they function just as well with a God as without. "How can Atheism account for morality? And what will you base you morality off of, if not God?" I have already sort of answered this but I will do it again for sake of order and clarity. Atheism accounts for morality via Humanity. The roots of our morals exist in an atheist society, they were created not by God but by human conscience and need for order and safety. I don’t want to retype the explanation of the Other(which was admittedly pretty bad) but that is a general concept of how atheism can account for and provide a base for morality. The golden rule is another base for morality. Morals Exist for human safety primarily. Why is it immoral to kill? because we don’t want to be killed. God is not the source of Morals, and therefore an atheistic world view can account for morals just as well as a theistic world view can. Table 13: The first round of a debate example. 484 The current speech in the debate is from the user {Side1_label}: Thanks for your response. Metz said, "Second the people/self being the judge is what I am arguing. The Ultimate judge is humanity." To say that humanity is the ultimate judge is not saying anything. For instance, in one part of the world it may be morally acceptable to murder your wife if she disobeys her husband. In another part of the world that particular act may be unacceptable. But, which view of the issue is right? Who decides it? The point is, that to base what is considered right or wrong off humanity is ridiculous, since humanity can not agree on an absolute, universal view of what is considered moral or immoral. Since this is true anything could be acceptable, such as murder, rape, lying cheating, abusing, drugs ect... Why? Again, because morality is totally arbitrary under the jurisdiction of humanity, since all humans have different standards of morals. And since all humans have different standards on morals, then this just illustrates my point, there must be a God to judge people’s actions. In an Atheistic world there are no absolutes for morals. Also, if there are seven hundred billion people on the planet and half say gay marriage is right but the other half say gay marriage is wrong, then who decides? What makes one view right and the other wrong? This question can not be answered in an Atheistic universe, since all the opinions given by the people are different. So, humanity, can not, on it’s own make a rational decision, dealing with morality. This is why there must be an objective standard for people to base their judgement off of. Again, under an Atheistic world view morals can not be accounted for. "Why is it immoral to kill? Because if it was allowed then people would freely kill us." What about the people who could care less about whether or not death is a reaction of killing another person. For instance, a man could be very enraged at a particular moment, so, what if he decides to kill everyone in the town regardless of wether he dies that day or lives, in the process of committing all the murders he can. Not only that flaw, but there are people who murder people all the time without getting caught, or getting killed back in the process. So for these murderers there is no incentive what so ever for them to not go out and murder another human being. Plus, saying that its immoral to murder because you will get murdered back is not even answering the question of why it is immoral to murder another human being. You need to tell me why murder is wrong in the first place. Metz said "Do to others as we would have done to ourselves." Its amazing how Atheists think, they will always claim there world has morals, and do things such as feed the poor and help many in need ect... These are all good things, its just the principles in which these acts are found, are in the Bible. You see, Atheists take morals from the Christian world view but do not acknowledge the basis of which those morals came from, which is ultimately God. Now I’m not saying that all of the morals in an Atheistic world view are taken from Christianity, but a lot of them are, Along with many other religions that acknowledge the presence of a god. Metz said, "Also if we look to the roots of morality we don’t find God, but humanity." Prove to me that we find humanity, don’t just say it, prove it or at least tell expand on that reasoning. I do not agree with that statement at all and until you try to prove it it is just your word against mine. Which is exactly what an atheistic world view consists of, one man’s word against another, which is no absolutes or universal ideas I also encourage the voters to check out this link, it will help illustrate my point. Thank you charles 15 Good Luck The current speech in the debate is from the user {Side2_label}: "To say that humanity is the ultimate judge is not saying anything. For instance, in one part of the world it may be morally acceptable to murder your wife if she disobeys her husband. In another part of the world that particular act may be unacceptable. But, which view of the issue is right? Who decides it? The point is, that to base what is considered right or wrong off humanity is ridiculous, since humanity can not agree on an absolute, universal view of what is considered moral or immoral." But this accounts for morality... it just doesn’t account for my opponents version of morality. Also this really doesn’t say why Theism can actually account for universal morals. People disagree on religion. If Morals were universal then the scenario my opponent laid out wouldn’t exist. But yet he claimed it does.... So what my opponent is essentially arguing is that Morality doesn’t work. "In an Atheistic world there are no absolutes for morals." Ok... Same thing in a Theist world. But lets look at the topic for a moment shall we? It never says Atheism needs to account for universal morals, just morals. This really doesn’t attack my case at all. The Definition of Morality my opponent gives is "conformity to the rules of right conduct" But it never says these rules must be universal. If we have laws they do not hold everyone accountable worldwide, likewise morality doesn’t have to be universal. "Again, because morality is totally arbitrary under the jurisdiction of humanity, since all humans have different standards of morals." That is how I argue we can account for morality. If we want to find acceptable morality we need people to disagree, this is how democracy works and how morality would inevitable work. And yet again, Theism is different how? "Also, if there are seven hundred billion people on the planet and half say gay marriage is right but the other half say gay marriage is wrong, then who decides? What makes one view right and the other wrong? This question can not be answered in an Atheistic universe, since all the opinions given by the people are different. So, humanity, can not, on it’s own make a rational decision, dealing with morality." Oh yeah... and God is doing so much better? The reason so many people disagree is primarily religion...granted there are other factors but religion and tradition are massive players. "So, humanity, can not, on it’s own make a rational decision, dealing with morality" Well actually we live in a largely theist world... so what you meant to say was " So, God and religion cannot make a rational decision dealing with morality" "This is why there must be an objective standard for people to base their judgement off of" yeah, its called survival mate.... people see other and judge themselves... People tell others that a certain action is wrong because they don’t want what they see done to other done to themselves... "You need to tell me why murder is wrong in the first place." Its wrong because people say its wrong... you essentially made my argument for me there; "its immoral to murder because you will get murdered back" it isn’t moral to Murder because you are ending that persons existence. I don’t want to end my existence so I tell people that it is wrong to kill. If I wanted to be killed would I say it is wrong to kill? "You see, Atheists take morals from the Christian world view but do not acknowledge the basis of which those morals came from, which is ultimately God" Um... Alright... The First appearance of the golden rule was I believe in the Analects of Confucious... Not the bible. Also it really doesn’t matter where the Morals came from as long as an Atheist world can account for them... I personally have a justification for all my moral opinions that has nothing to do with god but with how I perceive humans. "Metz said, "Also if we look to the roots of morality we don’t find God, but humanity." Prove to me that we find humanity, don’t just say it, prove it or at least tell expand on that reasoning." That I will be glad to do.... Name any generally accepted moral principle and I will show how it can be traced back to humanity. Also my opponent again makes the mistake of saying Atheism cannot account for UNIVERSAL MORALS, but sadly neither can Thiesm as we have seen and that is not the subject of this debate. Lets do an example of morality being human using the Moral principle that killing is wrong. 1. Humans don’t want to be killed 2. People, as a general rule, want to do what they feel is right. 3. Therefore people(in general), because they don’t want to be killed, have said that killing is wrong 4. Therefore it is generally accepted among people killing is wrong 5. Hence killing is considered an Immoral Act. Justification behind 1-5. 1. The Urge for Survival in all things is primary, it has been seen through the existence of life 2. The concept of the conscious tells us that we want to do the right thing. So people are deterred by the idea that what they may be doing is wrong. 3. Combination of 1&2 plus the fact that people made this decree to create the deterrence I mentioned in 2 4. A summary of 4 as a general rule 5. Putting the concept of right/wrong into Morals Thank you, Matt Good luck to my opponent, and I urge everyone to look critically at all arguments Table 14: The second round of a debate example. 485 The current speech in the debate is from the user {Side1_label}: Metz said, "Also this really doesn’t say why Theism can actually account for universal morals. People disagree on religion. If Morals were universal then the scenario my opponent laid out wouldn’t exist." I thought I made this clear in my opening statement: the God I am referring to is the Christian God. So, when you say that a theistic world view can not account for morality, because of all the different religions, then yes I would agree with you, because if there are many different religions that judge humans, then there would not be one standard to which morality is based upon. So, when I mention God I am only referring to the Christian God. Now that my view on the issue has been restated, any argument used by Metz (con), about why a theistic universe can not account for morality either; because of all the different God’s derived from different religions, will be irrelevant. Since I am only referring to One religion, which is Christianity. And since there is just one God then there is only one moral standard, thus God can account for what is wrong or right. "It never says Atheism needs to account for universal morals, just morals." Okay, lets have it Metz’s way, Atheism does not need to account for universal laws, just morals in general, very well. If there are no universal morals that prohibit certain acts of crime such as rape, murder, polygamy, theft, ect... then why am I obligated to obey those morals? Why can’t I just abide by my own moral standards, since there are know Universal ones? For instance, I could think that its just fine to murder, rape, steal ect... because that’s what I believe is right. So if there are just MORALS, to be defined by anybody, and no UNIVERSAL MORALS then who is to say that my morals are wrong? Whose to say anything is wrong for that matter? Once again the argument for an Atheistic world view on morals collapses on itself because it can not account for what is truly right or wrong. Metz said, "Its wrong because people say its wrong..." this quote is in response to me asking why murder is wrong. So are you saying that if the majority of the human population say that gays should not be aloud to marry, then that is automatically the moral standard? This is exactly my point, if a moral act is defined by what people say is moral then anything from the act of murder to a little white lie must be accepted by humanity. For instance, I could say murder is right because I said so. Also, a real life example is, ’people’ started ’saying’ that Jews should be considered sub human and thrown into concentration camps, but did this make it right? No, of course not. You see, I can say that Hitler was wrong because God commands it in the Bible, "thall shall not murder," its the 6th commandment. But, the best that my opponent can say, is, "Hitler’s acts of genocide were immoral because the Jews were being murdered against there own will." Well, my opponent’s statement just begs the question, So? Who says the Jews have a right to live in the first place? After all, millions were saying that Jews did not have the right to live. So which side is right, and why? For, to simply say that Hitler was wrong because he murdered Jews against there will is NOT answering the WHY? It only states a mere fact. Another example, John Locke, a well known philosopher who came up with idea of the Social Contract, this contract was to ensure that every human being was born with the right to live. Now, the question I have for Locke’s thinking, along with anyone else who agrees with him, is this, WHY? Why are humans born with the right to live? I do not see a logical answer without God in the equation. 1. Humans don’t want to be killed. 2. People, as a general rule, want to do what they feel is right. 3. Therefore people(in general), because they don’t want to be killed, have said that killing is wrong 4. Therefore it is generally accepted among people killing is wrong 5. Hence killing is considered an Immoral Act. Again, not only does this example have nothing to do with WHY murder is wrong. But, what you have described here is Western Civilization for the past 200 years or so, ONLY. This certainly is not the case in the Philippines, the Middle East, or any other extremely violent area in the world. This totally disproves your point above. And not only that, but what about in past history, such as the Dark Ages where many people considered murder to be a normal act, in order to get food or money so they could fill there bellies. So, when murder became an act that was generally accepted among the people, such as in the Philippines, the Middle East, and any other extremely violent areas in the world or from times in the past, such as the Dark Ages, is it then morally acceptable to murder? I see no reason why not, under an Atheistic universe. In conclusion, I still believe that my opponent has failed to answer the why for his reasoning? For instance, everyone knows that people don’t want to be murdered. But the question I am asking is, why is it wrong for people to be murdered? To say because people don’t want to be murdered is not answering the question. Because why should a murderer care about what his or her victim wants if its just a question of morals and not universal morals? Also my opponent argue that a theistic world view can not answer for this question either. Well, that isn’t answering the question, that’s just pointing fingers. As I have said, I am a Christian and will be basing my arguments off a single religion and a single God. Now, my opponent may take this as a opportunity to criticize my religion like he did in his last argument, some what. If my opponent starts to argue that Christianity is not perfect and why should God be the ultimate judge this is still not answering the question of why anything is right or wrong to do anything. Again its just pointing fingers. Now this is something I have only touched on a little, I can say something is wrong or right because I believe there is an ultimate judge, God. This means there are universal laws of morality, that are absolute, and everyone must abide by them. In an Atheistic universe the only thing that can judge morality is humanity which I have proved is inconsistent and ultimately can not account for morality at all. Again I encourage the voters to listen to the video above it really illustrates my point. My dad also had a personal relationship with Dr. Bahnsen (the man debating in the video). My dad told me that after the debate between Bahnsen (Christian) and Stein (Atheist) they continued debating each other through emails and letters, after a couple weeks of going back and forth with their arguments Stein eventually wrote "I don’t really have any answers for you, but I’m just not ever going to agree with you." Please answer the fallowing questions... 1)Why should a murderer care about what his or her victim wants if its just a question of morals? 2)Why are humans born with the right to live? I do not see a logical answer without God in the equation. 3)Are you saying that if the majority of the human population say that gays should not be aloud to marry, then that is automatically the moral standard? Thank you, charles15 The current speech in the debate is from the user {Side2_label}: I will start with the three questions my opponent proposed to me at the end of his last argument. 1)Why should a murderer care about what his or her victim wants if its just a question of morals? There are, obviously exceptions to my rule of moral deterrence. But remember my proof established it as a general rule. This has nothing to do with Atheism at all, when someone murders someone they are not in a state of mind that would disregard any moral background whatsoever. Even if we assume a theist stance, these people have committed a sin, so therefore God as much fails to uphold morals as would Atheism. Also the Psychological consequences would be felt later as philosopher and psychologist Fyodor Dostoevsky laid out in his book Crime and Punishment. 2)Why are humans born with the right to live? I do not see a logical answer without God in the equation. I hate to say this but its the shocking truth... We are born with the right to live because we have a will to live. If nobody wanted to vote would it be considered a right? This will to live is also not traceable to god, but to the fact that humans are just animals with the ability to reason. Unless my opponent wishes also to deny evolution and biological fact then this has to be accepted. The most primal instinct of live is to preserve itself. This is where morals come from as I have repetedly argued. Humans judging others and therefore judging themselves. 3)Are you saying that if the majority of the human population say that gays should not be aloud to marry, then that is automatically the moral standard? This is Mob rule, not necessarily morality. But not to criticize to much but I have that the same would be said of God. If the Bible says it then its wrong, which seems to be a common belief about gay marriage. As I said The base of Morality is humans, Gay marriage does not threaten anybody, so it is therefore it is not sought to prevent like killing would be. People Judge others in Gay Marriage but it does not affect them so the link between natural morals is flawed. A society may come to the belief that gay marriage is immoral, but it is not intrinsically immoral, and this seems to be what is happening in the world today. Now on to the remaining arguments: "Since I am only referring to One religion, which is Christianity. And since there is just one God then there is only one moral standard, thus God can account for what is wrong or right." This really doesn’t mean that everyone would follow this God, so are these people immoral? People believe do different extents, and so therefore have different morals even assuming the same God and religious texts and Church structure. In order for God to be as great a source for morals as my opponent claims we would need to abandon any remaining Autonomy and become almost robotic in our beliefs, an act which is, ironically, immoral in either world. "So if there are just MORALS, to be defined by anybody, and no UNIVERSAL MORALS then who is to say that my morals are wrong? " Not defined by anybody, defined by humanity. Humans Judge, you are judged by your fellows, you judge others and so judge yourself. Every step of the way there are checks. " If there are no universal morals that prohibit certain acts of crime such as rape, murder, polygamy, theft, ect... then why am I obligated to obey those morals? Why can’t I just abide by my own moral standards, since there are know Universal ones? For instance, I could think that its just fine to murder, rape, steal ect... because that’s what I believe is right." First, I Never said Atheism CAN’T account for universal morals merely that it was not my burden to prove that it did. Also, you can have your own moral standards, I know many people that have there own and are not killers, for example I think we have a moral obligation to fairness and to help people, I have friends that have a more sink or swim attitude. These morals can be relative, this is part of what shapes humanity, to accept that all morals are dictated to us really destroys that humans element. However when we get into killing, people judge more carefully, people are afraid. For the sake of protection and for moral order HUMANS establish moral rules, such as that against killing. Atheism can account for Morality because it was humans all along that accounted for morality. "Who says the Jews have a right to live in the first place? " They do... They have a will to live that is as strong as that of any other. This turns Life into a right intrinsic of humanity. Thus when the Jews were killed Hitler was taking an intrinsic right and the act was thus, immoral. I already addressed the other problem at the beginning. "everyone knows that people don’t want to be murdered. But the question I am asking is, why is it wrong for people to be murdered?" You gave me the answer right there. This bring me back to the same Will to Live argument. It is wrong because people have a will to live. Because they have this will it becomes a recognized right to live. Thus when someone violates this right the act is immoral in most circumstances(there are exceptions to every moral idea). Voters, When you are reading this debate you need to think about whether or not you would logically do some of the things my opponent has said in his examples, and whether you would want them done to yourself. You also must recognize that Murder’s generally have an altered or disturbed state of mind that could be influenced by such things as Alcohol that means in Either world these people don’t respect morals. The key question here is: Did my opponent prove that without God morals COULD NOT exist? Or did I prove that morals COULD exist in such a world. Remember the resolution asks could, which means "is it possible" Thanks, Metz Table 15: The third round of a debate example. 486 Verbalizer (Pro/Con) Positions Predicted Con Proportion A/B Con Second 94.23% A/B Pro Second 34.62% B/A Con Second 34.62% B/A Pro Second 3.85% Table 16: GPT-4 shows positional bias on IQ2 with 52 balanced samples. Verbalizer (Pro/Con) Positions Predicted Con Proportion A/B Con Second 94.23% B/A Con Second 34.62% A/B Pro Second 34.62% B/A Pro Second 3.85% 1/-1 Con Second 65.38% -1/1 Con Second 42.31% Table 17: GPT-4 shows lexical bias on IQ2 with 52 balanced samples. Verbalizer End-Side # P-Pro # P-Con 1/-1 Pro 11 15 Con 7 19 Table 18: Order bias shown by GPT-4 on IQ2 with 52 balanced samples. 487
2024_short
44
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 488–500 August 11-16, 2024 ©2024 Association for Computational Linguistics Fine-Tuned Machine Translation Metrics Struggle in Unseen Domains Vilém Zouhar1∗ Shuoyang Ding2 Anna Currey2 Tatyana Badeka2 Jenyuan Wang2 Brian Thompson2† 1ETH Zürich 2AWS AI Labs [email protected] Abstract We introduce a new, extensive multidimensional quality metrics (MQM) annotated dataset covering 11 language pairs in the biomedical domain. We use this dataset to investigate whether machine translation (MT) metrics which are fine-tuned on humangenerated MT quality judgements are robust to domain shifts between training and inference. We find that fine-tuned metrics exhibit a substantial performance drop in the unseen domain scenario relative to both metrics that rely on the surface form and pre-trained metrics that are not fine-tuned on MT quality judgments. 1 Introduction Automatic metrics are vital for machine translation (MT) research: given the cost and effort required for manual evaluation, automatic metrics are useful for model development and reproducible comparison between research papers (Ma et al., 2019). In recent years, the MT field has been moving away from string-matching metrics like BLEU (Papineni et al., 2002) towards fine-tuned metrics like COMET (Rei et al., 2020), which start with pre-trained models and then fine-tune them on human-generated quality judgments. Fine-tuned metrics have been the best performers in recent WMT metrics shared task evaluations (Freitag et al., 2022, 2023) and are recommended by the shared task organizers, who go so far as to say, “Neural fine-tuned metrics are not only better, but also robust to different domains.” (Freitag et al., 2022). Given the growing popularity of fine-tuned metrics, it is important to better understand their behavior. Here, we examine the question of domain robustness of fine-tuned metrics. Fine-tuned metrics contain extra parameters on top of the pretrained model which are initialized randomly (or to zero) and then fine-tuned on human-generated MT ∗Work done during an internship at Amazon. †Corresponding author COMET BLEURT UniTE BERTScore PRISM ChrF BLEU TER 0.0 0.3 Kendall’s ø WMT Bio WMT Bio WMT Bio Fine-tuned metrics have lower correlation on biomedical domain than WMT … despite other metrics having higher correlation on the biomedical domain Figure 1: Automatic machine translation metric performance on the WMT and biomedical domains, averaged across metric types (see Figure 2 for full results). quality annotations. The primary source of those annotations is prior WMT metrics shared tasks, and domains in WMT are often carried over from year to year (e.g. news). This raises the question: are fine-tuned metrics in fact robust across any domain (including domains not seen in training)? Or can their apparent strong performance be attributed in part to the artificially good domain match between training and test data? To answer these questions, we first collect human multidimensional quality metrics (MQM) annotations in the biomedical (bio) domain. Vocabulary overlap and error analysis suggest that this new dataset is distinct from the domains used in WMT. This data covers 11 language pairs and 21 translation systems, with 25k total judgments. In addition to the MQM annotations, we also create new high-quality reference translations for all directions. We release this data publicly, along with code for replication of our experiments.1 Next, we examine how different types of metrics perform on our new bio test set relative to the WMT test set. We find that fine-tuned metrics have substantially lower correlation with human 1github.com/amazon-science/bio-mqm-dataset 488 Architecture Metrics Surface-Form score tgt ref Metric BLEU CHRF TER Pre-trained+Algorithm score src tgt ref Model Metric BERTSCORE PRISM Pre-trained+Fine-tuned score src tgt ref LLM Metric COMET UNITE BLEURT Pre-trained+Prompt score src tgt ref LLM Metric GEMBA AUTOMQM Table 1: Metric types considered in this work. The components have trainable parameters while use handcrafted heuristics or algorithms and decodes from a language model. The ref input is omitted in the case of reference-free metrics (i.e. quality estimation). judgments in the bio domain, despite other types of metrics having higher correlation in the bio domain (see Figure 1), indicating they struggle with the training/inference domain mismatch. Finally, we present analysis showing that this performance gap persists throughout different stages of the finetuning process and is not the result of a deficiency with the pre-trained model. 2 Related Work Metric types. Table 1 summarizes the different types of metrics that are commonly used to evaluate MT. The earliest type of MT metrics are SurfaceForm metrics, which are purely heuristic and use word- or character-based features. We consider three common Surface-Form metrics: BLEU (Papineni et al., 2002), TER (Snover et al., 2006) and CHRF (Popovi´c, 2015). Metrics like COMET (Rei et al., 2020), BLEURT (Sellam et al., 2020), and UNITE (Wan et al., 2022) start with a pretrained language model and fine-tune it on humangenerated MT quality judgments. We denote these metrics Pre-trained+Fine-tuned.2 Another class of metrics also start with a pre-trained model but do not perform fine-tuning. Examples of such metrics include PRISM (Thompson and Post, 2020a,b), which uses the perplexity of a neural paraphraser, and BERTSCORE (Sun et al., 2022), which is based on cosine similarity of word embeddings. We denote such metrics Pre-trained+Algorithm metrics. More recently, metrics like GEMBA 2The WMT metrics task calls these “trained” metrics. WMT Bio Error severity Critical N/A 8% Major 26% 44% Minor 43% 31% Neutral 31% 16% Error category Fluency 47% 66% Accuracy 44% 18% Terminology 6% 10% Locale 2% 2% Other 1% 4% Error-free segments 45% 72% Errors per erroneous segment 1.9 2.1 Abs. erroneous segment score -4.1 -7.6 Table 2: Error distribution of our new bio dataset and the existing WMT22 MQM dataset. The MQM annotation scheme for WMT in most cases did not contain the Critical category. (Kocmi and Federmann, 2023) and AUTOMQM (Fernandes et al., 2023) have proposed prompting a large language model. We denote these as Pretrained+Prompt metrics. Domain specificity. Domain specificity for MT metrics was first explored by C. de Souza et al. (2014) for Surface-Form metrics. Sharami et al. (2023) brought attention to the issue of domain adaptation for quality estimation (QE), offering solutions based on curriculum learning and generating synthetic scores similar to Heo et al. (2021), Baek et al. (2020), and Zouhar et al. (2023). Sun et al. (2022) examined general-purpose natural language generation metrics and documented their bias with respect to social fairness. For word-level QE, Sharami et al. (2023) reported the lack of robustness of neural metrics. 3 New Bio MQM Dataset We create and release new translations and MQM annotations for the system submissions from 21 participants to the WMT21 biomedical translation shared task (Yeganova et al., 2021). To explore how different the bio domain is from the WMT22 metric task domains, we computed the vocabulary overlap coefficient between each domain. Bio had the smallest average overlap with the WMT domains (0.436) compared to 0.507, 0.486, 0.507, and 0.582 for e-commerce, news, social, and conversation, respectively. See Appendix A for full details and example sentences from each domain. 489 −0.1 0.0 0.1 0.2 ∆Kendall’s τ +0.074 +0.074 -0.021 +0.099 +0.074 +0.059 Pre-trained+Algorithm +0.138 -0.009 +0.080 +0.005 Pre-tr.+Prompt +0.186 +0.081 +0.152 +0.037 +0.167 +0.116 +0.113 +0.014 +0.162 +0.101 +0.159 +0.081 +0.072 -0.068 Pre-trained+Fine-tuned T est:W M T T est:B io REF SRC B E R T Score DAV3 DAV3.QE MQM.21 QE.21 DA.21 INHO.21 DA.22 U niT E B L E U R T P R ISM G E M B A C O M E T C O M E T C O M E T Figure 2: Gains in segment-level correlation (Kendall’s τ) when comparing Surface-Form metrics (average performance of BLEU, CHRF, and TER) to a given metric, on the WMT and bio test sets. Gains for Pre-trained+Finetuned metrics are much smaller in the unseen bio domain than the WMT domain. Pre-trained+Algorithm metrics, which do not train on prior WMT data, do not exhibit the same bias. See Appendix F for results in tabular form. 3.1 Dataset Creation We created the bio MQM dataset in three steps. Annotations and translations were performed by expert linguists with experience in the medical domain (see Appendix C for full details). Step 1: Reference re-translation. The original bio test set consists of bilingual abstracts from crawled academic papers, which might be written by non-native speakers (Névéol et al., 2020) or even MT (Thompson et al., 2024). Therefore, we create new professional reference translations. Step 2: Reference quality. To ensure a high bar of quality for the reference translations, we ask a separate set of annotators to provide MQM annotations for the new references. Any issues identified by this round of MQM annotation are then fixed by a new set of translators, resulting in the final reference translations that we release in this dataset. Step 3: MQM annotations. Finally, we conduct the main MQM annotation on the references and shared task system outputs. In this step, a single annotator rates all translations of a given document (from all systems and the reference).3 Our MQM schema follows Freitag et al. (2021) except that we add a Critical severity (assigned the same score as Major for backward compatibility). Full annotator instructions are in Appendix D. The resulting dataset contains roughly 25k segment-level annotations spanning 11 translation directions.4 In contrast, most publicly available MQM data to date covers only a few language pairs. 3This allows us to distribute annotation jobs to multiple annotators while still allowing the annotator to access documentlevel context and ensuring that the whole document is ranked consistently. 4Pt→En, En↔De, En↔Es, En↔Ru, En↔Fr, Zh↔En We use ~25% of the segments for each language pair as the train/dev set, leaving the rest as the test set (see Appendix B for exact sizes in each pair). We compare error distributions on our new bio MQM dataset and the existing WMT MQM dataset in Table 2. Bio MQM contains more Critical/Major errors, and lower absolute scores on average. However, WMT MQM has more overall sentences where an error occurs. Error category distribution also diverges, notably in Fluency and Accuracy. 4 Analysis 4.1 Are fine-tuned metrics robust across domains? Measuring domain robustness. The performance of a MT metric is typically measured by a certain meta-evaluation metric, such as segmentlevel Kendall’s τ correlation with human judgments. Intuitively, one could simply measure domain robustness by comparing the performance of a certain metric on domain A and domain B. This, however, is not straightforward with metaevaluations for metrics, since performance measured by those meta-evaluations is also affected by factors such as the quantity and quality of the translations included in the dataset, which is often hard to control for. As a result, we resort to comparisons of relative performance measured against a domain-invariant baseline. To establish such comparison, we make two assumptions: 1. We assume Surface-Form metrics can serve as a domain-invariant baseline, as they are purely based on heuristics and do not involve parameters specifically tuned on a certain domain. We use average performance of BLEU, 490 CHRF, and TER as the baseline to minimize the impact of specific choice of heuristics. 2. We assume segment-level Kendall’s τ correlation with human judgments has a linear relationship with the objective performance of a metric. Hence, relative performance can be measured by simple linear subtraction. Observations. Compared to Surface-Form metrics, we find that Pre-trained+Fine-tuned metrics provide a substantially smaller (sometimes even negative) improvement in human correlation in the bio domain than the WMT domain (see Figure 2). On the other hand, Pre-trained+Algorithm metrics, which have not been trained on WMT data, do not exhibit the same gap. This gap suggests that fine-tuned metrics struggle with unseen domains. We also observe a very large performance gap for Pre-trained+Prompt metrics. Unfortunately, these metrics rely on closed-source LLMs without published training procedures, so we do not know what data the underlying LLMs were trained on. 4.2 How does fine-tuning affect domain robustness? Model description. For this section, we focus on COMET (reference-based) and COMET-QE (reference-free) as they are among the most commonly used MT metrics. The COMET model works by representing the source, the hypothesis and the reference as three fixed-width vectors using a language model, such as XLM-Roberta-large (Conneau et al., 2019). These vectors and their combinations serve as an input to a simple feed-forward regressor which is fine-tuned to minimize the MSE loss with human MQM scores. A COMET model is trained in two stages, first on direct assessment (DA) quality annotations and then on MQM annotations, both from WMT shared tasks. Setup. We limit our experiments to the En-De, Zh-En and Ru-En language directions because of WMT MQM availability. We largely followed the training recipe in the COMET Github repo5. For details, please refer to our code. There is high inter-annotator variance in the WMT and bio MQM data. Training on the raw MQM scores is very unstable and therefore perannotator z-normalizing is necessary to replicate our setup. Note that the publicly available WMT MQM data are not z-normalized. 5github.com/Unbabel/COMET/tree/master/configs Test:WMT MQM epochs DA epochs 0 1 2 4 8 0 0.118 0.285 0.281 0.279 0.295 1 0.324 0.333 0.318 0.317 0.323 2 0.326 0.337 0.323 0.323 0.325 4 0.322 0.335 0.323 0.322 0.321 8 0.311 0.335 0.324 0.322 0.316 Test:Bio MQM epochs DA epochs 0 1 2 4 8 0 0.071 0.234 0.229 0.240 0.250 1 0.282 0.280 0.282 0.274 0.270 2 0.270 0.265 0.273 0.268 0.266 4 0.255 0.246 0.258 0.259 0.253 8 0.240 0.242 0.261 0.260 0.253 Table 3: Segment-level correlation (Kendall’s τ) between metrics and human judgments on the WMT (top) and bio (bottom) test sets, for COMET with varying epochs of WMT domain DA and MQM training. D A D A + M Q M 50 100 500 1k 2k 4k 6k Fine-tuning Bio≤s data size 0.28 0.34 Kendall’s τ .324 .282 .334 .287 .331 .288 .323 .293 .319 .302 .317 .313 .314 .316 .296 .324 .297 .328 T est:W M T T est:B io Figure 3: Average performance (8 seeds) of COMET fine-tuned on varying amounts of MQM bio data. Observation 1: Domain gap persists throughout the fine-tuning process. We would like to understand which stage among the two training stages for COMET accounts for the domain gap. To this end, we retrained COMET with varying epochs on DA/MQM data, shown in Table 3. In contrast to catastrophic forgetting (Goodfellow et al., 2013; Thompson et al., 2019a,b), where a model starts with good general-domain performance and then overfits while being adapted to a new task or domain, we do not see a sharp dropoff in the bio domain performance when training on more WMT (DA and/or MQM) data. This indicates that the model is a weak bio metric at all stages, as opposed to first learning and then forgetting. Observation 2: In-domain data dramatically improves COMET. Generally, including bio MQM annotations in training improves COMET’s performance in the bio test set, increasing correlation 491 2.4 2.8 0.21 0.34 Kendall’s τ (↑) 0.216 0.229 FT:WMT BERTScore 0.334 0.325 FT:WMT COMET Test:WMT 2.2 2.5 0.227 0.243 FT:Bio BERTScore 0.287 0.282 FT:Bio COMET Test:Bio XLMR perplexity (↓) Figure 4: Metric performance when pre-trained model is fine-tuned (FT) on bio or WMT domain data. Lower perplexity improves BERTSCORE ⃝but worsens COMET □. Perplexity is average of MLM and TLM objectives on the text portion of the MQM dataset for both domains. from 0.287 to 0.328 with 6k bio judgments. Indeed, just 1k judgements improves correlation to 0.313 (see Figure 3). This rules out the possibility that bio is inherently problematic for COMET’s architecture or fine-tuning strategy. 4.3 How does the pre-trained model affect domain robustness? COMET and BERTSCORE are both based on XLMRoberta-large (Conneau et al., 2019), allowing us to explore how the same changes to the pre-trained model affect each metric. To see whether improving the underlying pre-trained model improves Pre-trained+Algorithm metrics built on those pretrained models, we fine-tune XLM-Roberta with data similar to the WMT and bio domain setup, respectively. Similarly, we also investigate how PRISM, another Pre-trained+Algorithm metric, is affected with changes to the pre-trained model. We use PRISM with the NLLB multilingual MT models (NLLB Team et al., 2022) as they are larger and more recent than the model released with PRISM. Setup. Our fine-tuning data covers the four languages of interest, namely English, German, Russian, and Chinese (see Appendix E.2 for a detailed data list). Since NLLB is a translation model, we use only parallel data to fine-tune the model. For the XLM-Roberta case, note that it was finetuned with two objectives: masked language model (MLM) and translation language model (TLM). We use both parallel and monolingual data for MLM training and parallel data for TLM training. 39 44 46 47 48 0.21 0.23 Kendall’s τ 0.218 0.231 FT:Bio 600M 0.213 0.224 FT:Bio 1.3B 0.213 3.3B T est:B io BLEU Figure 5: Multiple NLLB MT models are used as the base model for PRISMSRC. Fine-tuning the underlying MT model improves the metric. Compute constraints preclude finetuning NLLB-3.3B. Observations: XLM-Roberta. For both domains, improving the pre-trained model improves BERTSCORE but not COMET (see Figure 4). This indicates that the limiting factor for the poor performance of COMET on bio is the effect from its various fine-tuning stages (discussed in Section 4.2), not an underlying weakness in the pre-trained model on bio. Observations: NLLB. Our findings are shown in Figure 5. In general, we found that improving the pre-trained models performance (as measured by BLEU on a held out test set) also improved PRISM’s performance. 5 Conclusion and Future Work This paper investigated the performance of machine translation metrics across divergent domains. To this end, we introduced a new, extensive MQMannotated dataset covering 11 language pairs in the bio domain. Our analysis showed that Pretrained+Fine-tuned metrics (i.e. those that use prior human quality annotations of MT output) exhibit a larger gap between in-domain and out-ofdomain performance than Pre-trained+Algorithm metrics (like BERTSCORE). Further experiments showed that this gap can be attributed to the DA and MQM fine-tuning stage. Despite the gap between in-domain and out-ofdomain performance, COMET is still the best performing metric on the bio domain in absolute terms. Thus, our findings suggest potential directions for future work including collecting more diverse human judgments for Pre-trained+Fine-tuned metrics and exploring ways to improve the generalization of such metrics during fine-tuning. 492 Limitations Our findings are dependent on two empirical assumptions we discussed in section 4.1. To the best of our knowledge, those assumptions are necessary to achieve a fair comparison of metrics across domains, but conclusions may change if our assumptions are refuted in future studies. We draw conclusions based on a single unseen domain (biomedical). While additional domains would have been preferable, data collection was cost prohibitive. Context has been shown to be beneficial in machine translation evaluation (Läubli et al., 2018; Toral, 2020) and some metrics used in this work have document-level versions (Vernikos et al., 2022). However, in order to draw fair comparisons with existing metrics which do not yet have a document-level version, we only evaluated metrics at the sentence level. We focused on segment-level evaluation and did not attempt system-level comparisons because of the limited number of system submissions to the WMT biomedical translation shared task. Acknowledgements We would like to thank Georgiana Dinu, Marcello Federico, Prashant Mathur, Stefano Soatto, and other colleagues for their feedback at different stages of drafting. Ethical Considerations Our human annotations were conducted through a vendor. Annotators were compensated in accordance to the industry standard – specifically, in the range of $27.50 to $37.50 on an hourly basis, depending on the experience of the annotator. References Yujin Baek, Zae Myung Kim, Jihyung Moon, Hyunjoong Kim, and Eunjeong Park. 2020. PATQUEST: Papago translation quality estimation. In Proceedings of the Fifth Conference on Machine Translation, pages 991–998, Online. Association for Computational Linguistics. José G. C. de Souza, Marco Turchi, and Matteo Negri. 2014. Machine translation quality estimation across domains. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 409–420, Dublin, Ireland. Dublin City University and Association for Computational Linguistics. Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Unsupervised cross-lingual representation learning at scale. CoRR, abs/1911.02116. Patrick Fernandes, Daniel Deutsch, Mara Finkelstein, Parker Riley, André F. T. Martins, Graham Neubig, Ankush Garg, Jonathan H. Clark, Markus Freitag, and Orhan Firat. 2023. The devil is in the errors: Leveraging large language models for fine-grained machine translation evaluation. Johann Frei and Frank Kramer. 2023. German medical named entity recognition model and data set creation using machine translation and word alignment: Algorithm development and validation. JMIR Form Res, 7:e39077. Markus Freitag, George Foster, David Grangier, Viresh Ratnakar, Qijun Tan, and Wolfgang Macherey. 2021. Experts, errors, and context: A large-scale study of human evaluation for machine translation. Transactions of the Association for Computational Linguistics, 9:1460–1474. Markus Freitag, Nitika Mathur, Chi-kiu Lo, Eleftherios Avramidis, Ricardo Rei, Brian Thompson, Tom Kocmi, Frederic Blain, Daniel Deutsch, Craig Stewart, Chrysoula Zerva, Sheila Castilho, Alon Lavie, and George Foster. 2023. Results of WMT23 metrics shared task: Metrics might be guilty but references are not innocent. In Proceedings of the Eighth Conference on Machine Translation, pages 578–628, Singapore. Association for Computational Linguistics. Markus Freitag, Ricardo Rei, Nitika Mathur, Chi-kiu Lo, Craig Stewart, Eleftherios Avramidis, Tom Kocmi, George Foster, Alon Lavie, and André F. T. Martins. 2022. Results of WMT22 metrics shared task: Stop using BLEU – neural metrics are better and more robust. In Proceedings of the Seventh Conference on Machine Translation (WMT), pages 46–68, Abu Dhabi, United Arab Emirates (Hybrid). Association for Computational Linguistics. Ian J Goodfellow, Mehdi Mirza, Da Xiao, Aaron Courville, and Yoshua Bengio. 2013. An empirical investigation of catastrophic forgetting in gradient-based neural networks. arXiv preprint arXiv:1312.6211. Dam Heo, WonKee Lee, Baikjin Jung, and Jong-Hyeok Lee. 2021. Quality estimation using dual encoders with transfer learning. In Proceedings of the Sixth Conference on Machine Translation, pages 920–927, Online. Association for Computational Linguistics. Tom Kocmi and Christian Federmann. 2023. Large language models are state-of-the-art evaluators of translation quality. arXiv preprint arXiv:2302.14520. Samuel Läubli, Rico Sennrich, and Martin Volk. 2018. Has machine translation achieved human parity? a 493 case for document-level evaluation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4791–4796, Brussels, Belgium. Association for Computational Linguistics. Qingsong Ma, Johnny Wei, Ondˇrej Bojar, and Yvette Graham. 2019. Results of the WMT19 metrics shared task: Segment-level and strong MT systems pose big challenges. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 62–90, Florence, Italy. Association for Computational Linguistics. Aurélie Névéol, Antonio Jimeno Yepes, and Mariana Neves. 2020. MEDLINE as a parallel corpus: a survey to gain insight on French-, Spanish- and Portuguese-speaking authors’ abstract writing practice. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 3676– 3682, Marseille, France. European Language Resources Association. NLLB Team, Marta R. Costa-jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, Anna Sun, Skyler Wang, Guillaume Wenzek, Al Youngblood, Bapi Akula, Loic Barrault, Gabriel Mejia Gonzalez, Prangthip Hansanti, John Hoffman, Semarley Jarrett, Kaushik Ram Sadagopan, Dirk Rowe, Shannon Spruit, Chau Tran, Pierre Andrews, Necip Fazil Ayan, Shruti Bhosale, Sergey Edunov, Angela Fan, Cynthia Gao, Vedanuj Goswami, Francisco Guzmán, Philipp Koehn, Alexandre Mourachko, Christophe Ropers, Safiyyah Saleem, Holger Schwenk, and Jeff Wang. 2022. No language left behind: Scaling humancentered machine translation. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Maja Popovi´c. 2015. chrF: character n-gram F-score for automatic MT evaluation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 392–395, Lisbon, Portugal. Association for Computational Linguistics. Ricardo Rei, Craig Stewart, Ana C Farinha, and Alon Lavie. 2020. COMET: A neural framework for MT evaluation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2685–2702, Online. Association for Computational Linguistics. Thibault Sellam, Dipanjan Das, and Ankur Parikh. 2020. BLEURT: Learning robust metrics for text generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7881–7892, Online. Association for Computational Linguistics. Javad Pourmostafa Roshan Sharami, Dimitar Shterionov, Frédéric Blain, Eva Vanmassenhove, Mirella De Sisto, Chris Emmery, and Pieter Spronck. 2023. Tailoring domain adaptation for machine translation quality estimation. In Proceedings of the 24th Annual Conference of the European Association for Machine Translation, pages 9–20, Tampere, Finland. European Association for Machine Translation. Matthew Snover, Bonnie Dorr, Rich Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of the 7th Conference of the Association for Machine Translation in the Americas: Technical Papers, pages 223–231, Cambridge, Massachusetts, USA. Association for Machine Translation in the Americas. Tianxiang Sun, Junliang He, Xipeng Qiu, and Xuanjing Huang. 2022. BERTScore is unfair: On social bias in language model-based metrics for text generation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 3726–3739, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Brian Thompson, Mehak Preet Dhaliwal, Peter Frisch, Tobias Domhan, and Marcello Federico. 2024. A shocking amount of the web is machine translated: Insights from multi-way parallelism. arXiv preprint arXiv:2401.05749. Brian Thompson, Jeremy Gwinnup, Huda Khayrallah, Kevin Duh, and Philipp Koehn. 2019a. Overcoming catastrophic forgetting during domain adaptation of neural machine translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2062–2068, Minneapolis, Minnesota. Association for Computational Linguistics. Brian Thompson, Rebecca Knowles, Xuan Zhang, Huda Khayrallah, Kevin Duh, and Philipp Koehn. 2019b. HABLex: Human annotated bilingual lexicons for experiments in machine translation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1382–1387, Hong Kong, China. Association for Computational Linguistics. Brian Thompson and Matt Post. 2020a. Automatic machine translation evaluation in many languages via zero-shot paraphrasing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 90–121, Online. Association for Computational Linguistics. Brian Thompson and Matt Post. 2020b. Paraphrase generation as zero-shot multilingual translation: Disentangling semantic similarity from lexical and syn494 tactic diversity. In Proceedings of the Fifth Conference on Machine Translation, pages 561–570, Online. Association for Computational Linguistics. Antonio Toral. 2020. Reassessing claims of human parity and super-human performance in machine translation at WMT 2019. In Proceedings of the 22nd Annual Conference of the European Association for Machine Translation, pages 185–194, Lisboa, Portugal. European Association for Machine Translation. Giorgos Vernikos, Brian Thompson, Prashant Mathur, and Marcello Federico. 2022. Embarrassingly easy document-level MT metrics: How to convert any pretrained metric into a document-level metric. In Proceedings of the Seventh Conference on Machine Translation (WMT), pages 118–128, Abu Dhabi, United Arab Emirates (Hybrid). Association for Computational Linguistics. Yu Wan, Dayiheng Liu, Baosong Yang, Haibo Zhang, Boxing Chen, Derek Wong, and Lidia Chao. 2022. UniTE: Unified translation evaluation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8117–8127, Dublin, Ireland. Association for Computational Linguistics. Lucy Lu Wang, Kyle Lo, Yoganand Chandrasekhar, Russell Reas, Jiangjiang Yang, Doug Burdick, Darrin Eide, Kathryn Funk, Yannis Katsis, Rodney Michael Kinney, Yunyao Li, Ziyang Liu, William Merrill, Paul Mooney, Dewey A. Murdick, Devvret Rishi, Jerry Sheehan, Zhihong Shen, Brandon Stilson, Alex D. Wade, Kuansan Wang, Nancy Xin Ru Wang, Christopher Wilhelm, Boya Xie, Douglas M. Raymond, Daniel S. Weld, Oren Etzioni, and Sebastian Kohlmeier. 2020. CORD-19: The COVID-19 open research dataset. In Proceedings of the 1st Workshop on NLP for COVID-19 at ACL 2020, Online. Association for Computational Linguistics. Lana Yeganova, Dina Wiemann, Mariana Neves, Federica Vezzani, Amy Siu, Inigo Jauregi Unanue, Maite Oronoz, Nancy Mah, Aurélie Névéol, David Martinez, Rachel Bawden, Giorgio Maria Di Nunzio, Roland Roller, Philippe Thomas, Cristian Grozea, Olatz Perez-de Viñaspre, Maika Vicente Navarro, and Antonio Jimeno Yepes. 2021. Findings of the WMT 2021 biomedical translation shared task: Summaries of animal experiments as new test set. In Proceedings of the Sixth Conference on Machine Translation, pages 664–683, Online. Association for Computational Linguistics. Vilém Zouhar, Shehzaad Dhuliawala, Wangchunshu Zhou, Nico Daheim, Tom Kocmi, Yuchen Eleanor Jiang, and Mrinmaya Sachan. 2023. Poor man’s quality estimation: Predicting reference-based MT metrics without the reference. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 1311– 1325, Dubrovnik, Croatia. Association for Computational Linguistics. 495 WMT Bio Langs Test Train Test Dev Total De-En 2457 903 3360 En-De 18k 28k 2695 917 3612 Es-En 1013 309 1322 En-Es 1112 330 1442 Ru-En 1324 388 1712 En-Ru 19k 16k 825 237 1062 Fr-En 1108 352 1460 En-Fr 1228 308 1536 Zh-En 23k 27k 2838 913 3751 En-Zh 3900 1200 5100 Pt-En 701 222 924 All 60k 71k 19k 6k 25k Table 4: Data split of the bio MQM data released in this work, and WMT22 MQM (Freitag et al., 2022) data. All test results are reported with the test split which is approximately 75% of total. Splits were created to respect document-level boundaries. For WMT, 2022 is used for testing and 2020 and 2021 for training. A Domain Overlap Between WMT and bio To evaluate the overlap between the WMT and bio domains, we calculate the vocabulary overlap coefficient ( |A∩B| min(|A|,|B|)) between our new bio MQM dataset and the domains used in the WMT22 metrics shared task. The per-domain overlap matrix is shown in Figure 6. Randomly selected sentences from each domain are provided for illustration in Figure 7. B Corpus Statistics Table 4 shows the size per language pair of our bio MQM dataset, as well as the WMT MQM dataset for comparison. The bio MQM dataset contains roughly 25k annotated segments, covering 11 language pairs. We split the data into test (roughly 75%) and development (roughly 25%) sets. C Translator/Annotator Qualifications There were 2-4 MQM annotators for each language pair, and a total of 46 annotators. All linguists had experience in translating/post-editing/reviewing content in the bio domain. This was the main requirement to be able to work on the project. The other qualification criteria for this project were in line with the ISO standard 17100. In particular, the linguists met one or more of the following criteria: (1) A recognized higher education degree in translation; (2) Equivalent third-level degree in another subject plus a minimum of two years of documented professional translation experience; (3) A minimum of five years of documented professional translation experience; (4) Native speaker of the target language. Although linguists were experts in the bio domain, not all of them were experts in MQM annotation. For this reason, the annotators completed an MQM quiz before onboarding them to ensure they understood the guidelines and requirements. For the translation and post-editing tasks, we used a two step process (initial post editor + reviewer). In each case the reviewer was a linguist with experience translating medical texts. There were no specific educational or vocational stipulations on that medical qualification, however they were asked to provide a medical-text-specific translation test for us to be onboarded for the project. The initial post-editor in each case was a linguistic expert, but not specifically an expert in medical translations, which is why we followed up with reviewers to ensure contents were translated accurately. Linguists had to demonstrate the following to onboard to the project: (1) At least 3+ years of professional translation experience (2) Proven proficiency in English writing skills (3) In-depth understanding and exposure to the language (4) Strong ability in translating, reviewing, adjusting, and providing adaptation for various writing styles of particular requests. D MQM Annotation Guidelines Below, we reproduce the MQM annotation guidelines that we provided to the annotators. Overview: You are asked to evaluate the translations using the guidelines below, and assign error categories and severities considering the context segments available. Task: 1. Please identify all errors within each translated segment, up to a maximum of five. (a) If there are more than five errors, identify only the five most severe. (b) If it is not possible to reliably identify distinct errors because the translation is too badly garbled or is unrelated to the source, then mark a single Unintelligible error that spans the entire segment (c) Annotate segments in natural order, as if you were reading the document. You 496 e-commerce news social conversation biomedical e-commerce 1.000 0.349 0.511 0.662 0.369 news 0.349 1.000 0.517 0.592 0.359 social 0.511 0.517 1.000 0.494 0.462 conversation 0.662 0.592 0.494 1.000 0.554 biomedical 0.369 0.359 0.462 0.554 1.000 Figure 6: Vocabulary overlap coefficient between the English source-side data for each domain in the WMT22 and our bio dataset. e-commerce This was one of the first albums I purchased of Keith’s "back in the day". news Sean Combs has been variously known as Puff Daddy, P. Diddy or Diddy, but this year announced his preference for the names Love and Brother Love. social The comment about boiling being inefficient is probably correct bc even though the water heater is running continuously, that thing has SO MUCH insulation. conversation Let me know if you were able to create your new password and sign in with it biomedical Though neither perfectly sensitive nor perfectly specific for trachoma, these signs have been essential tools for identifying populations that need interventions to eliminate trachoma as a public health problem. Figure 7: Randomly selected English example sentences from each domain in the WMT22 metrics shared task as well as our new bio dataset. may return to revise previous segments. 2. To identify an error, highlight the relevant span of text. (a) Omission and Source error should be tagged in the source text. i. All other errors should be tagged in the target text. (b) Unintelligible error should have an entire sentence tagged; if you think a smaller span is needed, then you should select another error category (Mistranslation, etc.). 3. Select a category/sub-category and severity level from the available options. 4. When identifying errors, please be as finegrained as possible. (a) If a sentence contains more than one error of the same category, each one should be logged separately. For example, if a sentence contains two words that are each mistranslated, two separate mistranslation errors should be recorded. (b) If a single stretch of text contains multiple errors, you only need to indicate the one that is most severe. i. If all have the same severity, choose the first matching category listed in the error typology (e.g. Accuracy, then Fluency, then Terminology, etc.). (c) For repetitive errors that appear systematically through the document: please annotate each instance with the appropriate weight. 5. Please pay particular attention to the context when annotating. You will be shown several context segments before and after the segment for evaluation. If a translation is questionable on its own but is fine in the context of the document, it should not be considered erroneous; conversely, if a translation might be acceptable in some context, but not within the current document, it should be marked as wrong. Delivery format: • file format: a TSV with additional columns for error categories and severity + JSON 497 – for multiple errors in one segment: additional row for each error + severity – text spans will be highlighted for the annotation process and exported as tag Error categories: Table 5 Severity (no weights, just severity): Table 6 E Supplementary Information on Experiments E.1 Training Steps and Compute Time for Experiments The overall training consists of the following steps (compute times using a single A10 GPU). The times are per epoch and some experiments require training for multiple epochs. • Language modeling →XLM-Roberta, 10hr/ep. • DA scores regression →COMETDA, 10hr/ep. • MQM scores regression →COMET, 1hr/ep. E.2 List of Data for Fine-Tuning Pre-Trained Model For WMT domain, we used news-commentary v18.1 dataset6 for all languages. For the bio domain, we list the data in Table 7. Data Type Language(s) Dataset Lines Parallel en-de UFAL Medical Corpus (Yeganova et al., 2021) 3M en-de MEDLINE (Yeganova et al., 2021) 35k en-ru 29k en-zh 19k Monoling. En CORD (Wang et al., 2020) 1M De Animal Experiments7 GERNERMED (Frei and Kramer, 2023) 250k Ru Medical QA 250k Zh Chinese Medical Dataset8 2M Table 7: Collection of bio domain data used in pretrained model fine-tuning experiments. F Raw Scores for Figure 2 The segment-level correlation (Kendall’s τ) scores used to compute improvements in Figure 2 are provided in Table 8. Note that there is no public COMET 22 MQM model. 6data.statmt.org/news-commentary/v18.1/ 7www.openagrar.de/receive/openagrar_mods_ 00046540?lang=en 8huggingface.co/datasets/shibing624/medical 498 Tag Location Accuracy – errors occurring when the target text does not accurately correspond to the propositional content of the source text, introduced by distorting, omitting, or adding to the message Mistranslation Target content that does not accurately represent the source content. Target Addition Target content that includes content not present in the source. Target Omission Errors where content is missing from the translation that is present in the source. Source Untranslated Errors occurring when a text segment that was intended for translation is left untranslated in the target content. Target Linguistic Conventions (former Fluency) - errors related to the linguistic well-formedness of the text, including problems with, for instance, grammaticality and mechanical correctness. Grammar Error that occurs when a text string (sentence, phrase, other) in the translation violates the grammatical rules of the target language. Target Punctuation Punctuation incorrect for the locale or style. Target Spelling Error occurring when the letters in a word in an alphabetic language are not arranged in the normally specified order. Target Character encoding Error occurring when characters garbled due to incorrect application of an encoding. Target Register Errors occurring when a text uses a level of formality higher or lower than required by the specifications or by common language conventions. Target Terminology - errors arising when a term does not conform to normative domain or organizational terminology standards or when a term in the target text is not the correct, normative equivalent of the corresponding term in the source text. Inconsistent use of terminology Use of multiple terms for the same concept (technical terms, medical terms, etc.) Target Wrong term Use of term that it is not the term a domain expert would use or because it gives rise to a conceptual mismatch. Target Style Non-fluent Text does not sound fluent or natural as if it were translated by a non-native speaker or because the translation is following the source too closely. Target Locale Conventions - errors occurring when the translation product violates locale-specific content or formatting requirements for data elements. Number format Target Currency format Target Measurement format Target Time format Target Date format Target Address format Target Telephone format Target Other any error that does not fit the categories above Target Source errors source error The error that occurs in the source. All source errors (e.g. non-fluent source) should be annotated as source errors — no sub-categories need to be selected. If the source error caused a target error: - if the source error and target errors belong to the same category, then only flag the source. -If source and target errors belong to different categories - even if you know that the source error caused the translation error - do flag both. Source Unintelligible So many errors, or errors are so outrageous, that text becomes incomprehensible, and it is hard to pinpoint a specific error type. Target. Tag the entire sentence. If the span is smaller, then a different category should be applied, such as Mistranslation, Untranslated, etc. Table 5: MQM error categories provided in annotator instructions. 499 severity Definition Source example Translation example Neutral Neutral issues are items that need to be noted for further attention or fixing but which should not count against the translation. This severity level can be perceived as a flag for attention that does not impose a penalty. It should be used for “preferential errors” (i.e, items that are not wrong, per se, but where the reviewer or requester would like to see a different solution). Source: Join us in celebrating 10 years of the company! Target: Join us to celebrate 10 years of the company! Minor Minor issues are issues that do not impact usability or understandability of the content. If the typical reader/user is able to correct the error reliably and it does not impact the usability of the content, it should be classified as minor. S1: Accurately distinguish between legitimate and high-risk account registrations S2: See how organizations worldwide are using fraud detection. T1: Accurately distinguish between legitimate and high- risk account registrations T2: See how organization worldwide are using fraud detection. Major errors that would impact usability or understandability of the content but which would not render it unusable. For example, a misspelled word that may require extra effort for the reader to understand the intended meaning but does not make it impossible to comprehend should be labeled as a major error. Additionally, if an error cannot be reliably corrected by the reader/user (e.g., the intended meaning is not clear) but it does not render the content unfit for purpose, it should be categorized as major. Source: Set the performance to 50 percent Target: Set performance 50 percent Critical errors that would render a text unusable, which is determined by considering the intended audience and specified purpose. For example, a particularly bad grammar error that changes the meaning of the text would be considered Critical. Critical errors could result in damage to people, equipment, or an organization’s reputation if not corrected before use. If the error causes the text to become unintelligible, it would be considered Critical. S1: Set the device on the highest temperature setting. S2: The next step would be to identify the point of leakage. S3: 1.3 degrees T1: Set the device on the lowest temperature setting. T2: It would be to identify the next point of leakage. T3: 1,300 degrees Table 6: Severity examples and explanations provided in MQM annotation instructions. Type Metric Test:WMT Test:Bio Surface-Form BLEU 0.134 0.213 ChrF 0.151 0.192 TER 0.140 0.100 Pre-trained+Algorithm PRISMREF 0.216 0.242 PRISMSRC 0.121 0.267 BERTScore 0.216 0.227 Pre-trained+Prompt GEMBADAV3 0.280 0.159 GEMBADAV3.QE 0.222 0.173 Pre-trained+Fine-tuned COMETMQM.21 0.328 0.249 COMETQE.21 0.294 0.205 COMETDA.21 0.309 0.284 COMETINHO.21 0.255 0.182 COMETDA.22 0.304 0.269 UniTE 0.301 0.249 BLEURT 0.214 0.100 Table 8: Segment-level correlation (Kendall’s τ) between metrics and human judgments on the WMT and bio domain. Pre-trained+Fine-tuned metrics have lower correlation on bio than on WMT, while Surface-Form and Pre-trained+Algorithm tend to have higher correlation. 500
2024_short
45
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 501–509 August 11-16, 2024 ©2024 Association for Computational Linguistics IndicIRSuite: Multilingual Dataset and Neural Information Models for Indian Languages Saiful haq1,4, Ashutosh Sharma3, Omar Khattab2, Niyati Chhaya4, Pushpak Bhattacharyya1 1IIT Bombay, 2Stanford University, 3UIUC, 4Hyprbots Systems Private Limited Correspondence: [email protected] Abstract In this paper, we introduce Neural Information Retrieval resources for 11 widely spoken Indian Languages (Assamese, Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Oriya, Punjabi, Tamil, and Telugu) from two major Indian language families (Indo-Aryan and Dravidian). These resources include (a) INDICMARCO, a multilingual version of the MS MARCO dataset in 11 Indian Languages created using Machine Translation, and (b) IndicColBERT, a collection of 11 distinct Monolingual Neural Information Retrieval models, each trained on one of the 11 languages in the INDIC-MARCO dataset. To the best of our knowledge, IndicIRSuite is the first attempt at building large-scale Neural Information Retrieval resources for a large number of Indian languages, and we hope that it will help accelerate research in Neural IR for Indian Languages. Experiments demonstrate that Indic-ColBERT achieves 47.47% improvement in the MRR@10 score averaged over the INDIC-MARCO baselines for all 11 Indian languages except Oriya, 12.26% improvement in the NDCG@10 score averaged over the MIRACL Bengali and Hindi Language baselines, and 20% improvement in the MRR@100 Score over the Mr. Tydi Bengali Language baseline. IndicIRSuite is available at github.com/saifulhaq95/IndicIRSuite. 1 Introduction Information Retrieval (IR) models process user queries and search the document corpus to retrieve a ranked list of relevant documents ordered by a relevance score. Classical IR models, like BM25 (Robertson et al., 2009), retrieve documents that have lexical overlap with the query tokens. Recently, there has been a notable upsurge in adopting Neural IR models utilizing language models such as BERT (Devlin et al., 2018), which enable semantic matching of queries and documents. This shift has proven highly effective in retrieving and reranking documents. ColBERTv2 (Santhanam et al., 2021), one of the state-of-art neural IR models, has shown 18.5 points improvement in NDCG@10 Score over the BM25 model baseline on the MS MARCO dataset (Thakur et al., 2021). The importance of dataset size outweighs domain-matching in training neural IR models (Zhang et al., 2022a). Due to the scarcity of largescale domain-specific datasets, Neural IR models are first trained on the MS MARCO passage ranking dataset (Nguyen et al., 2016), and they are subsequently evaluated on domain-specific datasets in a zero-shot manner. MS MARCO dataset contains 39 million training triplets (q, +d, -d) where q is an actual query from the Bing search engine, +d is a human-labeled passage answering the query, and -d is sampled from unlabelled passages retrieved by the BM25 model. The MS MARCO dataset is in English, implying that neural IR models trained on it are effective only with English queries and passages. Monolingual IR for non-English languages (Zhang et al., 2022b) (Zhang et al., 2021), Multilingual IR (Lawrie et al., 2023), and Cross-lingual IR (Lin et al., 2023; Sun and Duh, 2020) extend the English IR paradigm to support diverse languages. In Monolingual IR for non-English languages, the query and passages are in the same language, which is not English. In cross-lingual IR, the query is used to create a ranked list of documents such that each document is in the same language, which is different from the query language. In Multilingual IR, the query is used to create a ranked list of documents such that each document is in one of the several languages, which can be the same or different from the query language. In this work, we focus on Monolingual IR for non-English languages. Monolingual IR for non-English languages involves training an encoder like mBERT (Devlin et al., 2018), on a large-scale general-domain 501 monolingual dataset for non-English languages to minimize the pairwise softmax cross-entropy loss. The trained models are subsequently finetuned or used in a zero-shot manner on small-scale domainspecific datasets. However, there is a notable lack of large-scale datasets like mMARCO (Bonifacio et al., 2021) for training monolingual neural IR models on many low-resource Indian languages. We introduce neural IR resources to address this scarcity and facilitate Monolingual neural IR across 11 Indian languages. Our contributions are: • INDIC-MARCO, a multilingual dataset for training neural IR models in 11 Indian Languages (Assamese, Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Oriya, Punjabi, Tamil and Telugu). For every language in INDIC-MARCO, there exists 8.8 Million passages, 1 Million queries, 39 million training triplets (query, relevant document, irrelevant document), and approximately one relevant document per query. To the best of our knowledge, this is the first large-scale dataset for training a neural IR system on 11 widely spoken Indian languages. • Indic-ColBERT, a collection of 11 distinct Monolingual Neural Information Retrieval models, each trained on one of the 11 languages in the INDIC-MARCO dataset. IndicColBERT achieves 47.47% improvement in the MRR @10 score averaged over the INDICMARCO baseline for all 11 Indian languages except Oriya, 12.26% improvement in the NDCG @10 score averaged over the MIRACL Bengali and Hindi Language baselines, and 20% improvement in the MRR@100 Score over the Mr. Tydi Bengali Language baseline. To the best of our knowledge, this is the first effort for a neural IR dataset and models on 11 major Indian languages, thereby providing a benchmark for Indian language IR. 2 Related work The size of datasets holds greater importance than ensuring domain matching in the training of neural IR models (Zhang et al., 2022a). In terms of size and domain, mMARCO (Bonifacio et al., 2021) is the most similar to our work as it introduces a largescale machine-translated version of MS MARCO in many languages, Hindi being the only Indian language. MIRACL (Zhang et al., 2022b) and Mr. Tydi (Zhang et al., 2021) also introduce datasets and models for Monolingual Neural IR in Hindi, Bengali, and Telugu. FIRE1 was the most active initiative from 2008 to 2012 for Multilingual IR in Indian languages. FIRE developed datasets for Multilingual IR in six Indian Languages (Bengali, Gujarati, Hindi, Marathi, Oriya, and Tamil). However, the size of these datasets is not large enough to train neural IR systems based on transformer models like mBERT (Devlin et al., 2018) and XLM (Lample and Conneau, 2019). In addition, the text in the FIRE dataset comes from newspaper articles (Palchowdhury et al., 2013), which is domain-specific; hence, the models trained on such datasets cannot generalize well to other domains. Due to the lack of large-scale datasets, Cross-lingual knowledge transfer via Distillation has become popular for neural IR in low-resource languages (Huang et al., 2023a) (Huang et al., 2023b). The key distinction in our work from the earlier approaches is that we introduce monolingual datasets and neural IR models in 11 major Indian Languages (Assamese, Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Oriya, Punjabi, Tamil and Telugu), that can also benefit Crosslingual and Multilingual IR models from the crosslingual transfer effects when trained on a large number of Indian Languages (Zhang et al., 2022a). 3 Datasets 3.1 INDIC-MARCO We introduce the INDIC-MARCO dataset, a multilingual version of the MS MARCO dataset. We translate the queries and passages in the MS MARCO passage ranking dataset into 11 widely spoken Indian languages (Assamese, Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Oriya, Punjabi, Tamil and Telugu) originating from two major language families (Indo-Aryan and Dravidian). The translation process utilizes the int-8 quantized version of the NLLB-1.3B-Distilled Model (Costa-jussà et al., 2022), available at CTranslate22 (Klein et al., 2020). We chose int-8 quantized version of NLLB-1.3B-Distilled Model for two reasons: (a) it has shown remarkable performance in terms of BLEU scores for many Indian languages as compared to IndicBART (Dabre et al., 2021) 1http://fire.irsi.res.in/fire/static/data 2https://forum.opennmt.net/t/nllb-200-withctranslate2/5090 502 and IndicTrans (Ramesh et al., 2022) (b) Quantization (Klein et al., 2020) enables faster inference with less computing power and little or no drop in translation quality. The machine translation process employs specific hyper-parameters: a beam width of 4, a maximum decoding sequence length of 200 tokens, a batch size of 64, and a batch type equal to ‘examples’. Passages from the MS MARCO dataset are split into multiple sentences using the Moses SentenceSplitter3, ensuring that each sentence serves as a translation unit in a batch of 64 sentences. In contrast, queries with an average length of 5.96 words (Thakur et al., 2021) are not sentence-split before translation. We also translate the MS MARCO Dev-Set(Small)4 containing 6,390 queries (1.1 qrels/query) to obtain INDIC-MARCO Dev-set(Small). The translation process on an Nvidia A100 GPU with 80 GB VRAM takes approximately 1584 hours for passages in MS MARCO, 55 hours for queries in MS MARCO, and 1.5 hours for queries in MS MARCO Dev-Set(Small). Upon translation, the resulting INDIC-MARCO dataset comprises around 8.8 million passages, 530k queries, and 39 Million training triplets in 11 Indian languages. This dataset allows for training monolingual neural IR models for each language in the INDIC-MARCO dataset. 4 Models 4.1 Baselines BM25 (Robertson et al., 2009) serves as a strong baseline as it performs better than many neural IR models on domain-specific datasets with exceptions (Thakur et al., 2021). It does not require any training. BM25 retrieves documents containing query tokens and assigns them a score for reranking based on the frequency of query tokens appearing in them and the document length. In this work, we use the BM25 implementation provided by Pyserini5 with values for parameters k1=0.82 and b=0.68 for evaluation on INDIC-MARCO DevSet obtained after machine translation. We use Whitespace Analyzers to tokenize queries and documents during indexing and searching for all Indian languages except Hindi, Bengali, and Telugu, for which we use language-specific analyzers provided in Pyserini. BM25-tuned (BM25-T) presented in 3https://pypi.org/project/mosestokenizer/ 4https://ir-datasets.com/MS MARCO-passage.html 5https://github.com/castorini/pyserini Mr. Tydi (Zhang et al., 2021) is optimized to maximize the MRR@100 score on the Mr. Tydi test-set using a grid search over the range [0.1, 0.6] for k1 and [0.1, 1] for b. Multilingual Dense Passage Retriever (mDPR) is presented in both Mr. Tydi and MIRACL by replacing the BERT encoder in Dense Passage Retriever(DPR) (Karpukhin et al., 2020) with an mBERT encoder. In Mr. Tydi, mDPR is trained on English QA dataset (Kwiatkowski et al., 2019) and used in a zero-shot manner for indexing and retrieval of documents. In MIRACL, mDPR is trained on the MS MARCO dataset and used in a zero-shot manner for indexing and retrieving documents. Multilingual ColBERT (mCol) is introduced in MIRACL by replacing the BERT encoder in ColBERT (Santhanam et al., 2021) with an mBERT encoder. mCol is trained on the MS MARCO dataset and used in a zero-shot manner for indexing and retrieval of documents. 4.2 Indic-ColBERT Indic-ColBERT (iCol) is based on ColBERT (Khattab and Zaharia, 2020) for training and ColBERTv2 (Santhanam et al., 2021) for compression and inference. There are some distinctions: it uses mBERT as query-document encoder, and is trained on INDIC-MARCO. Model architecture comprises (a) a query encoder, (b) a document encoder, and (c) max-sim function (same as ColBERTv2). Given a query with q tokens and a document with d tokens, the Query encoder outputs q fix-sized token embeddings, and the document encoder outputs d fix-sized token embeddings. The maximum input sequence length for the query, qmax, and, for the document, dmax, is set before giving them to the respective encoders. If q is less than qmax, we append qmax −q [MASK] tokens to the input query, and if q is greater than qmax, q is truncated to qmax. If d is less than dmax, then d is neither truncated nor padded. If d is greater than dmax, d is truncated to dmax. The max-sim function is used to obtain the relevance score of a document for a query using the encoded representations. 5 Experiment Setup We train 11 distinct Indic-ColBERT (iCol) models separately for 50k iterations with a batch size of 128 on the first 6.4 million training triplets from the INDIC-MARCO dataset to optimize the pairwise softmax cross entropy loss function, where each 503 Language MRR@10 Recall@1000 BM25 mCol iCol BM25 mCol iCol Assamese 0.078 0.095 0.176 0.449 0.503 0.698 Bengali 0.112 0.159 0.221 0.622 0.691 0.788 Gujarati 0.100 0.141 0.232 0.539 0.653 0.805 Hindi 0.125 0.171 0.223 0.678 0.729 0.772 Kannada 0.089 0.156 0.219 0.520 0.691 0.787 Malayalam 0.076 0.124 0.198 0.442 0.603 0.742 Marathi 0.085 0.143 0.207 0.476 0.655 0.750 Oriya 0.086 0.002 0.002 0.484 0.022 0.016 Punjabi 0.113 0.134 0.211 0.603 0.637 0.766 Tamil 0.088 0.144 0.202 0.495 0.661 0.756 Telugu 0.1007 0.144 0.206 0.569 0.648 0.749 Table 1: Results on INDIC-MARCO Dev-Set(Small). mColBERT (mCol) is trained on MS MARCO dataset (Nguyen et al., 2016). Indic-ColBERT are 11 distinct monolingual neural IR models trained on INDIC-MARCO. Language Mr. Tydi test-set MIRACL Dev-set BM25 BM25-T mDPR mCol iCol BM25 mDPR mCol iCol Bengali 0.418 0.413 0.258 0.414 0.501 0.508 0.443 0.546 0.606 Hindi 0.458 0.383 0.470 0.483 Telugu 0.343 0.424 0.106 0.314 0.393 0.494 0.356 0.462 0.479 Table 2: Results on Mr. Tydi test-set (MRR@100) and MIRACL Dev-set (NDCG@10): For Mr. Tydi test-set, we use official BM25, BM25-tuned (BM25-T) and mDPR model scores (Zhang et al., 2021); mCol (mColBERT trained on MS MARCO), and iCol (Indic-ColBERT trained on INDIC-MARCO) are tested in a zero-shot manner. For the MIRACL dev-set, we use official BM25, mDPR, and mCol(mColBERT) model scores (Zhang et al., 2022b); iCol (Indic-ColBERT trained on INDIC-MARCO) is tested in a zero-shot manner. triplet contains a query, a relevant passage and an irrelevant passage in one of the 11 languages on which the model is trained. The mBERT encoder is finetuned from the official "bert-base-multilingualuncased" checkpoint, and the remaining parameters are trained from scratch. 6 Results Indic-ColBERT (iCol) outperforms baseline models (BM25, BM25-T, mDPR, mCol) by 20%, in MRR@100 Score and on Mr. Tydi test-set (Refer Table 2) for Bengali Language. For Telugu, Indic-ColBERT (iCol) outperforms 3 (BM25, mDPR, mCol) out of 4 baselines in terms of MRR@100 scores. Indic-ColBERT (iCol) outperforms baseline models (BM25, mDPR, mCol) by 19.29% in Bengali and 5.4% in Hindi, in NDCG@10 Score on MIRACL dev-set(Refer Table 2). For Telugu, Indic-ColBERT (iCol) outperforms 2 (mDPR, mCol) out of 3 baselines in terms of NDCG@10 scores. Indic-ColBERT (iCol) outperforms baseline models (BM25, mCol) by 47.47% in MRR@10 Score on INDIC-MARCO Dev-Set(Small) (Refer Table 1) averaged over all 11 Indian languages (excluding Oriya). We do not see any improvements for Oriya because mBERT used in Indic-ColBERT is not pretrained on Oriya and Assamese. Assamese demonstrates a 125% MRR@10 improvement over the BM25 baseline, attributed to its linguistic similarity with Bengali (indicated by the mColBERT model outperforming BM25 by 21% in MRR@10 Score) and the high-quality data in INDIC-MARCO, further enhancing the MRR@10 score by 104%, making INDIC-MARCO a significant contributor to the advancement for a low-resource language like Assamese which mBERT does not support. 7 Ablation Study In this section, we perform ablation study with three different machine translation models and two different document splitting schemes. We compare the NDCG@10 scores of Indic-ColBERT models trained on machine translated MS-MARCO data using NLLB-600M, NLLB-1.3B and IndicTrans2. As shown in Table 4, the impact of translation quality 504 Language Mr. Tydi test-set MIRACL Dev-set BM25 BM25-T mDPR mCol iCol BM25 mDPR mCol iCol Bengali 0.869 0.874 0.671 0.846 0.864 0.909 0.819 0.913 0.894 Hindi 0.868 0.776 0.884 0.811 Telugu 0.758 0.813 0.352 0.589 0.688 0.831 0.762 0.830 0.768 Table 3: Results on Mr. Tydi test-set (Recall@100) and MIRACL Dev-set (Recal@100): For Mr. Tydi test-set, we use official BM25, BM25-tuned (BM25-T) and mDPR model scores (Zhang et al., 2021); mCol (mColBERT trained on MS MARCO), and iCol (Indic-ColBERT trained on INDIC-MARCO) are tested in a zero-shot manner. For the MIRACL dev-set, we use official BM25, mDPR, and mCol(mColBERT) model scores (Zhang et al., 2022b); iCol (Indic-ColBERT trained on INDIC-MARCO) is tested in a zero-shot manner. Language Translation Model + Splitting Scheme NLLB-600M NLLB-1.3B IndicTrans2 Moses Moses Full-Stop Moses Bengali 0.592 0.606 0.614 0.602 Hindi 0.464 0.483 0.493 0.497 Telugu 0.523 0.479 0.475 0.469 Table 4: Results on MIRACL Dev-Set(NDCG@10). on retrieval effectiveness follows a different trend for each language. In terms of chrF++ score, IndicTrans2 performs better than NLLB-1.3B which performs better than NLLB-600M on Flores-200 devtest (Gala et al., 2023) (Costa-jussà et al., 2022). For Telugu, we observe a negative correlation between translation quality and retrieval effectiveness, where the Indic-Colbert trained on data translated using NLLB-600M model, which has the lowest chrF++ score among the three machine translation models, gives the best retrieval effectiveness. For Hindi, we observe a positive correlation between the translation quality and retrieval effectiveness. For Bengali, we don’t observe any correlation between translation quality and retrieval effectiveness. Each document in MS-MARCO dataset is first split into sentences, each sentence is translated by the machine translation model and finally the translated sentences are merged back into the document. We experimented with two different document splitting schemes. We compare the NDCG@10 scores for Indic-ColBERT models trained on machine translated MS-MARCO dataset using NLLB-1.3B model on sentences obtained from Moses Splitting and Full-stop Splitting schemes. As shown in Table 4, we can observe "NLLB-1.3B + Full-Stop Splitting" outperforms "NLLB-1.3B + Moses Splitting" for Hindi and Bengali Languages. 8 Summary, conclusion, and future work We present IndicIRSuite, featuring INDICMARCO, a multilingual neural IR dataset in 11 Indian languages, and Indic-ColBERT, comprising 11 monolingual neural IR models based on ColBERTv2. Our results demonstrate performance enhancements over baselines in Mr. Tydi, MIRACL, and INDIC-MARCO, particularly benefiting low-resource languages like Assamese. INDICMARCO proves valuable for such languages, not supported by models like mBERT but linguistically akin to Bengali. We also perform an ablation to find the impact of translation quality and sentence splitting on retrieval effectiveness. Future work includes expanding IndicIRSuite to Multilingual and Crosslingual IR. Limitations The primary limitation of our study is the absence of a comprehensive comparison of the trained IR models across out-of-domain datasets beyond MIRACL and Mr. Tydi. It is imperative to delve deeper into the translation quality, specifically assessing whether it exhibits pronounced "translationese." A more exhaustive examination is warranted, particularly in cases where the proposed models, such as Indic-ColBERT, demonstrate subpar performance compared to baseline models, as observed in the instance where Indic-ColBERT lags behind the BM25 Baseline for the Telugu Language in Mr. Tydi test-set and MIRACL Dev-set. 505 Ethics Statement We want to emphasize our commitment to upholding ethical practices throughout this work. This work publishes a large-scale machine-translated dataset for neural information retrieval in 11 Indian languages - Assamese, Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Oriya, Punjabi, Tamil, and Telugu. MS MARCO passage ranking Dataset in the English language used as a Source dataset for translation is publicly available, and no annotators were employed for data collection. We have cited the datasets and relevant works used in this study. References Luiz Bonifacio, Vitor Jeronymo, Hugo Queiroz Abonizio, Israel Campiotti, Marzieh Fadaee, Roberto Lotufo, and Rodrigo Nogueira. 2021. mmarco: A multilingual version of the ms marco passage ranking dataset. arXiv preprint arXiv:2108.13897. Marta R Costa-jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, et al. 2022. No language left behind: Scaling human-centered machine translation. arXiv preprint arXiv:2207.04672. Raj Dabre, Himani Shrotriya, Anoop Kunchukuttan, Ratish Puduppully, Mitesh M Khapra, and Pratyush Kumar. 2021. Indicbart: A pre-trained model for indic natural language generation. arXiv preprint arXiv:2109.02903. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Jay Gala, Pranjal A Chitale, Raghavan AK, Varun Gumma, Sumanth Doddapaneni, Aswanth Kumar, Janki Nawale, Anupama Sujatha, Ratish Puduppully, Vivek Raghavan, et al. 2023. Indictrans2: Towards high-quality and accessible machine translation models for all 22 scheduled indian languages. arXiv preprint arXiv:2305.16307. Zhiqi Huang, Puxuan Yu, and James Allan. 2023a. Cross-lingual knowledge transfer via distillation for multilingual information retrieval. Zhiqi Huang, Puxuan Yu, and James Allan. 2023b. Improving cross-lingual information retrieval on lowresource languages via optimal transport distillation. In Proceedings of the Sixteenth ACM International Conference on Web Search and Data Mining. ACM. Vladimir Karpukhin, Barlas O˘guz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. arXiv preprint arXiv:2004.04906. Omar Khattab and Matei Zaharia. 2020. Colbert: Efficient and effective passage search via contextualized late interaction over bert. In Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval, pages 39– 48. Guillaume Klein, François Hernandez, Vincent Nguyen, and Jean Senellart. 2020. The opennmt neural machine translation toolkit: 2020 edition. In Proceedings of the 14th Conference of the Association for Machine Translation in the Americas (Volume 1: Research Track), pages 102–109. Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: A benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:452–466. Guillaume Lample and Alexis Conneau. 2019. Crosslingual language model pretraining. arXiv preprint arXiv:1901.07291. Dawn Lawrie, Eugene Yang, Douglas W Oard, and James Mayfield. 2023. Neural approaches to multilingual information retrieval. In European Conference on Information Retrieval, pages 521–536. Springer. Jimmy Lin, David Alfonso-Hermelo, Vitor Jeronymo, Ehsan Kamalloo, Carlos Lassance, Rodrigo Nogueira, Odunayo Ogundepo, Mehdi Rezagholizadeh, Nandan Thakur, Jheng-Hong Yang, et al. 2023. Simple yet effective neural ranking and reranking baselines for cross-lingual information retrieval. arXiv preprint arXiv:2304.01019. Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. Ms marco: A human generated machine reading comprehension dataset. choice, 2640:660. Sauparna Palchowdhury, Prasenjit Majumder, Dipasree Pal, Ayan Bandyopadhyay, and Mandar Mitra. 2013. Overview of fire 2011. In Multilingual Information Access in South Asian Languages: Second International Workshop, FIRE 2010, Gandhinagar, India, February 19-21, 2010 and Third International Workshop, FIRE 2011, Bombay, India, December 2-4, 2011, Revised Selected Papers, pages 1–12. Springer. Gowtham Ramesh, Sumanth Doddapaneni, Aravinth Bheemaraj, Mayank Jobanputra, Raghavan Ak, Ajitesh Sharma, Sujit Sahoo, Harshita Diddee, Divyanshu Kakwani, Navneet Kumar, et al. 2022. Samanantar: The largest publicly available parallel corpora collection for 11 indic languages. Transactions of the Association for Computational Linguistics, 10:145–162. 506 Stephen Robertson, Hugo Zaragoza, et al. 2009. The probabilistic relevance framework: Bm25 and beyond. Foundations and Trends® in Information Retrieval, 3(4):333–389. Keshav Santhanam, Omar Khattab, Jon Saad-Falcon, Christopher Potts, and Matei Zaharia. 2021. Colbertv2: Effective and efficient retrieval via lightweight late interaction. arXiv preprint arXiv:2112.01488. Shuo Sun and Kevin Duh. 2020. Clirmatrix: A massively large collection of bilingual and multilingual datasets for cross-lingual information retrieval. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4160–4170. Nandan Thakur, Nils Reimers, Andreas Rücklé, Abhishek Srivastava, and Iryna Gurevych. 2021. Beir: A heterogenous benchmark for zero-shot evaluation of information retrieval models. arXiv preprint arXiv:2104.08663. Xinyu Zhang, Xueguang Ma, Peng Shi, and Jimmy Lin. 2021. Mr. tydi: A multi-lingual benchmark for dense retrieval. arXiv preprint arXiv:2108.08787. Xinyu Zhang, Kelechi Ogueji, Xueguang Ma, and Jimmy Lin. 2022a. Towards best practices for training multilingual dense retrieval models. Xinyu Zhang, Nandan Thakur, Odunayo Ogundepo, Ehsan Kamalloo, David Alfonso-Hermelo, Xiaoguang Li, Qun Liu, Mehdi Rezagholizadeh, and Jimmy Lin. 2022b. Making a miracl: Multilingual information retrieval across a continuum of languages. arXiv preprint arXiv:2210.09984. A Examples Snapshots from the INDIC-MARCO dataset are shown in Figure 1, Figure 2 and Figure 3. 507 Figure 1: INDIC-MARCO translations for the MS-MARCO document "The presence of communication amid scientific minds was equally important to the success of the Manhattan Project as scientific intellect was. The only cloud hanging over the impressive achievement of the atomic researchers and engineers is what their success truly meant; hundreds of thousands of innocent lives obliterated" Figure 2: INDIC-MARCO translations for the MS-MARCO document "The Manhattan Project and its atomic bomb helped bring an end to World War II. Its legacy of peaceful uses of atomic energy continues to have an impact on history and science." 508 Figure 3: INDIC-MARCO translations for the MS-MARCO document "Essay on The Manhattan Project - The Manhattan Project The Manhattan Project was to see if making an atomic bomb possible. The success of this project would forever change the world forever making it known that something this powerful can be manmade." 509
2024_short
46
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 510–518 August 11-16, 2024 ©2024 Association for Computational Linguistics AGR: Reinforced Causal Agent-Guided Self-explaining Rationalization Yunxiao Zhao1, Zhiqiang Wang1,2*, Xiaoli Li3, Jiye Liang1,2, Ru Li1,2* 1. School of Computer and Information Technology, Shanxi University, Taiyuan, China 2. Key Laboratory of Computational Intelligence and Chinese Information Processing of Ministry of Education, Shanxi University, Taiyuan, China 3. Institute for Infocomm Research, A*Star, Singapore [email protected], {wangzq,ljy,liru}@sxu.edu.cn, [email protected] Abstract Most existing rationalization approaches are susceptible to degeneration accumulation due to a lack of effective control over the learning direction of the model during training. To address this issue, we propose a novel approach AGR (Agent-Guided Rationalization), guiding the next action of the model based on its current training state. Specifically, we introduce causal intervention calculus to quantify the causal effects inherent during rationale training, and utilize reinforcement learning process to refine the learning bias of them. Furthermore, we pretrain an agent within this reinforced causal environment to guide the next step of the model. We theoretically demonstrate that a good model needs the desired guidance, and empirically show the effectiveness of our approach, outperforming existing state-of-the-art methods on BeerAdvocate and HotelReview datasets. 1 Introduction To explain the prediction of neural networks, selective rationalization task (Lei et al., 2016; Yu et al., 2019, 2021) has been studied in recent years. As shown in Figure 1, it aims to select a small and human-intelligible subset (i.e., rationale) from the input to support and explain the prediction results when yielding them. As an interpretable diagram, rationalization holds significant potential for elucidating the decision-making process of predictive models, building trust, and deriving insightful and pertinent insights (Yuan et al., 2020; Zhang et al., 2023; Deng et al., 2023). Various approaches have been proposed for rationalization, spanning from early rationale sampling-based methods (Bao et al., 2018; Bastings et al., 2019; Paranjape et al., 2020) to the extra-component-based methods (De Cao et al., 2020; Huang et al., 2021; Yu et al., 2021; Liu et al., 2022; Yue et al., 2022; Liu et al., 2023a). These *Corresponding author appearance : light yellow to almost clear smell : slight hops , but barely smelled like beer Generator appearance : light yellow to almost clear smell : slight hops , but barely smelled like beer (Appearance) Y Predictor X Z ˆY [0.1,0.9] [0,1] p g θ ,θ ˆ ( , ) min H Y Y Figure 1: The standard selective rationalization, where X,Z, ˆY ,Y represent the input text, rationale, prediction and the groundtruth label, respectively. The red text indicates the small and human-intelligible subset. methods predominantly concentrate on improving the performance of rationalization models by either refining the sampling directly or aligning additional information beyond the rationale, resulting in impressive results. However, to the best of our knowledge, the current methods are prone to degeneration accumulation1 since they usually do not discern whether the generator during training has produced unmeaningful or flawed rationales; instead, they directly pass them to the predictor even if generated rationales are degraded. For instance, the underlined rationale in Figure 1 is degraded, as the word :::::::::: appearance alone does not reliably determine the sentiment polarity of input X. But the predictor overfits to this uninformative rationale and classifies the sentiment according to whether “appearance” is included in the rationale. Consequently, when the predictor receives degraded rationales, it steers the model towards an undesirable direction (aka., learning bias). Thus, optimizing this bias during training is crucial for ensuring the model’s generalization performance. The proposed methods (Chang et al., 2020; Zhang et al., 2023; Yue et al., 2023) fall short in considering rationalization optimization comprehensively, neglecting existing causality during rationale learning. Although they often employ causal theory to uncover relationships between rationale pieces, they struggle to directly optimize 1Degeneration over rationalization is a highly challenging problem, which means the predictor may overfit to meaningless rationales generated by the not yet well-trained generator (Yu et al., 2019; Liu et al., 2023b,d). 510 the cooperative game dynamics between the generator and predictor during training. As shown in Figure 1, optimizing rationale from “appearance” to “appearance: light yellow to almost clear” necessitates evaluating the causal impact on target prediction, guiding the model’s subsequent optimization. Thus, if we could construct a guiding signal to reward or penalize the learning behavior of the model, this would significantly reduce the model’s learning bias during training, alleviating the problem of degeneration accumulation. To address the above problems, we propose a novel rationalization method named AGR (AgentGuided Rationalization), which leverages a reinforced causal agent to guide the cooperative game optimization during rationale training, as shown in Figure 2. In particular, 1) we quantify the causal effects in the rationale optimization process, and design a reinforcement learning (RL) process (e.g., Markov decision) to refine the learning bias during training. 2) We further pretrain an agent within reinforced causal environment to guide next actions by a system of rewards. We also theoretically illustrate that a robust model needs the desired guidance. 3) Experimental results demonstrate the effectiveness of our approach, surpassing state-of-the-art methods on BeerAdvocate and HotelReview datasets. 2 Problem Formulation Notation. Following previous research (Liu et al., 2023b,c,d), we consider the classification problem and denote the generator and predictor as fG(·) and fP (·), with θg and θp representing their parameters. The input text X = [x1,x2,...,xl](1 ≤i ≤l) consists of tokens xi, where l is the number of tokens. The label of X is a one-hot vector Y ∈{0,1}c, where c is the number of categories. Cooperative game for rationalization. The fG(·) selects the most informative pieces from X by a sequence of binary mask M = [m1,...,ml] ∈ {0,1}l. Then, it forms the rationale Z = M ⊙ X = [m1x1,m2x2,...,mlxl], where the informativeness of Z is measured by the negative cross entropy −H(Y, ˆY ). Consequently, the fG(·) and fP (·) are optimized cooperatively by min θg,θp H(Y, ˆY ∣fG(X)),s.t. ˆY = fP (fG(X)). (1) In addition, rationales are usually constrained by compact and coherent regularization terms Ω(M) = λ1 ∣∣∣M∣∣1 l −s∣+ λ2 ∑t∣mt −mt−1∣(Chang et al., 2020), where s is a pre-defined sparsity level. iS . . . Predictor Predictor Agent Agent Guide i Z iS i M i M Generator The i-th update Reinforced Environment Policy network Probability distribution Action sample iS iS iR Guide . . . Generate Sample Act State update Reward State: i i S =Z 0.05 0.48 0.15 0.12 0.10 . . . Reward X ˆY i = Z X i = Z X Max (1) (2) (3) (L) …… ˆ ( , ) i c q y Z Action candidates (N) iz (3) iz (0) iz (1) iz (2) iz (3) iz (3) { } i i i S z =  Z Figure 2: The architecture of AGR. X and ˆY are the input and output. Si is the i-th update state of rationale, while ̃Si is the state after guidance by the agent. 3 Reinforced Causal Agent In this section, we present our reinforced causal agent, considering both causal effect and learning bias of degeneration during rationale training. 3.1 Rationale Causal Attribution Formally, we construct a rationale Z∗ k by maximizing an attribution metric A(·) in rationalization Z∗ K = arg maxZK⊆XA(ZK∣ˆyc), (2) where A(·) measures the contribution of each candidate ZK to the target prediction ˆyc. However, A(ZK∣ˆyc) needs to be quantified. To this end, we introduce causal intervention calculus do(·), including do(Z = ZK) and do(Z = ∅)(Pearl, 2009; Pearl et al., 2016), and reformulate the causal contribution from ∅to ZK by mutual information, A(ZK∣ˆyc) = I(ˆyc,do(ZK)) −I(ˆyc,do(∅)). (3) 3.2 Markov Decision Process as RL Equation 3 illustrates the procedure for deriving ZK from an initial state of zero training. However, it may generate degraded rationales at step i, where 0 < i < K. Thus we need to seek for quantifiable objectives between Zi and Zi+1, Zi+1 = arg maxZi+1∈{X/Zi}A(Zi+1∣Zi, ˆyc). (4) According to Equation 3, we have the causal contribution between Zi and Zi+1: A(Zi+1∣Zi, ˆyc) = I(ˆyc,do(Zi+1)) −I(ˆyc,do(Zi)). So, A(Zi+1∣Zi, ˆyc) = −H(ˆyc∣Zi+1) + H(ˆyc∣Zi) = −H(ˆyc∣{Zi ∪{zi+1}}) + H(ˆyc∣Zi) = −pθ(ˆyc∣Z)log pθ(ˆyc∣Zi) pθ(ˆyc∣{Zi ∪{zi+1}}), (5) 511 where H(ˆyc∣Zi) is the term of conditional entropy. As a result, Equation 5 explicitly quantifies Zi+1’s effect with previously obtained rationale Zi. To further promote the cooperative game, we model the training process of rationale as a Markov decision process M = {S,A,P,R}, where S = {si} represents set of states abstracting the process of optimizing rationale during training, and A = {ai} indicates the set of actions. In particular, The transition dynamics P(si+1∣si,ai+1) specify how the state si+1 is updated from the prior state si by taking action ai+1. Besides, R(si,ai+1) quantifies the reward obtained after taking action ai+1 based on the prior state si. Therefore, cooperative training for rationale can be depicted as the sequence process (s0,a1,r1,s1,...,aK,rK,sK), where the state si can be formulated by si = Zi in the i-th update; s0 = Z0 can be initiated by generator fG(·). Nevertheless, the above process exhibits a limitation in its inability to detect learning bias at any given state si. To address this, we reformulate the sequence process as (<s0,̃a0,̃r0,̃s0>, a1, r1, <s1,̃a1, ̃r1,̃s1>, ..., aK, rK, <sK,̃aK,̃rK,̃sK>), where <si,̃ai,̃ri,̃si> indicates process of transitioning from state si to ̃si in the i-th update. Given the state si = Zi, we derive the available action space: ̃Ai = {X/Zi}. The searched action can be represented as ̃ai = ̃zi, (6) where ̃zi ∈{X/Zi} indicates candidate rationale in action space. Having made the action ̃ai, the state transition is to merge ̃zi into Zi, i.e., ̃Zi = Zi ∪{̃zi}. To assess the effectiveness of the action ̃ai in mitigating the learning bias of the model, the reward ̃Ri(̃si,̃ai) at state si can be formulated as follows: ̃Ri = { A(̃zi∣Zi, ˆy∗ c ) + 1, iffP (Zi ∪{̃zi}) = ˆy∗ c A(̃zi∣Zi, ˆy∗ c ) −1, otherwise. (7) According to Equation 5, although we can quantify the probabilities at states ̃si and si, and present the relevant reward ̃Ri, obtaining y∗ c poses a challenge. 3.3 Pretrained Agent To address the limitation, we propose a reinforced causal agent in the aforementioned causal and reinforcement learning framework to better align the probability distribution of the target prediction and theoretically justify the creation of an auxiliary agent targeting ˆyc. Pretrained Embedding. We pretrain the auxiliary agent, denoted as fA(·), with θ∗ A = arg min θA H(Y, ˆY ∣X),s.t. ˆY = fA(X), (8) where θA represents the parameters of the agent, and θ∗ A denotes the optimal solution. Theorem Analysis. Assuming X, Z, Y , and A as random variables in rationalization representing the input, rationale, label, and auxiliary variable, respectively, we propose: Lemma 1. Given X, Z, Y , ˆY = fP (fG(X)). Existing a guiding variable A could enable the predictor fP (·) to achieve good predictions. That is, a solution for A exists, and X is a solution of A. The proof is provided in Appendix A. Lemma 1 suggests that constructing an auxiliary variable A aligned with X for rationalization contributes to the learning of a good prediction. 4 Agent-Guided Rationalization As depicted in Figure 2, following the establishment of the environment for the reinforced causal agent, we delineate the construction and training of the policy network qϕ. 4.1 Policy Network Architecture It takes the pair of intermediate state Zi and ˆyc provided by fA(·) as input. Formally, ̃zi ∼qϕ(Zi, ˆyc), (9) where θϕ is the trainable parameters of the policy network, and ̃zi is generated according to the probability of next action Pϕ(̃zi∣Zi, ˆyc). Representation learning of action candidates. With the space of action candidates ̃Ai = X/Zi, our policy network first learns the representation for each action candidate ̃a(j) i (0 < j < N), where N is the number of candidates. Then, we employ the encoder to encode X/Zi for obtaining the action representation of ̃zi by ẽzi = encoder(X/Zi), (10) utilizing bidirectional Gated Recurrent Units (GRUs) (Cho et al., 2014) as the encoder. Sampling of action. The policy network aims to select a singular action ̃ai = ̃zi from the search space, prioritizing its relevance to the current state si = Zi. This selection process is modeled as: p̃zi = MLP([ẽzi;eZi]), (11) where eZi indicates the current rationale’s representation. The selection probability for each action candidate within ̃Ai is computed using Pϕ(̃zi∣Zi, ˆyc) = softmax̃Ai(p̃zi), (12) where ϕ is the parameters collected of MLP. 512 Methods S Appearance Aroma Palate P R F1 P R F1 P R F1 RNP (Lei et al., 2016) 20 39.4 44.9 42.0 37.5 51.9 43.5 21.6 38.9 27.8 HardKuma (Bastings et al., 2019) 20 64.9 69.2 67.0 37.0 55.8 44.5 14.6 22.3 17.7 IB (Paranjape et al., 2020) 20 59.3 69.0 63.8 38.6 55.5 45.6 21.6 48.5 29.9 INVRAT (Chang et al., 2020) 20 58.9 67.2 62.8 29.3 52.1 37.5 24.0 55.2 33.5 DARE (Yue et al., 2022) 20 63.7 71.8 67.5 41.0 61.5 49.3 24.4 54.9 33.8 FR (Liu et al., 2022) 20 74.9 84.9 79.6 58.7 73.3 65.2 36.6 59.4 45.3 Inter-RAT (Yue et al., 2023) 20 62.0 76.7 68.6 44.2 65.4 52.8 26.3 59.1 36.4 MGR (Liu et al., 2023b) 20 76.3 83.6 79.8 64.4 81.3 71.9 47.1 73.1 57.3 AGR(Ours) 20 83.7 87.5 85.6 67.5 81.4 73.8 47.6 77.7 59.0 Table 1: Results on BeerAdvocate, where Bold text indicates the best experimental results across different methods. Methods Appearance Appearance Appearance S P R F1 S P R F1 S P R F1 RNP 10 32.4 18.6 23.6 20 39.4 44.9 42.0 30 24.2 41.2 30.5 DARE 10 63.9 42.8 51.3 20 63.7 71.8 67.5 30 45.5 80.6 58.1 FR 10 70.4 42.0 52.6 20 74.9 84.9 79.6 30 50.6 81.4 62.3 Inter-RAT 10 66.0 46.5 54.6 20 62.0 76.7 68.6 30 48.1 82.7 60.8 MGR 10 87.5 51.7 65.0 20 76.3 83.6 79.8 30 57.2 93.9 71.1 AGR 10 83.5 54.9 66.2 20 83.7 87.5 85.6 30 59.7 94.3 73.1 Table 2: The different sparsity results on BeerAdvocate. 4.2 Policy Gradient Training Since discrete sampling within the policy network blocks gradients, we adopt policy gradient-based training framework REINFORCE (Sutton et al., 1999). The objective maxΩ(L) is as follows: max ϕ EZi∈̃AiEi[̃R(Zi, ̃zi)logPϕ(̃zi∣Zi, ˆyc)]. (13) The final task loss is a jointly optimized objective: min θg,θp H(Y, ˆY )+Ω(M)−Ω(L),s.t. ˆY = fP (fG(X)) (14) 5 Experiments 5.1 Datasets, Baselines and Evaluation Metrics Datasets. We compare AGR using BeerAdvocate (McAuley et al., 2012) and HotelReview (Wang et al., 2010) datasets, which are two multi-aspect sentiment classification datasets widely used in rationalization. Following existing work, we obtain the data in the same way as Yue et al. (2023) for BeerAdvocate, and we preprocess HotelReview dataset in the same way as Huang et al. (2021) and Liu et al. (2023b). Baselines. We compare with eight models for BeerAdvocate, including three sampling-based methods: RNP (Lei et al., 2016), HardKuma (Bastings et al., 2019), Information Bottleneck (IB) (Paranjape et al., 2020), and three extra-component-based methods: DARE (Yue et al., 2022), FR (Liu et al., 2022), MGR (Liu et al., 2023b), and two causalbased methods: INVRAT (Chang et al., 2020), Methods S P R F1 Location RNP (Lei et al., 2016) 10.9 43.3 55.5 48.6 CAR (Chang et al., 2019) 10.6 46.6 58.1 51.7 DMR (Huang et al., 2021) 10.7 47.5 60.1 53.1 A2R (Yu et al., 2021) 8.5 43.1 43.2 43.1 MGR (Liu et al., 2023b) 9.7 52.5 60.5 56.2 AGR(Ours) 9.3 54.9 60.5 57.6 S P R F1 Service RNP (Lei et al., 2016) 11.0 40.0 38.2 39.1 CAR (Chang et al., 2019) 11.7 40.7 41.4 41.1 DMR (Huang et al., 2021) 11.6 43.0 43.6 43.3 A2R (Yu et al., 2021) 11.4 37.3 37.2 37.2 MGR (Liu et al., 2023b) 11.8 45.0 46.4 45.7 AGR(Ours) 12.3 45.9 49.3 47.6 S P R F1 Cleanliness RNP (Lei et al., 2016) 10.6 30.5 36.0 33.0 CAR (Chang et al., 2019) 9.9 32.3 35.7 33.9 DMR (Huang et al., 2021) 10.3 31.4 36.4 33.7 A2R (Yu et al., 2021) 8.9 33.2 33.3 33.3 MGR (Liu et al., 2023b) 10.5 37.6 44.5 40.7 AGR(Ours) 10.3 39.0 45.5 42.0 Table 3: The experimental results on HotelReview. Inter-RAT (Yue et al., 2023). For HotelReview dataset, we compare with five models, including RNP (Lei et al., 2016), CAR (Chang et al., 2019), DMR (Huang et al., 2021), A2R (Yu et al., 2021), and MGR (Liu et al., 2023b). Evaluation Metrics. Following (Huang et al., 2021; Yu et al., 2021; Yue et al., 2023; Liu et al., 2023b), we focus on the quality of rationales, and adopt Precision (P), Recall (R), and F1-score (F1) as metrics. We perform the best results on the validation set before testing on the test set. The Appendix B provides further details in this section. 5.2 Performance Comparison Results on BeerAdvocate. As shown in Table 1, our proposed method AGR outperforms all the eight baselines in terms of three aspects for BeerAdvocate dataset. Furthermore, in sparsity experiments (Table 2), AGR consistently outperforms the latest state-of-the-art results, affirming its effectiveness for selective rationalization. Results on HotelReview. Table 3 shows that our model once again obtains the best performance 513 Table 4: Examples of generated rationales. Human-annotated rationales are underlined. Rationales from three models are highlighted in blue and are denoted as Z1, Z2 and Z3 respectively. FR (2022) MGR (2023b) AGR (Ours) Aspect: Beer-Appearance Aspect: Beer-Appearance Aspect: Beer-Appearance Label: Positive, Pred: Positive Label: Positive, Pred: Positive Label: Positive, Pred: Positive Text: i picked this beer up on a whim as i was in the mood for a good coffee stout and the siren-like figure somehow told me this is the beer for you . a bit freaky , but i went with it . i was impressed from the very first pour . like any stout , the color is a dark molasses black . but ... the head was thick and dense with good retention . the coffee aroma was intense ! the roasted goodness almost overwhelms my sense of smell .the roasted coffee flavors are the first things that i could taste along with hints of chocolate . however , i can tell there ’s more complexity here than my palette can decipher . the coffee flavors bring bitterness but it ’s not over powering as the sweetness of the malt cuts the bitterness quite nicely the beer has carbonation but once the bubbles have escaped the beer gives a creamy , velvety feel and finish . the alcohol was very well hidden in this beer which is scary ... Text: i picked this beer up on a whim as i was in the mood for a good coffee stout and the siren-like figure somehow told me this is the beer for you . a bit freaky , but i went with it . i was impressed from the very first pour . like any stout , the color is a dark molasses black . but ... the head was thick and dense with good retention . the coffee aroma was intense ! the roasted goodness almost overwhelms my sense of smell .the roasted coffee flavors are the first things that i could taste along with hints of chocolate . however , i can tell there ’s more complexity here than my palette can decipher . the coffee flavors bring bitterness but it ’s not over powering as the sweetness of the malt cuts the bitterness quite nicely the beer has carbonation but once the bubbles have escaped the beer gives a creamy , velvety feel and finish . the alcohol was very well hidden in this beer which is scary ... Text: i picked this beer up on a whim as i was in the mood for a good coffee stout and the siren-like figure somehow told me this is the beer for you . a bit freaky , but i went with it . i was impressed from the very first pour . like any stout , the color is a dark molasses black . but ... the head was thick and dense with good retention . the coffee aroma was intense ! the roasted goodness almost overwhelms my sense of smell .the roasted coffee flavors are the first things that i could taste along with hints of chocolate . however , i can tell there ’s more complexity here than my palette can decipher . the coffee flavors bring bitterness but it ’s not over powering as the sweetness of the malt cuts the bitterness quite nicely the beer has carbonation but once the bubbles have escaped the beer gives a creamy , velvety feel and finish . the alcohol was very well hidden in this beer which is scary ... Methods Appearance S P R F1 AGR 20 83.7 87.5 85.6 -w/o causal. 20 81.5 87.8 84.5 -w/o embedd. 20 81.9 86.9 84.3 -w/o both 20 74.3 85.2 79.4 Table 5: Ablation studies on the BeerAdvocate. across all multi-aspects datasets consistently. Ablation Studies. To further verify the effectiveness of AGR, we conduct the ablation experiments. As depicted in Table 5, removing either the optimized objective of causal effectiveness (referred to as causal.), the pretrained agent embedding (referred to as embedd.), or both, results in a notable decline in AGR’s performance, underscoring the critical roles played by our proposed key components in AGR method. Further Analyses. Firstly, we compare AGR with FR and MGR, providing the visualized examples. For example, we can observe from Table 4 that although all three methods are able to focus on the appearance aspect, FR and MGR still exhibit some degeneration (since the selective rationale still has some distance from the target prediction). However, AGR utilizes causal calculus to capture the causal variations between Z1 and Z2, as well as between Z2 and Z3, regarding the target prediction, thereby gradually mitigating this degeneration during the training process. The Appendix C presents more visualized examples. Secondly, similar to (Liu et al., 2023b), we also compare the complexity of AGR with other models. As shown in Table 6, we can see that the complexity of AGR has been somewhat improved compared to latest work; however, there is still room for further improvement. This will be a key focus of future research. RNP FR AGR CAR modules 1gen+1pred 1gen+1pred 1gen+1pred+1agent 1gen+2pred parameters 2× 2× 3× 3× DARE CAR DMR MGR modules 1gen+1pred+guider 1gen+2pred 1gen+3pred 3gen+1pred parameters 3× 3× 4× 4× Table 6: The complexity of different models. “gen”: generator. “pred”: predictor. 6 Conclusion In this paper, we propose AGR, a reinforced causal agent-based rationalization approach to guide the cooperative game optimization during rationale training. Our theoretical insights underscore the necessity of this guidance signal for accurate predictions. Empirical evaluations on two widely-used benchmarks indicate the effectiveness of our proposed approach, surpassing existing state-of-the-art methods for selective rationalization. 514 Limitations There are still some limitations that need further improvement in the future. Firstly, optimizing cooperative game of rationalization during training brings great significance to the model performance, but how to more efficiently search for meaningful actions within a larger search space for good rationales remains the next direction to explore. Nextly, this work does not involve the debiasing techniques of data-level. Considering the debiasing technique may be a good way to further improve the results. In addition, as the latest research (Chen et al., 2022; Liu et al., 2023a,b) has shown that it is still a challenging task to finetune pretrained language models on the cooperative game framework. Therefore, how to incorporate the cooperative framework and (large) language models is a research interest. Ethics Statement This paper does not involve the presentation of a new dataset and the utilization of demographic or identity characteristics information. Acknowledgements We would like to thank all the anonymous reviewers for their valuable feedback. This work was supported by the National Natural Science Foundation of China (Nos.62376144, 62272285, 62076155) and the Science and Technology Cooperation and Exchange Special Project of Shanxi Province (No.202204041101016). References Yujia Bao, Shiyu Chang, Mo Yu, and Regina Barzilay. 2018. Deriving machine attention from human rationales. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1903–1913, Brussels, Belgium. Association for Computational Linguistics. Jasmijn Bastings, Wilker Aziz, and Ivan Titov. 2019. Interpretable neural predictions with differentiable binary variables. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2963–2977, Florence, Italy. Association for Computational Linguistics. Shiyu Chang, Yang Zhang, Mo Yu, and Tommi Jaakkola. 2019. A game theoretic approach to class-wise selective rationalization. Advances in neural information processing systems, 32. Shiyu Chang, Yang Zhang, Mo Yu, and Tommi Jaakkola. 2020. Invariant rationalization. In International Conference on Machine Learning, pages 1448–1458. PMLR. Howard Chen, Jacqueline He, Karthik Narasimhan, and Danqi Chen. 2022. Can rationalization improve robustness? In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3792–3805, Seattle, United States. Association for Computational Linguistics. Kyunghyun Cho, Bart van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724– 1734, Doha, Qatar. Association for Computational Linguistics. Nicola De Cao, Michael Sejr Schlichtkrull, Wilker Aziz, and Ivan Titov. 2020. How do decisions emerge across layers in neural models? interpretation with differentiable masking. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3243–3255, Online. Association for Computational Linguistics. Zhiying Deng, Jianjun Li, Zhiqiang Guo, and Guohui Li. 2023. Multi-aspect interest neighbor-augmented network for next-basket recommendation. ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1–5. Yongfeng Huang, Yujun Chen, Yulun Du, and Zhilin Yang. 2021. Distribution matching for rationalization. In AAAI Conference on Artificial Intelligence. Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR), San Diega, CA, USA. Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2016. Rationalizing neural predictions. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 107–117, Austin, Texas. Association for Computational Linguistics. Wei Liu, Haozhao Wang, Jun Wang, Zhiying Deng, YuanKai Zhang, Cheng Wang, and Ruixuan Li. 2023a. Enhancing the rationale-input alignment for self-explaining rationalization. arXiv preprint arXiv:2312.04103. Wei Liu, Haozhao Wang, Jun Wang, Ruixuan Li, Xinyang Li, YuanKai Zhang, and Yang Qiu. 2023b. MGR: Multi-generator based rationalization. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12771–12787, Toronto, Canada. Association for Computational Linguistics. 515 Wei Liu, Haozhao Wang, Jun Wang, Ruixuan Li, Chao Yue, and YuanKai Zhang. 2022. Fr: Folded rationalization with a unified encoder. Advances in Neural Information Processing Systems, 35:6954–6966. Wei Liu, Jun Wang, Haozhao Wang, Ruixuan Li, Zhiying Deng, YuanKai Zhang, and Yang Qiu. 2023c. D-separation for causal self-explanation. In Thirtyseventh Conference on Neural Information Processing Systems. Wei Liu, Jun Wang, Haozhao Wang, Ruixuan Li, Yang Qiu, Yuankai Zhang, Jie Han, and Yixiong Zou. 2023d. Decoupled rationalization with asymmetric learning rates: A flexible lipschitz restraint. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 1535–1547. Julian McAuley, Jure Leskovec, and Dan Jurafsky. 2012. Learning attitudes and attributes from multi-aspect reviews. 2012 IEEE 12th International Conference on Data Mining, pages 1020–1025. Bhargavi Paranjape, Mandar Joshi, John Thickstun, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2020. An information bottleneck approach for controlling conciseness in rationale extraction. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, pages 1938–1952, Online. Association for Computational Linguistics. Judea Pearl. 2009. Causality. Cambridge university press. Judea Pearl, Madelyn Glymour, and Nicholas P Jewell. 2016. Causal inference in statistics: A primer. John Wiley & Sons. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543, Doha, Qatar. Association for Computational Linguistics. Richard S Sutton, David McAllester, Satinder Singh, and Yishay Mansour. 1999. Policy gradient methods for reinforcement learning with function approximation. Advances in neural information processing systems, 12. Hongning Wang, Yue Lu, and Chengxiang Zhai. 2010. Latent aspect rating analysis on review text data: A rating regression approach. In Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’10, page 783–792, New York, NY, USA. Association for Computing Machinery. Mo Yu, Shiyu Chang, Yang Zhang, and Tommi S Jaakkola. 2019. Rethinking cooperative rationalization: Introspective extraction and complement control. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Mo Yu, Yang Zhang, Shiyu Chang, and Tommi Jaakkola. 2021. Understanding interlocking dynamics of cooperative rationalization. Advances in Neural Information Processing Systems, 34:12822–12835. Hao Yuan, Lei Cai, Xia Hu, Jie Wang, and Shuiwang Ji. 2020. Interpreting image classifiers by generating discrete masks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(4). Linan Yue, Qi Liu, Yichao Du, Yanqing An, Li Wang, and Enhong Chen. 2022. Dare: Disentanglementaugmented rationale extraction. Advances in Neural Information Processing Systems, 35:26603–26617. Linan Yue, Qi Liu, Li Wang, Yanqing An, Yichao Du, and Zhenya Huang. 2023. Interventional rationalization. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 11404–11418, Singapore. Association for Computational Linguistics. Wenbo Zhang, Tong Wu, Yunlong Wang, Yong Cai, and Hengrui Cai. 2023. Towards trustworthy explanation: on causal rationalization. In Proceedings of the 40th International Conference on Machine Learning. JMLR.org. 516 A Proof of Lemma 1 Given random variables X, Z, Y , and A, where A is drawn from the distribution of X. According to Section 2, to obtain a good predictor, we have min θg,θp H(Y, ˆY ) = min θg,θp H(Y,fP (Z)), (15) where Z = fG(X). It means that we need to minimize H(Y,Z) (Liu et al., 2023b), i.e., to reduce more uncertainty and indicate the label Y . We assume that exist variable A could make to reduce the uncertainty of learning Y , then our goal is to make H(Y,A) ≤H(Y,Z). According to the mutual information formula, we can obtain: H(Y ) −H(Y,A) ≥H(Y ) −H(Y, Z), (16) so, I(Y,A) ≥I(Y,Z). (17) Next, since we have X = {Z,X/Z} where X/Z denotes the text derived from X and unrelated to the rationale, so we can obtain mutual information between X and Y , I(Y ;X) = I(Y ;{Z,X/Z}) = I(Y ;Z) + I(Y ;X/Z∣Z) (18) According to the non-negativity of mutual information, we have I(Y ;X/Z∣Z) ≥0, so I(Y,X) ≥I(Y,Z) (19) Further, we denote I(Y,X) = ε0 ≥ε1 ≥ I(Y,Z) ≥ε2, where ε1 and ε2 indicate the upper and lower bounds of I(Y,Z), respectively. Therefore, we can obtain that when A = X,the equation I(Y,A) = ε0 ≥ε1 ≥I(Y,Z) is satisfied. That is to say, a solution for A exists, and X is a solution of A. The proof of Lemma 1 is completed. B Experiment Details B.1 Baselines We compare AGR with the following baselines: RNP (2016), a original RNP sampling method. HardKuma (2019), a kumaraswamy-distributionbased sampling method. CAR (2019), a game theoretic-based approach to class-dependent rationalization. Information Bottleneck (IB) (2020), a model utilizing IB objective for balancing performance and rationale length. INVRAT (2020), a method that introduces an environment-agnostic predictor. Datasets Train Dev Annotation Pos Neg Pos Neg Pos Neg BeerAdvocate Appearance 202385 12897 28488 1318 923 13 Aroma 172299 30564 24494 3396 848 29 Palate 176038 27639 24837 3203 785 20 HotelReview Location 7236 7236 906 906 104 96 Service 50742 50742 6344 6344 101 99 Cleanliness 75049 75049 9382 9382 99 101 Table 7: Statistics of datasets used in this paper. DMR (2021), which proposes a teacher-student distillation framework to align input distribution. A2R (2021), a method that introducing a soft rationale to predictor. DARE (2022), which introduces a guider into predictor to encapsulate more information from the input. FR (2022), a method using a unified encoder for generator and predictor. Inter-RAT (2023), which develops an interventional rationalization to discover the causal rationales. MGR (2023b), a method leveraging multiple generators to select rationales. B.2 Datasets Following previous research (Huang et al., 2021; Yue et al., 2023; Liu et al., 2023b), we obtain BeerAdvocate and HotelReview datasets. BeerAdvocate (McAuley et al., 2012) and HotelReview (Wang et al., 2010) are publicly available from existing work. As shown in Table 7, the specific splitting details of the two datasets are presented. B.3 Implementation To fairly compare with previous works and validate the effectiveness of the approach proposed, we utilize the 100-dimension Glove (Pennington et al., 2014) as the word embedding and the 200dimension GRUs (Cho et al., 2014) encoder to build the generator fG(·) in the AGR architecture. Further generator fG(·) follows Equation 1 for cooperative optimization with predictor fP (·). Meanwhile, we construct the policy network qϕ(·) to collaborate with the generator fG(·) and predictor fP (·) to learn candidate actions in different training states, including the representation learning of action candidates and the sampling of actions. We use Adam (Kingma and Ba, 2015) as the optimizer. C Additional Examples As shown in Table 8, we provide more examples of selected rationale from the Beer-Aroma and HotelLocation two aspects, where their sparsity is set to be about 20% and 10%, respectively. 517 Table 8: Examples of generated rationales. Human-annotated rationales are underlined. Rationales from three models are highlighted in blue, respectively. FR (2022) MGR (2023b) AGR (Ours) Aspect: Beer-Aroma Aspect: Beer-Aroma Aspect: Beer-Aroma Label: Positive, Pred: Positive Label: Positive, Pred: Positive Label: Positive, Pred: Positive Text: had this at bocktown with wvbeergeek and jasonm , came in a 750ml caged and corked the corked banged out of sight as soon as the cage was undone .seved into a tulip glass between the 3 of us hazy , deep copper , mahagony , hard to get a really good look at the color at bocktown . off white head hard to pour without a glass full of fluffy everlasting head . left lot of thick webbing all over the inside of the glass , sticky looking . great aroma ca n’t seem to keep it away from the nose . sweet , dark , tart fruit notes , some sour cherry , earthy , spicy , with hints of currants , clove , allspice also nutty , with some belgium yeast . lots of sweet booziness from the start , vinious , dark fruityness with plum notes . the fruittyness was remisent of dried fruit.lots of spicyness lots of clove.also nutty and earthy . finished clean , spicy and very sugary . syrupy , big full mouthfeel , smooth and very creamy with lots of juicyness . a beer to sip , but very enjoyable , wish i had the whole bottle to drink would be no problem . a must try beer if you like this style . seems like a beer that would age very well . Text: had this at bocktown with wvbeergeek and jasonm , came in a 750ml caged and corked the corked banged out of sight as soon as the cage was undone . seved into a tulip glass between the 3 of us hazy , deep copper , mahagony , hard to get a really good look at the color at bocktown . off white head hard to pour without a glass full of fluffy everlasting head . left lot of thick webbing all over the inside of the glass , sticky looking . great aroma ca n’t seem to keep it away from the nose . sweet , dark , tart fruit notes , some sour cherry , earthy , spicy , with hints of currants , clove , allspice also nutty , with some belgium yeast . lots of sweet booziness from the start , vinious , dark fruityness with plum notes . the fruittyness was remisent of dried fruit.lots of spicyness lots of clove.also nutty and earthy . finished clean , spicy and very sugary . syrupy , big full mouthfeel , smooth and very creamy with lots of juicyness . a beer to sip , but very enjoyable , wish i had the whole bottle to drink would be no problem . a must try beer if you like this style . seems like a beer that would age very well . Text: had this at bocktown with wvbeergeek and jasonm , came in a 750ml caged and corked the corked banged out of sight as soon as the cage was undone . .seved into a tulip glass between the 3 of us hazy , deep copper , mahagony , hard to get a really good look at the color at bocktown . off white head hard to pour without a glass full of fluffy everlasting head . left lot of thick webbing all over the inside of the glass , sticky looking . great aroma ca n’t seem to keep it away from the nose . sweet , dark , tart fruit notes , some sour cherry , earthy , spicy , with hints of currants , clove , allspice also nutty , with some belgium yeast . lots of sweet booziness from the start , vinious , dark fruityness with plum notes . the fruittyness was remisent of dried fruit.lots of spicyness lots of clove.also nutty and earthy . finished clean , spicy and very sugary . syrupy , big full mouthfeel , smooth and very creamy with lots of juicyness . a beer to sip , but very enjoyable , wish i had the whole bottle to drink would be no problem . a must try beer if you like this style . seems like a beer that would age very well . Aspect: Hotel-Location Aspect: Hotel-Location Aspect: Hotel-Location Label: Negative, Pred: Negative Label: Negative, Pred: Negative Label: Negative, Pred: Negative Text: we stayed at the dona palace for 3 nights and while the location is central , it is also more crowded and noisy . the windows of the room we stayed in did not have adequate sound proofing , noise from the canal and outside would wake us up early in the morning . the breakfast was a nice bonus though , the two waitresses serving the room were always gracious and helpful . the front desk personnel however were rude and abrupt , so that was n’t pleasant to deal with . the rooms are dated and had a musty smell . the bed was uncomfortable , blankets were rough , and the shower drain did not work very well . overall , i probably wound not stay here again . Text: we stayed at the dona palace for 3 nights and while the location is central , it is also more crowded and noisy . the windows of the room we stayed in did not have adequate sound proofing , noise from the canal and outside would wake us up early in the morning . the breakfast was a nice bonus though , the two waitresses serving the room were always gracious and helpful . the front desk personnel however were rude and abrupt , so that was n’t pleasant to deal with . the rooms are dated and had a musty smell . the bed was uncomfortable , blankets were rough , and the shower drain did not work very well . overall , i probably wound not stay here again . Text: we stayed at the dona palace for 3 nights and while the location is central , it is also more crowded and noisy . the windows of the room we stayed in did not have adequate sound proofing , noise from the canal and outside would wake us up early in the morning . the breakfast was a nice bonus though , the two waitresses serving the room were always gracious and helpful . the front desk personnel however were rude and abrupt , so that was n’t pleasant to deal with . the rooms are dated and had a musty smell . the bed was uncomfortable , blankets were rough , and the shower drain did not work very well . overall , i probably wound not stay here again . 518
2024_short
47
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 519–529 August 11-16, 2024 ©2024 Association for Computational Linguistics Shoulders of Giants: A Look at the Degree and Utility of Openness in NLP Research Surangika Ranathunga1, Nisansa de Silva2, Dilith Jayakody2, Aloka Fernando2 1School of Mathematical and Computational Sciences, Massey University, New Zealand [email protected] 2Dept. of Computer Science & Engineering, University of Moratuwa,10400, Sri Lanka {NisansaDdS,dilith.18,alokaf}@cse.mrt.ac.lk Abstract We analysed a sample of NLP research papers archived in ACL Anthology as an attempt to quantify the degree of openness and the benefit of such an open culture in the NLP community. We observe that papers published in different NLP venues show different patterns related to artefact reuse. We also note that more than 30% of the papers we analysed do not release their artefacts publicly, despite promising to do so. Further, we observe a wide languagewise disparity in publicly available NLP-related artefacts. 1 Introduction The advancement of the Computer Science research field heavily depends on publicly available code, software, and tools. Its sub-fields Machine Learning and Natural Language Processing (NLP) have the additional requirement of datasets - to train and evaluate computational models. Lack of access to these research artefacts has been identified as a major reason for the difficulty in reproducing works of others (Pineau et al., 2021). The data requirement is particularly challenging in NLP - a dataset available for one language usually cannot be used in the context of another language1. Therefore, the NLP community is highly encouraged to make their research artefacts publicly available. However, as far as we are aware, there is no quantifiable evidence on (1) the degree of openness in the NLP community or (2) the benefit of openness to the community. Since “what we do not measure, we cannot improve” (Rungta et al., 2022), in this paper, we quantify both these aspects. To this end, we semi-automatically analyse a sample of NLP research papers published in ACL Anthology (AA) and corpora/ Language Models 1Other than in techniques such as multi-tasking and intermediate-task fine-tuning. (LMs) released in Hugging Face2, and answer the following questions: 1. To what degree has the NLP research community been able to reuse open-source artefacts (data, code, LMs) in their research? 2. How much has the community freely shared the artefacts produced by their research? To answer the first question, we record the number of papers that reuse the artefacts released by past research. Since there is a language-wise disparity in NLP research (Joshi et al., 2020; Ranathunga and de Silva, 2022), this analysis is conducted while separating low- and high-resource languages. To answer the second question, we record the papers that indicate they would release the newly produced artefacts. We also record whether they have provided a repository URL. We do further analysis to find out whether these repositories have the artefacts they are supposed to have. Finally, we record the number of datasets and LMs available for different language classes on Hugging Face. We observe that papers published in different venues show different patterns in artefact reuse. We also observe that a worrying percentage of papers that produced an artefact have not publicly released those artefacts. To a lesser degree, broken repository links and empty resource repositories were also noted. Finally, it is noted that the languagewise disparity in LM/data availability (Joshi et al., 2020; Ranathunga and de Silva, 2022; Khanuja et al., 2023) is still staggering. 2 Data Extraction We use AA as the research paper repository. While AA is the largest NLP-related paper repository, Ranathunga and de Silva (2022) note that many papers related to low-resource languages also 2https://huggingface.co/ 519 0 1 2 3 4 5 Language Class 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Fraction of papers (a) Data - LREC 0 1 2 3 4 5 Language Class 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Fraction of papers (b) Code - LREC 0 1 2 3 4 5 Language Class 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Fraction of papers (c) LMs - LREC 0 1 2 3 4 5 Language Class 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Fraction of papers (d) Data - Main 0 1 2 3 4 5 Language Class 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Fraction of papers (e) Code - Main 0 1 2 3 4 5 Language Class 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Fraction of papers (f) LMs - Main 0 1 2 3 4 5 Language Class 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Fraction of papers (g) Data - Other 0 1 2 3 4 5 Language Class 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Fraction of papers (h) Code - Other 0 1 2 3 4 5 Language Class 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Fraction of papers (i) LMs - Other Figure 1: Artefact (Data, Code, and LMs) creation, extension, and reuse across PVs. get published in other venues such as IEEE conferences or regional journals. However, the popularly used Google Scholar does not have a free API to extract data, and the coverage of Semantic Scholar is rather poor3. Moreover, some conference and journal publications are hidden behind paywalls. While archives such as arXiv are a possible option, they do not contain the meta data for us to carry out a conference/journal-specific analysis. Considering all these factors, we selected AA to extract papers for our analysis. AA has been the common choice for many research related to diversity analysis in NLP research (Rungta et al., 2022; Blasi et al., 2022; Cains, 2019). When collecting data from AA, we reuse data and code from Ranathunga and de Silva (2022) who in turn had used code and data from Blasi et al. (2022) and Rohatgi (2022) (respectively). However, we had to collect data post 2022 by ourselves. We use the URLs of papers from the ACL Anthology Bibliography to extract the title and abstract of each paper. We then allocate the papers to different languages, following the language list (of 6419 languages) given by Ranathunga and de Silva (2022). For each language name, we check for matches in both the title and abstract and download the matched papers using their respective URLs (where a URL to the PDF is available). Of these, 3For example, the search query "english+nlp" returns 4312 results on Semantic Scholar as opposed to the 495,000 results returned by Google Scholar. 130 languages are ignored due to the high count of false positives caused by matches with existing words and author names4. Next, we convert each paper to its text format. Then we further group these language-wise papers according to language category. The commonly used language category definition that is based on language resources is Joshi et al. (2020) (see Table 4 in Appendix). This definition can be used to categorise languages into six classes, with class 5 being the highest resourced, and class 0 being the least resourced. Joshi et al. (2020) used this definition to classify about 2000 languages. However, this categorisation was conducted in 2020 and it has considered only ELRA5 and LDC6 as data repositories. Ranathunga and de Silva (2022) showed that these repositories have very limited coverage for low-resource languages. They reused Joshi et al. (2020)’s language category definition and categorised 6419 languages considering the Hugging Face data repository in addition to ELRA and LDC. In this research, we use this newer language categorisation. 3 Analysis 3.1 The degree of artefact reuse in NLP research We extract a paper sample of 355 (papers published between 2015-2023) from the dataset downloaded above. To analyse the effect of the publishing venue, these papers are then separated into three categories (henceforth referred to as PV categories). These categories are selected based on the suggestion of Ranathunga and de Silva (2022). • Main: Main ACL conferences/journals where NLP researchers publish (Full list in Appendix B). • LREC (Language Resources and Evaluation Conference). It was given a separate category as it is a venue specifically focusing on language resources. • Other - Everything else. Usually, these PVs refer to shared tasks, workshops and regional conferences such as RANLP and ICON. 4Examples of languages that were ignored include: Are, As, Even, One, So, To, Apache, U, Bit, She. 5http://www.elra.info/en/ 6https://www.ldc.upenn.edu/ 520 Promised to release 108 LREC 39 Data only 10 Code only 20 Data and Code only 3 Data, Code, and Model 36 25 14 8 2 16 4 3 (a) LREC Promised to release 113 Main 17 Data only 16 Code only 1 Model only 17 Data and Code only 12 Data, Code, and Model 50 15 2 13 3 1 16 1 12 (b) Main Promised to release 107 Other 13 Data only 9 Code only 3 Model only 3 Data and Code only 2 Code and Model only 1 Data and Model only 4 Data, Code, and Model 72 7 6 7 2 2 1 3 1 1 1 4 (c) Other Figure 2: Artefact releasing promise vs artefact link availability across PVs. Green - Artefact Released, Red Claimed to release the relevant artefact but no link given, Purple - No promise was given to release any artefact. 0 1 2 3 4 5 Language Class 0.0 0.2 0.4 0.6 0.8 1.0 Fraction of papers (a) Availability of Data 0 1 2 3 4 5 Language Class 0.0 0.2 0.4 0.6 0.8 1.0 Fraction of papers (b) Availability of Code 0 1 2 3 4 5 Language Class 0.0 0.2 0.4 0.6 0.8 1.0 Fraction of papers (c) Availability of LM(s) 0 1 2 3 4 5 Language Class 0.0 0.2 0.4 0.6 0.8 1.0 Fraction of papers (d) Availability of Tool(s) Figure 3: Analysis on artefact release. Artefact Status Data Used dataset from some previous research Extended an existing dataset Used dataset from some previous research but created new data as well Introduce new dataset Data not needed Cannot determine Table 1: Possible options for use, and reuse of data For each PV, the resulting paper sample has 20 papers per language class7. We manually read each of these papers to find out whether they created/used data, code8 and/or LMs9. The possible options for data-related mentions in a paper are shown in Table 1. Similar options are considered for code and LMs (see Table 5 in Appendix). Note that the first three entries in Tables 1 and 5 suggest the reuse of artefacts from previous research in some manner. Out of the 355 papers we analysed, 98.9% has reused some form of artefact from previous research. Further language class-wise analysis on this is shown in Figure 1 (In the Appendix we have a larger version in Figure 5 as well as a chronological breakdown of the data in Figure 6). 7Except for language class 1 in Main PV, where we could find only 15 papers. 8We considered NLP related tools/libraries/code repositories such as NLTK and Huggingface libraries but did not consider generic libraries such as Pandas. 9By LMs, we refer to LMs starting from Word2Vec, GloVe and FastText, coming to currently used Large LMs Other PV category is the highest in reusing data as-it-is. This is not surprising, as this category has many papers referring to shared tasks. Main category also uses existing data as-it-is to a higher degree, but there is some emphasis on data extension as well. LREC, due to its focus on language resources, sees more papers introducing new datasets or extending existing datasets than those that reuse existing data as-it-is. The Main category sees the highest level of code reuse to introduce new implementations - most papers extend code from already existing research. This has to be due to the highly competitive nature of PVs in this category, where reviewers emphasise technical novelty. Other PV category is high in reusing code as well, but it has a relatively higher portion of papers using existing code as-it-is. As mentioned earlier, since most LREC papers focus on dataset release, they seem not to have paid attention to the use of state-of-the-art solutions involving LMs. In contrast, papers from Main heavily emphasise using LMs, and this PV category seems to be the venue to introduce new LMs. Overall, the most reused artefact is code, spanning from early APIs/toolkits such as NLTK (Bird et al., 2009) and Kaldi (Povey et al., 2011) to modern-day Hugging Face libraries. 521 3.2 Percentage of papers that promise to share the newly created artefacts Next, we focus on papers that create new artefacts (created from scratch or extended existing artefacts) and report the percentage of papers that promise to share the newly created artefacts. If they do promise, then we check whether they have provided the URL of the public repository containing the artefact(s). This analysis was done in a semi-automated manner on the same 355 paper sample as before, using a keyword-based method to filter papers. To identify keyword matches, we first replace all non-letter characters of the paper full text with spaces and convert the text to lowercase. To match keywords containing a single term, we split the text by the space character and look for exact matches between the keyword and the words in the resulting array. To match keywords containing multiple terms, we do a direct search over the text (without splitting). We make this distinction between single-word and multi-word keywords due to the false positives caused by matching substrings (for example, "public" would match a text that contains the word "republic"). For each matched keyword, we extract the paragraph in which it was identified and create text files using these paragraphs. These filtered text files assist in identifying the claims of the papers during the manual analysis. The keywords consist of words that indicate availability. The complete set of keywords is as follows: release, released, public, publicly, github, gitlab, huggingface co, osf io, open source, accessible. Note that the non-letter characters of the keywords are also replaced by spaces to facilitate the matching. Also, note that we do not include keywords such as available and http due to the high number of false positives that they cause. In order to quantify the impact of avoiding these keywords, we look at the false omission rate of a sample of 100 papers. We randomly select 100 papers from the data set and run them through our keyword-based search algorithm. This predicted 69 papers to contain promises of releasing artefacts. We then manually checked the remaining 31 papers in full, to see whether they promised the release of an artefact. Of these 31 papers, one paper has promised and shared the data and code. This results in a false omission rate of approximately 0.03. We manually read the filtered papers to further verify whether a paper has produced an artefact, and if so, whether it has promised to release that artefact. Results are shown in Figure 2. Interestingly, out of the Main PV papers that produced some new artefacts, 44% have not mentioned whether that artefact will be released. In the Other category, this value is 67%. LREC has the lowest percentage at 33%. However, in LREC, 36% of the papers that have promised to release data have not given a repository URL. 3.3 Further Analysis into Artefact Availability In the above analysis, we can only determine whether a paper mentions that research artefacts are publicly released, and if so, a link to a repository is given. However, that analysis does not tell us the type of these repositories, whether they are accessible, or whether they contain the artefact. Therefore, we carry out a second, more detailed analysis. To get an insight into more recent trends, we consider papers published between 2020-2023. Following the same semi-automated approach discussed above, we extract a list of papers that promised to release at least one of the following artefacts: data, source code, LM, or tool. Then the extracted papers are grouped according to the language class. Classes 5, 4, 3 and 2 have a considerable number of papers, so we sampled 75 from each class. Class 1 and 0 only have 71 and 59 papers, respectively, thus all of those papers were included in our analysis. Altogether, this sample contains 430 papers. The aggregated result is shown in Figure 3. Be reminded that in this analysis, we omitted the papers that do not refer to an artefact type or those that do not promise to release the artefact they produced. A ‘No’ is marked if a link was not given, a given link is not working, or the repository corresponding to the link does not have the promised artefact (we clicked through and followed all the links mentioned in the papers). We notice that a considerable portion of papers that promised to release data have ‘dead-ends’ when trying to locate it. This count is higher in low-resource languages. Most tools are hosted on personal or institutional websites, and a portion seems to have fallen out of maintenance in the intervening years. The ‘dead-end’ problem exists to a lesser degree concerning code availability. However, even for code, class 0 has a noticeable number of ‘dead-ends’. Overall, most of the links to code are active and have the artefact, followed by those that promise to release an LM. 522 We also record the common repositories used by NLP researchers and provide a summary in Table 2 (A breakdown of the same data across language classes is available in Figure 7 in the Appendix). According to this, GitHub seems to be the most favourite option to release data and code. Some research has considered Zenodo and Hugging Face for data release10. In contrast, Hugging Face seems to be the favourite choice for LM releases. Most of the tools have their own unique web link, hence the ‘other’ category is the highest for this type. Repository Code Data LMs Tools Total GitHub 153 188 17 12 370 Hugging Face 0 6 11 2 19 Zenodo 1 10 1 0 12 Google Drive 0 5 3 1 9 Bitbucket 4 0 0 1 5 GitLab 3 2 0 0 5 Codeberg 1 1 0 0 2 Dropbox 0 1 0 0 1 Mendeley 0 1 0 0 1 Other 5 58 6 44 113 Total 167 272 38 60 537 Table 2: Repository usage across all classes 3.4 Analysis Based on NLP Tasks Next, we carry out an analysis based on NLP tasks, to understand whether artefact release has any relationship to the type of NLP task11. This analysis was conducted using the paper sample used in Section 3.3. Table 6 in the Appendix shows the raw counts. Translation is the NLP task12 that has the highest number of artefact releases (this artefact is usually parallel data), followed by morphological analyzer and Automatic Speech Recognition (ASR). In particular, having morphological analysis as the prevalent NLP domain seems to be common for extremely low-resource languages. This is not surprising - these languages have never had such linguistic resources, and such research is essential in understanding their linguistic properties. The high amount of ASR-related artefacts could be due to the existence of languages that do not have a writing system13. 10This result tallies with the survey results published by Ranathunga and de Silva (2022) to a good extent. 11Initial categorisation of tasks come from Hugging Face task list and a survey paper on NLP research (de Silva, 2019) 12As shown in Table 6, Corpora has the highest raw counts but is not an NLP Task per se. 13Eberhard et al. (2024) notes that around 41% of the languages they list may be unwritten. 0 1 2 3 4 5 Language Class 0 2 4 Log Count (a) HF Dataset Counts 0 1 2 3 4 5 Language Class 0 2 4 Log Count (b) HF LM Counts Figure 4: Number of resources for the language classes on Hugging Face (HF). 3.5 Dataset and LM Availability Our final analysis is based on the datasets and LM counts reported in Hugging Face14, which is the fastest-growing repository for NLP-related artefacts. Figure 4 shows15 the language class-wise distribution of data and LMs. Further, Table 3 shows relevant numerical values, which demonstrates the language class-wise disparity. Artefact type Median of Language Class 0 1 2 3 4 5 Data set counts 0.0 12.0 53.0 147.5 246.0 657.0 LM counts 0.0 3.0 171.5 443.5 881.0 2601.0 Table 3: Hugging Face Resource Counts The disparity between different language classes is evident from the medians, despite some outliers. Most notably, out of the 6135 languages in class 0, most have no data or LMs, therefore the handful of languages that have some data/LM have become outliers. The correlation between the class-wise LM and data availability is evident - a Pearson correlation value of 0.9972 is reported between the data and LM counts on languages listed in Hugging Face. 4 Conclusion We hope our findings would help the NLP community to better appreciate the benefit of openness and to commit to releasing the artefacts they produce. We further hope these statistics will be useful to ACL in making informed decisions. It would be interesting to run this same experiment 5 or 10 years down the line, to see if there are any changes in releasing and reusing artefacts. In hopes to assist in such efforts, our code is publicly released16. 14https://huggingface.co/languages 15A larger version is available as Figure 8 in the Appendix. 16https://bit.ly/ACL2024ShouldersOfGiants 523 5 Limitations We considered only a fraction of the papers published in AA. Our keyword-based paper filtering mechanism might have missed some papers that have made their artefacts available. If a paper does not mention the language name in its abstract, our algorithm does not pick it up. Thus we highly encourage the community to adhere to ‘Bender Rule’ (Bender, 2019). If a research published their artefact without mentioning that in their paper, or if the link to the artefact was included in a different version of the paper (e.g. ArXiv), such are missed. We might have missed some information on artefacts while manually reading hundreds of research papers, which might have impacted the statistics we present. When checking if a repository link is live, we clicked on that link only once. There could have been instances where the link was momentarily down. In certain instances, we noticed that a URL is not working due to a change in the web repository directory structure. However, we did not try to manually figure out the correct link. We consider an artefact to be available in a repository if we note the availability of files (e.g. python files in a code base) inside the repository. We cannot guarantee the repository has all the artefacts the paper promised (e.g. all the promised data files or whether the given code is working). 6 Ethics Statement We only used the AA paper repository, which is freely available for research. Our implementation is based on publicly available code. We do not release the paper-wise information we recorded, nor do we re-publish the papers we downloaded from AA. References Emily Bender. 2019. The #Benderrule: On naming the languages we study and why it matters. The Gradient, 14. Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural language processing with Python: analyzing text with the natural language toolkit. " O’Reilly Media, Inc.". Damian Blasi, Antonios Anastasopoulos, and Graham Neubig. 2022. Systematic inequalities in language technology performance across the world’s languages. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5486–5505, Dublin, Ireland. Association for Computational Linguistics. Andrew Cains. 2019. The geographic diversity of NLP conferences. MAREK REI. Nisansa de Silva. 2019. Survey on publicly available sinhala natural language processing tools and research. arXiv preprint arXiv:1906.02358. David M. Eberhard, Gary F. Simons, and Charles D. Fennig. 2024. Ethnologue: How many languages in the world are unwritten? Dallas, Texas: SIL International. Pratik Joshi, Sebastin Santy, Amar Budhiraja, Kalika Bali, and Monojit Choudhury. 2020. The state and fate of linguistic diversity and inclusion in the NLP world. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6282–6293, Online. Association for Computational Linguistics. Simran Khanuja, Sebastian Ruder, and Partha Talukdar. 2023. Evaluating the diversity, equity, and inclusion of NLP technology: A case study for Indian languages. In Findings of the Association for Computational Linguistics: EACL 2023, pages 1763–1777, Dubrovnik, Croatia. Association for Computational Linguistics. Joelle Pineau, Philippe Vincent-Lamarre, Koustuv Sinha, Vincent Larivière, Alina Beygelzimer, Florence d’Alché Buc, Emily Fox, and Hugo Larochelle. 2021. Improving Reproducibility in Machine Learning Research (A Report from the NeurIPS 2019 Reproducibility Program). Journal of Machine Learning Research, 22(164):1–20. Daniel Povey, Arnab Ghoshal, Gilles Boulianne, Lukas Burget, Ondrej Glembek, Nagendra Goel, Mirko Hannemann, Petr Motlicek, Yanmin Qian, Petr Schwarz, et al. 2011. The Kaldi Speech Recognition Toolkit. In IEEE 2011 workshop on automatic speech recognition and understanding, CONF. IEEE Signal Processing Society. Surangika Ranathunga and Nisansa de Silva. 2022. Some languages are more equal than others: Probing deeper into the linguistic disparity in the NLP world. In Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 823–848, Online only. Association for Computational Linguistics. Shaurya Rohatgi. 2022. ACL Anthology Corpus with Full Text. GitHub. Mukund Rungta, Janvijay Singh, Saif M. Mohammad, and Diyi Yang. 2022. Geographic citation gaps in NLP research. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 1371–1383, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. 524 A Language Category Definition Class Description Language Count Examples 0 Have exceptionally limited resources, and have rarely been considered in language technologies. 2191 Slovene Sinhala 1 Have some unlabelled data; however, collecting labelled data is challenging. 222 Nepali Telugu 2 A small set of labelled datasets has been collected, and language support communities are there to support the language. 19 Zulu Irish 3 Has a strong web presence, and a cultural community that backs it. Have highly benefited from unsupervised pre-training. 28 Afrikaans Urdu 4 Have a large amount of unlabelled data, and lesser, but still a significant amount of labelled data have dedicated NLP communities researching these languages. 18 Russian Ukrainian 5 Have a dominant online presence. There have been massive investments in the development of resources and technologies. 7 English Japanese Table 4: Language Category definition by Joshi et al. (2020) B Main Conference and Journal List (1) Annual Meeting of the Association for Computational Linguistics, (2) North American Chapter of the Association for Computational Linguistics, (3) European Chapter of the Association for Computational Linguistics, (4) Empirical Methods in Natural Language Processing, (5) International Conference on Computational Linguistics, (6) Conference on Computational Natural Language Learning (7) International Workshop on Semantic Evaluation, (8) Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics, and (9) Conference on Computational Natural Language Learning. In addition, the following journals are considered: (1) Transactions of the Association for Computational Linguistics and (2) Computational Linguistics. C Artefact Annotation Scheme All the annotators involved in this study are coauthors of the paper. In Table 5 we show the annotation scheme we used. Artefact Status Data Used dataset from some previous research Extended an existing dataset Used dataset from some previous research but created new data as well Introduce new dataset Data not needed Cannot determine Code Used an implementation from some previous research Extended an existing implementation (e.g. toolkit, library) Used an implementation from some previous research but implemented part of the solution from scratch Provided their implementation Code not needed Cannot determine LM Used an existing LM Extended an existing LM Used an existing LM but trained their LM(s) as well Trained their own LM LM not needed Cannot determine Table 5: Possible options for Artefacts D Code and Data Reuse Code and data from Ranathunga and de Silva (2022) and Rohatgi (2022) are released under CC BY-NC 4.0 licence. The authors obtained permission from Blasi et al. (2022) to use the code on their public repository17. E NLP Task Breakdown Across Language Classes We show the NLP task breakdown across the five language classes in Table 6. F Code and Data Intended Use All the code use was consistent with their intended use as specified on the relevant research publications (Ranathunga and de Silva, 2022; Blasi et al., 2022) and the readme files on the repositories (Rohatgi, 2022). G Artefact Creation, Extension, and Reuse In Figure 5 we have the larger version of the Figure 1 for improved readability. Further, given that the information in Figure 5 is presented after aggregating across time but separated into language classes, we also include a set of cumulative percentage graphs in Figure 6 where we show the same data aggregated across the language classes but spread out over the publication years to better 17https://github.com/neubig/globalutility 525 NLP Task Language Class Total 0 1 2 3 4 5 Corpora 19 22 11 11 11 29 103 Translation 10 12 8 6 10 6 52 Morphological Analyzer 11 8 2 3 1 0 25 Automatic Speech Recognition (ASR) 5 1 10 4 3 0 23 Language Model 1 2 10 1 2 5 21 Parsers 4 5 3 1 3 4 20 Data Sets 6 1 5 3 4 0 19 Dictionary/Lexicon 6 4 1 1 3 3 18 Named-Entity Recognition (NER) 1 0 3 5 7 2 18 Text Classification 1 2 1 2 1 9 16 Part of Speech (PoS) 1 6 3 2 2 1 15 Cross-Lingual Applications 2 1 3 6 2 0 14 Text Generation 0 0 0 0 6 4 10 Hate Speech Detection 0 0 2 6 1 0 9 Misinformation Detection 0 0 0 4 3 1 8 Wordnets/Ontology/Taxonomy 3 1 0 2 0 1 7 Discourse Analysis 0 2 1 1 2 1 7 Question and Answer (QnA) 0 1 2 1 3 0 7 NLP Tools 1 4 1 0 0 0 6 Semantic (Other) 0 0 0 0 1 5 6 Tokenizer 0 0 1 2 0 2 5 Semantic Similarity 0 0 0 3 1 1 5 Multiple Tasks 0 0 1 3 0 0 4 Spelling and Grammar 0 1 1 0 1 1 4 Summarizing 0 0 0 3 1 0 4 Phonological Analyzer 0 1 0 2 1 0 4 Sentiment Analyzer 0 0 1 1 2 0 4 Text-to-Speech 0 2 1 0 0 0 3 Transliteration 0 0 2 0 1 0 3 Lexical Inference 0 0 0 0 3 0 3 Coreference Resolution 0 0 0 0 3 0 3 Information Extraction 0 0 1 1 0 0 2 Bilingual Lexicon Induction (BLI) 0 1 0 0 1 0 2 Optical Character Recognition (OCR) 0 1 0 0 0 1 2 Language Identification (LangID) 0 0 1 0 0 0 1 Intent Detection 1 0 0 0 0 0 1 News/Social Media Recommendation 0 0 0 0 1 0 1 Text Classification 0 0 1 0 0 0 1 Stemming 0 0 0 0 1 0 1 Total 72 78 76 74 81 76 457 Table 6: NLP Tasks Conducted 526 0 1 2 3 4 5 Language Class 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Fraction of papers (a) Usage of Data - LREC 0 1 2 3 4 5 Language Class 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Fraction of papers (b) Usage of Code - LREC 0 1 2 3 4 5 Language Class 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Fraction of papers (c) Usage of LMs - LREC 0 1 2 3 4 5 Language Class 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Fraction of papers (d) Usage of Data - Main 0 1 2 3 4 5 Language Class 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Fraction of papers (e) Usage of Code - Main 0 1 2 3 4 5 Language Class 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Fraction of papers (f) Usage of LMs - Main 0 1 2 3 4 5 Language Class 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Fraction of papers (g) Usage of Data - Other 0 1 2 3 4 5 Language Class 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Fraction of papers (h) Usage of Code - Other 0 1 2 3 4 5 Language Class 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Fraction of papers (i) Usage of LMs - Other Figure 5: Artefact (Data, Code, LM) creation, extension, and reuse across ACL venues - Aggregated analysis show the changing trends in resource availability and reuse. Unsurprisingly, as per Figures 6b, 6e, and 6h, we can see that code is being re-used the most across all venues. LREC (Figure 6a) stands out among the data graphs (Figures 6d and 6g) for consistently being a source of new data sets rather than a venue where existing data is reused. We see that LMs, had a reasonable presence in the main venues (Figure 6f) even before our analysis period while in the other venues (Figure 6i), the trend stars just at the beginning of our considered time period. LREC on the other hand, seems to be late to be considered for LMs as it is only in 2018, that we see them becoming noticeable in Figure 6c. H Artefact Hosting Table 2 shows a summary of where NLP researchers have published their data, based on the information mentioned in the research papers. According to this, GitHub seems to be the most favourite option to release data and code. Some research has considered Zenodo and Hugging Face for data release18. In contrast, Hugging Face seems to be the favourite choice for LM releases. Most of the tools have their own unique web link, hence the ‘other’ category is the highest for this type. In Figure 7 we show a more detailed view of the artefacts being hosted online; previously discussed in Table 2 as a summary. Here it is possible to note the variations between the language classes. For example, the interesting observation of Figure 7c is that it can be noted that while researchers in all other listed language classes use github to host their trained LMs, the researchers of Class 4 18This result tallies with the survey results published by Ranathunga and de Silva (2022) to a good extent. 527 2016 2017 2018 2019 2020 2021 2022 Year 0 20 40 60 80 100 % of Papers (a) Usage of Data - LREC 2016 2017 2018 2019 2020 2021 2022 Year 0 20 40 60 80 100 % of Papers (b) Usage of Code - LREC 2016 2017 2018 2019 2020 2021 2022 Year 0 20 40 60 80 100 % of Papers (c) Usage of LMs - LREC 2016 2018 2020 2022 Year 0 20 40 60 80 100 % of Papers (d) Usage of Data - Main 2016 2018 2020 2022 Year 0 20 40 60 80 100 % of Papers (e) Usage of Code - Main 2016 2018 2020 2022 Year 0 20 40 60 80 100 % of Papers (f) Usage of LMs - Main 2016 2018 2020 2022 Year 0 20 40 60 80 100 % of Papers (g) Usage of Data - Other 2016 2018 2020 2022 Year 0 20 40 60 80 100 % of Papers (h) Usage of Code - Other 2016 2018 2020 2022 Year 0 20 40 60 80 100 % of Papers (i) Usage of LMs - Other Figure 6: Cumulative percentage graphs - Artefact (Data, Code, LM) creation, extension, and reuse across ACL venues. - Chronological analysis. 0.0 0.1 0.2 0.3 0.4 0.5 Class Fraction 0 1 2 3 4 5 Classes (a) Data 0.0 0.1 0.2 0.3 0.4 0.5 Class Fraction 0 1 2 3 4 5 Classes (b) Code 0.0 0.1 0.2 0.3 0.4 0.5 Class Fraction 0 1 2 3 4 5 Classes (c) LMs 0.0 0.1 0.2 0.3 0.4 0.5 Class Fraction 0 1 2 3 4 5 Classes (d) Tools Figure 7: Artefact (Data, Code, LM, Tools) hosting locations. languages opt for Hugging Face. Conversely, from Figure 7d, it can be noted that in Class 0 languages, tools are generally not hosted on github. A curious observation in Figure 7c is that for some reason, Class 1 languages do not select Hugging Face as a clear contender to host their language models, something that all other language classes seem to do. The overwhelming prevalence of the other option in Figure 7d can be explained by the fact that most tools tend to be hosted on dedicated websites. Even when the actual site is hosted on a service such as github, they are masked with shorter and more market-friendly custom URLs. I Hugging Face Resources In Figure 8 we show the resources available on Hugging Face for the 5 language classes. This is a larger version of the Figure 4 for improved readability. Note especially how the entire interquartile range of class 0 is at zero due to the dearth of resources existing for the languages in that class. Thus a language in class 0 with any amount of resources gets registered as an outlier. On the opposite end of the spectrum, note class 5 with only 528 0 1 2 3 4 5 Language Class 0 2 4 Log Count (a) HF Dataset Counts 0 1 2 3 4 5 Language Class 0 2 4 Log Count (b) HF LM Counts Figure 8: Number of Hugging Face (HF) resources for the language classes. 7 languages in the set even after the reclassification by Ranathunga and de Silva (2022). Despite that, English still manages to be an outlier with its exceptional resource availability. From Figure 8 and Table 3, it can be observed a considerable jump between the median values when comparing adjacent classes. This may be taken as both: 1) an indication of the visible difference in the resource availability of the language classes, 2) A reaffirmation of the soundness of the class borders proposed by by Ranathunga and de Silva (2022) as the distinct medians can be taken as a quality of classes which are internally cohesive and mutually separate. 529
2024_short
48
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 530–546 August 11-16, 2024 ©2024 Association for Computational Linguistics The Probabilities Also Matter: A More Faithful Metric for Faithfulness of Free-Text Explanations in Large Language Models Noah Y. Siegel Google DeepMind, University College London [email protected] Oana-Maria Camburu University College London Nicolas Heess Google DeepMind Maria Perez-Ortiz University College London Abstract In order to oversee advanced AI systems, it is important to understand their underlying decision-making process. When prompted, large language models (LLMs) can provide natural language explanations or reasoning traces that sound plausible and receive high ratings from human annotators. However, it is unclear to what extent these explanations are faithful, i.e., truly capture the factors responsible for the model’s predictions. In this work, we introduce Correlational Explanatory Faithfulness (CEF), a metric that can be used in faithfulness tests based on input interventions. Previous metrics used in such tests take into account only binary changes in the predictions. Our metric accounts for the total shift in the model’s predicted label distribution, more accurately reflecting the explanations’ faithfulness. We then introduce the Correlational Counterfactual Test (CCT) by instantiating CEF on the Counterfactual Test (CT) from Atanasova et al. (2023). We evaluate the faithfulness of free-text explanations generated by few-shot-prompted LLMs from the Llama2 family on three NLP tasks. We find that our metric measures aspects of faithfulness which the CT misses. 1 Introduction In many applications of ML systems it is important to understand why the system came to a particular answer (Rudin, 2018), and the field of explainable AI attempts to provide this understanding. However, relying on subjective human assessment of explanations can be misleading: humans sometimes prefer interpretability techniques that provide little information about model predictions (Adebayo et al., 2018). It is therefore important to clearly assess the extent to which explanations inform us about ML systems, both for current high-stakes applications such as medicine and criminal justice (Rudin, 2018), as well as potential scenarios involving highly general systems (Shah et al., 2022; Ngo et al., 2023; Ward et al., 2023). If we can ensure that explanations are faithful to the inner-workings of the models, we could use the explanations as a channel for oversight, scanning them for elements we do not approve of, e.g. racial or gender bias, deception, or power-seeking (Lanham, 2022). We make the following contributions: 1. We argue that in order to be informatively faithful, it is not enough to test whether explanations mention significant factors: we also need to test whether they mention significant factors more often than insignificant ones. 2. We introduce Correlational Explanatory Faithfulness (CEF), a novel faithfulness metric that improves upon prior work by capturing both the degree of impact of input features, as well as the difference in explanation mention frequency between impactful and non-impactful factors. 3. We introduce the Correlational Counterfactual Test (CCT), where we instantiate CEF on the Counterfactual Test (CT) from Atanasova et al. (2023) and use statistical distance between predictions to measure impact. 4. We run experiments with the Llama2 family of LLMs on three datasets and demonstrate that CCT captures faithfulness trends that the existing faithfulness metric used in CT misses. 2 Related Work There has been much discussion on what it means for an explanation to be “faithful”. Jacovi and Goldberg (2020) survey literature on the term and define an explanation as faithful insofar as it “accurately represents the reasoning process behind the model’s prediction”. Wiegreffe and Marasovi´c (2021) review datasets for explainable NLP and identify three predominant classes of textual 530 explanations: highlights (also called extractive explanations ), free-text (also called natural language explanations or NLEs), and structured. Prior work on faithfulness has mostly focused on highlights and NLEs. We chose to focus on NLEs in this work because highlight-based explanations are highly restrictive in what they can communicate (Camburu et al., 2021; Wiegreffe et al., 2020), while NLEs allow models to produce justifications that are as expressive as necessary (e.g. they can mention to background knowledge that is not present in the input but that the model made use of for its prediction). Moreover, there is increasing work on NLEs in high-stakes areas, such as healthcare (Kayser et al., 2022), where having faithful explanations is crucial. Parcalabescu and Frank (2023) review a range of recent NLE faithfulness tests and claim that many are instead measuring “self-consistency”. See Appendix C for further discussion. 2.1 “Explanatory” vs. “Causal” Faithfulness We identify two types of faithfulness being researched in the literature, which we refer to as “explanatory” and “causal”. Explanatory faithfulness asks the question: does the explanation reflect the decision-making process of the model? This is often measured by intervening on the input, such as with the metrics of sufficiency and comprehensiveness for highlight-based explanations (DeYoung et al., 2019; Camburu et al., 2021), or the counterfactual test (CT) for NLEs (Atanasova et al., 2023). Causal faithfulness adds the criterion: does the model’s prediction causally depend on the generated reasoning trace? (Creswell and Shanahan, 2022; Lanham et al., 2023; Radhakrishnan et al., 2023; Turpin et al., 2023) Causal faithfulness requires structural restrictions on the prediction system (at a minimum, that the explanation is generated before the prediction), such as in chain-of-thought (Wei et al., 2023) or selectioninference (Creswell et al., 2022). Explanatory faithfulness, however, can be measured for a more general class of rationales, including post-hoc explanations (DeYoung et al., 2019; Atanasova et al., 2023). We focus on explanatory faithfulness in this work; see Appendix A for further discussion of causal faithfulness. Some authors also distinguish between “explainability” and “interpretability/transparency” as approaches for understanding models (e.g. Rudin (2018)). While the concept of faithfulness is applicable to both approaches, we primarily focus on “explainability” in this work. See Appendix B for further discussion. 2.2 The Counterfactual Test In order to measure whether an explanation captures the true factors responsible for a model’s prediction, we need to know which factors are relevant. However, deep neural networks like LLMs are often difficult to interpret (Fan et al., 2020). To address this problem, Atanasova et al. (2023) introduce the Counterfactual Test (CT). The CT inserts some text into an input query, which we refer to as an interventional addition (IA). If the model’s prediction changes, then the IA was relevant to the model’s new prediction, and we check if it is mentioned in the new explanation. Counterfactual edits have the advantage of easily generating features that we know are relevant to the model’s prediction. We choose to focus our analysis on this method, and identify ways to improve it. 3 Methods We identify two significant drawbacks with the CT: 1. It does not test whether impactful features are more likely to be mentioned than less impactful ones. There is a trivial strategy that leads to 0% unfaithfulness as measured by the CT: repeat all input t ext verbatim as the explanation, which means explanations will never fail to mention the IA. This demonstrates an important property of useful explanations: they are useful only if they both mention impactful features and leave out non-impactful features. 2. It measures impactfulness as binary, i.e. whether the intervention results in a change in the model’s top predicted label. But this ignores changes in the model’s predicted class likelihoods: it would label an intervention that changes the predicted probability of a class from 49% to 51% as relevant, while an intervention that changes the probability from 1% to 49% would be labelled as irrelevant, even though the latter caused a larger shift. To address these drawbacks, we propose the metric Correlational Explanatory Faithfulness (CEF), which can be applied to any tests with three given properties: 1. An intervention: a function mapping an input example to its modified version. 531 Input Example Model Prediction Model Explanation Before Intervention TEXT: Three people are riding a carriage pulled by four horses. HYPOTHESIS: The horses are scrawny. 0.00% 25.00% 50.00% 75.00% 100.00% Entailment Neutral Contradiction The horses could be scrawny or not. After Intervention TEXT: Three people are riding a carriage pulled by four joyous horses. HYPOTHESIS: The horses are scrawny. 0.00% 25.00% 50.00% 75.00% 100.00% Entailment Neutral Contradiction The horses are joyous, so they are not scrawny. Intervention: inserted "joyous" Intervention Impact: TVD = 0.7 Explanation Mention: True Table 1: Illustration of the Correlational Counterfactual Test (CCT), our instantiation of Correlational Explanatory Faithfulness, on an example from e-SNLI. We measure the impact of an intervention by the total variation distance (TVD) between the model’s predictions before and after the intervention. We then compute CCT as the correlation between intervention impact and explanation mention over multiple examples. Predictions and explanations are given by Llama2 70B. See Appendix E for additional examples of interventions and their impact. 2. A prediction impact measure: a function mapping an input example, intervention, and model to a scalar representing how impactful the intervention was on the model’s prediction. We call the output of this function the prediction impact or I. 3. An explanation mention measure: a function mapping an input example, intervention, and explanation to a scalar representing the extent to which the explanation attributes importance to the intervened factors. We call the output of this function the mention importance or M. If an intervention has higher prediction impact, a faithful explanation should assign it higher mention importance. We quantify this relationship by measuring the Pearson correlation coefficient between prediction impact and mention importance: CEF = Pn i=0 Ii −I  Mi −M  qPn i=1 Ii −I 2qPn i=1 Mi −M 2 (1) where x = 1 n Pn i=1 xi (the sample mean). Being a correlation, it lies in the interval [−1, 1], with 0 indicating no relationship and positive values indicating higher mention importance for more impactful interventions. We can then apply this metric to the CT, which gives us the Correlational Counterfactual Test (CCT). In our work, the intervention inserts an IA. To quantify the degree of prediction impact in a continuous manner, we measure the total shift in the model’s predictions due to the IA. There are a number of ways to measure shifts in probability distributions over discrete classes; we use the total variation distance (TVD), i.e: TVD(P, Q) = 1 2 X x |P(x) −Q(x)| (2) where P and Q are probability distributions over discrete classes. We take P and Q to be the model’s predicted distributions before and after the intervention, so that TVD measures the absolute change in probabilities assigned to each class. Compared to other common statistical distances such as the relative entropy (KL divergence), TVD gives less weight to shifts between very small probabilities (which are unlikely to impact classification) and has the advantage of symmetry. To measure mention importance, we use the original CT’s binary metric: does the explanation mention the word? Note that in this case our metric represents the point-biserial correlation, a special case of the Pearson correlation coefficient where one variable is continuous and the other is dichotomous. We can then write CCT as: CCT = EM(TVD) −E¬M(TVD) STD(TVD) s |M||¬M| |M ∪¬M|2 , (3) where M indicates that the explanation mentions the IA, and |M| indicates the number of examples with explanation mentions. For the binary mentions we study, CCT is maximized when explanations mention IAs exactly when their TVD is above a certain threshold (where the threshold depends on the distribution of TVDs). Table 1 shows an example application of our method. Future work could explore the case where explanations can assign weights to different features. We test alternatives to TVD and CCT in Appendix F. CCT addresses the mentioned drawbacks of the CT. Unlike the CT, it cannot be trivially gamed: 532 achieving maximum correlation requires explanations to mention impactful IAs while not mentioning non-impactful IAs, which requires a signal about which words are impactful. 4 Experiments In this section, we describe our experimental setup. We first generate predictions and NLEs using LLMs on a set of three natural language classification tasks. We then study the faithfulness of these NLEs, comparing the CT and CCT. 4.1 Datasets Following Atanasova et al. (2023), we evaluate on three popular classification datasets including human-written NLEs: e-SNLI (Camburu et al., 2018): Sentence pairs labeled with entailment, contradiction, or neutral. ComVE (Wang et al., 2020): Sentence pairs where one violates common sense. ECQA (Aggarwal et al., 2021): Multiple choice common sense questions with 5 options each. We use ECQA in place of CoS-E (Rajani et al., 2019) as a more recent dataset also based on CQA with more detailed explanations that both justify the correct answer and refute the incorrect answers. Note that the ground-truth NLEs are not necessarily faithful explanations for an LLM: there may be multiple equally valid justifications for a groundtruth label on an instance (e.g., multiple reasons why two sentences are contradictory), or the LLM could rely on other reasoning, such as spurious correlations. We use the original train/test splits and evaluate on test sets, containing 9,842 (e-SNLI), 2,194 (ECQA), and 999 (ComVE) examples. 4.2 Models and Prompts We use the Llama-2 series of LLMs (Touvron et al., 2023). We focus on the few-shot imitation setting: we use the pretrained foundation models (Llama2-7B, Llama-2-13B, and Llama-2-70B) prompted with a brief description of the dataset followed by 20 randomly selected examples from the training set including label and explanation. When prompting the model, we can have it generate NLEs either after its prediction, as an explanation conditioned on the prediction (predict-then-explain, PE), or before the prediction, which is conditioned on the explanation (explain-then-predict, EP)1 (Camburu 1Using this terminology, chain-of-thought (Wei et al., 2023) is EP. et al., 2018). We provide full example prompts in Appendix G. When generating text with these models, we use greedy sampling to reduce variation during evaluation. However, we still record the probabilities assigned to tokens corresponding to predicted classes, which we use for computing the TVD. 4.3 Counterfactual Interventions We use the random intervention proposed in Atanasova et al. (2023): we insert a random adjective before a noun or a random adverb before a verb, randomly selecting 4 positions where we insert the said words, and for each position selecting 20 random candidate words. The candidates are chosen from the complete list of adjectives or adverbs available in WordNet (Fellbaum, 2010), and nouns and verbs are identified with spaCy (Orosz et al., 2022) using the model "en_core_web_lg". In order to avoid highly unnatural sentences, we use an instruction-tuned LLM, Llama-2-70b-chat, to identify interventions that the model judges as not making sense, and keep only the top 20% of interventions for each example (prompt shown in subsection G.4). See Appendix E for examples of interventions and their effect on model predictions and explanations. We determine whether an explanation includes an IA by case-insensitive substring matches, either on the original strings or stemmed versions (Porter, 2001). For each model, prompting strategy (PE vs. EP), and dataset, we first run the model on each example in the test set and measure its predicted class probabilities. Next, we perform counterfactual interventions on each example and re-run the model on each intervention. Using TVD to measure impactfulness, we can study whether explanations are more likely to mention IAs that are more impactful, and compare the CT and CCT. 5 Results Figure 1 plots intervention importance as measured by TVD vs. the fraction of the time that IAs are mentioned in explanations. A model with faithful explanations should show an upward trend in mentions, being more likely to mention highly impactful IAs than less impactful IAs. We note that while explanation mentions for e-SNLI show a clear upward trend, ECQA has a relatively flat trend: most ECQA explanations mention IAs, but they are not much more likely to mention highly impactful IAs 533 Figure 1: Intervention impactfulness vs. explanation mentions, PE. The plots show the fraction of examples where the explanation mentions the inserted text (IA) vs. the total variation distance (TVD) of the model’s predictions before and after interventions. Rows show datasets, columns show models. Higher TVD indicates an intervention was more impactful on the model’s prediction. See Figure 2 for results in the EP setting. Accuracy (%) CT Unfaithfulness (%) CCT Faithfulness Model e-SNLI ECQA ComVE e-SNLI ECQA ComVE e-SNLI ECQA ComVE Llama2 7B, PE 57.7 54.1 55.2 32.5 30.4 81.3 0.245 0.047 0.040 Llama2 7B, EP 47.6 55.2 52.4 43.5 31.7 78.7 0.141 0.065 0.125 Llama2 13B, PE 67.1 68.0 75.6 39.4 28.6 82.0 0.227 0.055 0.036 Llama2 13B, EP 55.5 71.4 75.8 45.5 30.2 78.4 0.189 0.036 0.201 Llama2 70B, PE 85.5 79.7 97.7 29.3 24.1 70.0 0.411 0.083 0.172 Llama2 70B, EP 74.9 77.8 98.5 37.2 28.8 69.2 0.304 0.038 0.238 Random 33.3 20.0 50.0 0.000 0.000 0.000 Table 2: Results. Accuracy (before interventions), CT, and CCT across datasets, models, and prompt orders (predict-then-explain, PE, vs. explain-then-predict, EP). Random CCT Faithfulness assumes that explanation mentions are independent of prediction impact. For CT Unfaithfulness, it is not obvious what to use as a “random” explanation baseline: empty explanations would yield 100% unfaithfulness, while explanations simply repeating all input text verbatim would yield 0% unfaithfulness regardless of model predictions. than non-impactful ones. This may be because they tend to be verbose and repeat large portions of their inputs, as can be seen frm the examples on Table 4. Table 2 shows the quantitative results of our experiments. Classification accuracy before intervention is above random for all models and datasets (except possibly Llama2-7B on ComVE), indicating that the models are capable of performing some aspects of the tasks. Note that ECQA explanations have the lowest CT unfaithfulness of any dataset, i.e. they frequently mention IAs which cause predictions to change. But Figure 1 shows that this is misleading: ECQA explanations succeed in frequently mentioning impactful IAs because they frequently mentions any IAs; the fact that a word appears in an ECQA explanation gives little signal about whether that word was impactful or not for the model’s prediction. The CCT is more informative of the qualitative results from Figure 1 than CT: model explanations provide more information about the relevance of IAs for e-SNLI than for ECQA, and are thus more faithful. Additionally, we see that the largest model, Llama2 70B, produces the most faithful explanations on e-SNLI and ComVE. 6 Summary and outlook We introduced Counterfactual Explanatory Faithfulness and the Correlational Counterfactual Test, allowing us to measure how informative explanations are about the importance of the factors they mention. Model explanations are more likely to mention inserted words when they’re more impactful to the model’s predictions, suggesting a degree of faithfulness on these tasks which increases with model size. However, there is significant variation between datasets, which could be due to either the nature of the task or the annotator-provided explanations. Future work could apply the CCT to instruction-tuned models, as well as explanations generated using strategies such as question decomposition (Radhakrishnan et al., 2023). 534 Limitations While our analysis identifies and corrects some shortcomings of prior work on measuring the faithfulness of NLEs, it does inherit some of the limitations of the original CT (Atanasova et al., 2023). The counterfactual interventions only insert adjectives and adverbs, and only single words at a time, so our experiments do not measure sensitivity to other parts of speech. Our random intervention can generate text which lacks semantic coherence, despite our LLM filtering step. We do not test for synonyms, which could inaccurately label some explanations. Additionally, we do not consider the semantic usage of word mentions: for example, our metrics would not penalize the faithfulness of illogical explanations as long as they had the correct pattern of word inclusion. Some of these drawbacks could potentially be addressed by further filtering or analysis by more advanced LLMs, taking advantage of their semantic understanding. We study LLMs generating predictions and explanations using few-shot prompting, with example explanations taken from human-generated NLEs. These explanations can be highly dependent on annotation instructions. For example, CoS-E (Rajani et al., 2019) and ECQA (Aggarwal et al., 2021) both use CQA (Talmor et al., 2019) as a base dataset, but ECQA explanations are significantly longer than those for CoS-E. As such, care should be taken when extrapolating our results to other tasks: in the few-shot setting, the example explanations provided can have just as much impact on faithfulness as the model being used. Acknowledgements We would like to thank Zac Kenton for feedback on a draft of this paper. Oana-Maria Camburu was supported by a Leverhulme Early Career Fellowship. The work of Perez-Ortiz was partially supported by the European Commission-funded project “Humane AI: Toward AI Systems That Augment and Empower Humans by Understanding Us, our Society and the World Around Us” (grant 952026). References Julius Adebayo, Justin Gilmer, Michael Muelly, Ian J. Goodfellow, Moritz Hardt, and Been Kim. 2018. Sanity checks for saliency maps. In Neural Information Processing Systems. Shourya Aggarwal, Divyanshu Mandowara, Vishwajeet Agrawal, Dinesh Khandelwal, Parag Singla, and Dinesh Garg. 2021. Explanations for CommonsenseQA: New Dataset and Models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3050–3065, Online. Association for Computational Linguistics. Pepa Atanasova, Oana-Maria Camburu, Christina Lioma, Thomas Lukasiewicz, Jakob Grue Simonsen, and Isabelle Augenstein. 2023. Faithfulness tests for natural language explanations. ACL. Collin Burns, Haotian Ye, Dan Klein, and Jacob Steinhardt. 2022. Discovering latent knowledge in language models without supervision. ArXiv, abs/2212.03827. Oana-Maria Camburu, Eleonora Giunchiglia, Jakob Foerster, Thomas Lukasiewicz, and Phil Blunsom. 2021. The struggles of feature-based explanations: Shapley values vs. minimal sufficient subsets. In AAAI 2021 Workshop on Explainable Agency in Artificial Intelligence. Oana-Maria Camburu, Tim Rocktäschel, Thomas Lukasiewicz, and Phil Blunsom. 2018. e-SNLI: Natural language inference with natural language explanations. NeurIPS. Aditya Chattopadhyay, Stewart Slocum, Benjamin D. Haeffele, René Vidal, and Donald Geman. 2023. Interpretable by design: Learning predictors by composing interpretable queries. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(6):7430–7443. Antonia Creswell and Murray Shanahan. 2022. Faithful reasoning using large language models. Antonia Creswell, Murray Shanahan, and Irina Higgins. 2022. Selection-inference: Exploiting large language models for interpretable logical reasoning. ICLR. Jay DeYoung, Sarthak Jain, Nazneen Rajani, Eric P. Lehman, Caiming Xiong, Richard Socher, and Byron C. Wallace. 2019. Eraser: A benchmark to evaluate rationalized nlp models. In Annual Meeting of the Association for Computational Linguistics. Fenglei Fan, Jinjun Xiong, Mengzhou Li, and Ge Wang. 2020. On interpretability of artificial neural networks: A survey. IEEE Transactions on Radiation and Plasma Medical Sciences, 5:741–760. Sebastian Farquhar, Vikrant Varma, Zachary Kenton, Johannes Gasteiger, Vladimir Mikulik, and Rohin Shah. 2023. Challenges with unsupervised llm knowledge discovery. ArXiv, abs/2312.10029. Christiane Fellbaum. 2010. Wordnet. In Theory and applications of ontology: computer applications, pages 231–243. Springer. 535 Alon Jacovi and Yoav Goldberg. 2020. Towards faithfully interpretable nlp systems: How should we define and evaluate faithfulness? In Annual Meeting of the Association for Computational Linguistics. Maxime Kayser, Cornelius Emde, Oana-Maria Camburu, Guy Parsons, Bartlomiej Papiez, and Thomas Lukasiewicz. 2022. Explaining chest x-ray pathologies in natural language. In Medical Image Computing and Computer Assisted Intervention – MICCAI 2022, pages 701–713, Cham. Springer Nature Switzerland. Tamera Lanham. 2022. Externalized reasoning oversight: a research direction for language model alignment. Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamil˙e Lukoši¯ut˙e, Karina Nguyen, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Robin Larson, Sam McCandlish, Sandipan Kundu, Saurav Kadavath, Shannon Yang, Thomas Henighan, Timothy Maxwell, Timothy Telleen-Lawton, Tristan Hume, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, and Ethan Perez. 2023. Measuring faithfulness in chainof-thought reasoning. Ricards Marcinkevics and Julia E. Vogt. 2020. Interpretability and explainability: A machine learning zoo mini-tour. ArXiv, abs/2012.01805. Richard Ngo, Lawrence Chan, and Sören Mindermann. 2023. The alignment problem from a deep learning perspective. György Orosz, Zsolt Szántó, Péter Berkecz, Gerg˝o Szabó, and Richárd Farkas. 2022. Huspacy: an industrial-strength hungarian natural language processing toolkit. arXiv preprint arXiv:2201.01956. Letitia Parcalabescu and Anette Frank. 2023. On measuring faithfulness or self-consistency of natural language explanations. ArXiv. Martin F Porter. 2001. Snowball: A language for stemming algorithms. Ansh Radhakrishnan, Karina Nguyen, Anna Chen, Carol Chen, Carson Denison, Danny Hernandez, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamil˙e Lukoši¯ut˙e, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Sam McCandlish, Sheer El Showk, Tamera Lanham, Tim Maxwell, Venkatesa Chandrasekaran, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, and Ethan Perez. 2023. Question decomposition improves the faithfulness of model-generated reasoning. Nazneen Fatema Rajani, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Explain yourself! leveraging language models for commonsense reasoning. Fabien Roger and Ryan Greenblatt. 2023. Preventing language models from hiding their reasoning. Cynthia Rudin. 2018. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1:206 – 215. Rohin Shah, Vikrant Varma, Ramana Kumar, Mary Phuong, Victoria Krakovna, Jonathan Uesato, and Zac Kenton. 2022. Goal misgeneralization: Why correct specifications aren’t enough for correct goals. Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2019. CommonsenseQA: A question answering challenge targeting commonsense knowledge. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4149–4158, Minneapolis, Minnesota. Association for Computational Linguistics. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023. Llama 2: Open foundation and finetuned chat models. Miles Turpin, Julian Michael, Ethan Perez, and Sam Bowman. 2023. Language models don’t always say what they think: Unfaithful explanations in chain-ofthought prompting. ArXiv, abs/2305.04388. Cunxiang Wang, Shuailong Liang, Yili Jin, Yilong Wang, Xiaodan Zhu, and Yue Zhang. 2020. SemEval2020 task 4: Commonsense validation and explanation. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 307–321, Barcelona (online). International Committee for Computational Linguistics. Francis Rhys Ward, Francesco Belardinelli, Francesca Toni, and Tom Everitt. 2023. Honesty is the best policy: Defining and mitigating ai deception. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and 536 Denny Zhou. 2023. Chain-of-thought prompting elicits reasoning in large language models. Sarah Wiegreffe and Ana Marasovi´c. 2021. Teach me to explain: A review of datasets for explainable natural language processing. In NeurIPS Datasets and Benchmarks. Sarah Wiegreffe, Ana Marasovi´c, and Noah A. Smith. 2020. Measuring association between labels and freetext rationales. In Conference on Empirical Methods in Natural Language Processing. 537 A "Causal" vs. "Explanatory" Faithfulness Rather than generating post-hoc explanations, there have been calls to instead build interpretability into the prediction process, where the prediction causally follows from the explanation (Rudin, 2018; Chattopadhyay et al., 2023). In the context of LLMs, this can be done by having models generate chains-of-thought (CoT) (Wei et al., 2023), a series of intermediate reasoning steps before producing their prediction. In addition to improving final task accuracy, this has been hypothesized to be a way to improve faithfulness: rationales may be more likely to accurately represent a model’s true reasoning process if they are generated first, so that they can inform the prediction (Lanham, 2022; Camburu et al., 2018). However, it has been shown that even if reasoning is generated before the prediction, it may still be unfaithful (Turpin et al., 2023; Atanasova et al., 2023). Work on CoT has often focused on measuring (Lanham et al., 2023) and increasing (Radhakrishnan et al., 2023) the degree to which the model’s final answer depends on its reasoning (i.e. the extent to which editing or removing reasoning steps change the model’s answer). Studying faithfulness and causal dependence in reasoning tackle complementary ideas, and we believe there are reasons to measure them separately: 1. It may be difficult to ensure reliance on CoT reasoning for some tasks: Lanham et al. (2023) found relatively minor accuracy gains from CoT outside of math-focused domains. In particular, as models become more powerful, they may be capable of solving increasing sets of tasks without verbalised CoT. 2. Causal dependence alone doesn’t ensure the usefulness of an explanation: models could use language in ways different from humans, either unintentionally (e.g. semantic drift) or as a result of some optimization pressure (e.g. steganography Roger and Greenblatt (2023)). Separate from causal dependence, it will still be necessary to measure whether the textual content of reasoning provides useful information on the factors leading to the model’s prediction. B “Explainability” vs. “Transparency/Interpretability” There isn’t currently a clear consensus on the usage of the terms “explainability” and “interpretability”: they are sometimes used interchangeably (e.g. Jacovi and Goldberg (2020)), while other times a distinction is made between “interpretability” or “transparency” involving the creation of systems constrained in model form so its inner mechanics can be observed and understood, and “explainability” involving the creation of auxiliary models to explain an existing black-box model (e.g. Rudin (2018)). Marcinkevics and Vogt (2020) also survey some existing usages of these terms. Because “interpretability” is used in these different ways, when discussing this distinction, we’ve found it least ambiguous to refer to the two sides as “explainability” and “transparency”. The definition of faithfulness we adopt is that an explanation is faithful insofar as it “accurately represents the reasoning process behind the model’s prediction” (Jacovi and Goldberg, 2020). Under Rudin (2018)’s distinction, both transparent systems and explainable systems can in principle be faithful if their explanations accurately represent the model’s reasoning process. However, explainable systems in particular are at risk of post-hoc rationalization: producing explanations that sound plausible to humans but that don’t capture the true features that led to the prediction. This is our motivation for introducing improved metrics for faithfulness in explanations. C “Faithfulness” or “Self-Consistency”? Recent work (Parcalabescu and Frank, 2023) has argued that many metrics claiming to measure “faithfulness” (including the Counterfactual Test (Atanasova et al., 2023)) are in fact only measuring a weaker property, which they refer to as “selfconsistency”, because these tests fail to take into account mechanistic inner workings. However, we still believe it is useful to refer to these tests as faithfulness metrics rather than self-consistency tests. Using Jacovi and Goldberg (2020)’s definition of faithfulness, if we intervene on an input and the model’s output distribution changes, we have learned a property of the model’s true reasoning process, i.e. that it depends on the intervened input in the current context. We can then measure the extent to which the explanation reflects this dependency, as in our proposed test. 538 Additionally, a test being mechanistic is not a guarantee of its robustness. Parcalabescu and Frank (2023) argue that “a test that is able to interrogate a model’s inner workings would be akin to a lie detector that uses more internal cues that cannot be easily suppressed”. Indeed, this has been the motivation for some prior approaches: Burns et al. (2022) proposed Contrast Consistent Search, a test using internal model activations to detect when a model gives an answer it “knows” is untrue. However, later work found that this method often identifies spurious non-knowledge-related features (Farquhar et al., 2023). Robustly measuring faithfulness may require a combination of tests, both mechanistic and black-box. D Intervention Impactfulness with Explain-then-Predict Figure 2 shows intervention impactfulness vs. explanation mention measure, equivalent to Figure 1 but in the Explain-then-Predict (EP) setting. E Example Interventions In this section we show randomly selected examples of interventions on the three datasets, as well as model responses. For each example, we show the original problem and resulting prediction first, followed by the modified problem and predictions with the IA highlighted in red. We also highlight any mentions of the IA in the model’s explanation for the modified problem. For conciseness we show only the case of Llama2 70B using predict-thenexplain prompting. See Table 3 for interventions on e-SNLI, Table 4 for interventions on ECQA, and Table 5 for interventions on ComVE. F CCT Variants We chose to use TVD as our distance metric because it gives less weight to shifts between very small probabilities (which are unlikely to impact the classification decision), and we chose to use Pearson as our correlation coefficient because it takes cardinality into account, unlike rank correlation coefficients which only use ordinality. However, our approach can also be computed using other choices of distance and correlation. We can compute our metric in the predict-thenexplain setting under two other plausible configurations: CCT (Jensen-Shannon) using JensenShannon divergence, a symmetric divergence based on KL) in place of TVD, and CCT (Spearman) using Spearman’s rank correlation in place of Pearson. Table 6 shows our results. These variants show similar qualitative trends, with the highest values assigned to e-SNLI explanations, lower values for ECQA and comVE, and slightly more faithful explanations for the largest model (except for CCT (Spearman) EP, where the 13B model has the highest value). G LM Prompts In this section we describe the prompts we use. Each few-shot prompt consists of three parts: the prefix describes the format verbally; 20 few-shot examples sampled uniformly without replacement from the training set, providing demonstrations of predictions and explanations; and the query, consisting of the input for a new problem instance to be evaluated. To avoid dependence on a single prompt sample, we independently sample new few-shot examples for each evaluation example. However, to ensure our word insertion interventions are the only thing changing model predictions, we use the same few-shot examples for the model’s prediction before and after interventions. The following are randomly selected examples of prompts for each dataset. We show predict-thenexplain prompts; explain-then-predict prompts have the same format, with the only difference being that the order of the label and explanation lines is reversed and the query ends with "EXPLANATION:" rather than the label title. G.1 e-SNLI Example Prompt The following are examples from a dataset. Each example consists of a pair of statements, "TEXT" and "HYPOTHESIS". Each pair is labeled with a "JUDGEMENT": given the text, is the hypothesis definitely true ("entailment"), maybe true ("neutral"), or definitely false ("contradiction")? "EXPLANATION" explains why the selected judgement is chosen. TEXT: a dog chases another dog. HYPOTHESIS: The dog is wanting to get the ball first. JUDGEMENT: neutral EXPLANATION: The dog may not be wanting anything. There may not be a ball present to get first. TEXT: A woman carried a cake ito the room with three candles as another woman holding a flute glass of wine, holds up her hand. HYPOTHESIS: Two women were celebrating. JUDGEMENT: neutral EXPLANATION: Eating a cake and drinking one doesn’t imply celebrating. TEXT: A man in a wetsuit is surfing up and over a wave. HYPOTHESIS: A man is surfing over a wave. JUDGEMENT: entailment EXPLANATION: A man surfing would do so over a wave. TEXT: Rugby players tackling each other. HYPOTHESIS: The rugby players are getting physical. JUDGEMENT: entailment EXPLANATION: Tackling is a very physical action. TEXT: Some students saying prayer outside. HYPOTHESIS: A dog barks inside. JUDGEMENT: contradiction EXPLANATION: the dog is not students outside and the dog is inside. 539 Text Hypothesis True Class Predicted Class Predicted Class Probabilities TVD Generated Explanation A man is resting in a small stream with a hat over his head while the little waterfall is pouring in the background. A man is resting in a hammock contradiction contradiction entailment: 0.1%, neutral: 4.9%, contradiction: 94.4% A man can’t be resting in a hammock and a stream at the same time. A man is deliriously resting in a small stream with a hat over his head while the little waterfall is pouring in the background. A man is resting in a hammock contradiction entailment: 0.1%, neutral: 4.3%, contradiction: 95.0% 0.01 A man can’t be resting in a hammock and a stream at the same time. A blond-haired doctor and her African american assistant looking threw new medical manuals. A doctor is studying neutral neutral entailment: 32.5%, neutral: 62.7%, contradiction: 3.9% Just because a doctor is looking through new medical manuals does not mean that the doctor is studying. A blond-haired prominent doctor and her African american assistant looking threw new medical manuals. A doctor is studying neutral entailment: 34.2%, neutral: 61.0%, contradiction: 3.9% 0.02 Just because a doctor is looking through new medical manuals does not mean that the doctor is studying. Man wearing blue bowing on floor in front of another man in blue bowing on floor with two other men wearing blue kneeling on same floor also. A man is wearing something. entailment neutral entailment: 31.1%, neutral: 53.7%, contradiction: 13.6% A man wearing something does not imply that he is wearing blue. Shaky man wearing blue bowing on floor in front of another man in blue bowing on floor with two other men wearing blue kneeling on same floor also. A man is wearing something. neutral entailment: 33.3%, neutral: 53.1%, contradiction: 12.0% 0.02 A man wearing something does not imply that he is wearing blue. Three people are riding a carriage pulled by four horses. The horses are scrawny. neutral neutral entailment: 0.1%, neutral: 75.0%, contradiction: 23.6% The horses could be scrawny or not. Three people are riding a carriage pulled by four joyous horses. The horses are scrawny. contradiction entailment: 0.1%, neutral: 4.9%, contradiction: 94.4% 0.70 The horses are joyous, so they are not scrawny. Children nap on a floor while others stand by. Children are playing childish games while others stand by. contradiction contradiction entailment: 0.4%, neutral: 37.5%, contradiction: 60.9% Children cannot be playing childish games while they are napping. Children nap on a floor while others stand by. Children are gloriously playing childish games while others stand by. contradiction entailment: 0.2%, neutral: 17.1%, contradiction: 81.8% 0.21 Children cannot be gloriously playing childish games while others stand by and napping on a floor at the same time. A child in a jeans jacket and shorts holding up a white dress as another darker-skinned child looks at it. a dog eats his food contradiction contradiction entailment: 0.1%, neutral: 32.0%, contradiction: 66.8% A dog eating his food is not related to a child holding up a white dress. A child in a jeans jacket and shorts holding up a white dress as another darker-skinned child looks at it. a dog badly eats his food contradiction entailment: 0.2%, neutral: 33.7%, contradiction: 65.0% 0.02 A dog eating his food badly does not relate to a child holding up a white dress. A bald person in green clothing stands in front of a brick wall. A person is in front of a brick wall. entailment entailment entailment: 93.4%, neutral: 5.9%, contradiction: 0.4% A bald person in green clothing stands in front of a brick wall. A bald person in green clothing stands in front of a brick wall. A takeout person is in front of a brick wall. neutral entailment: 0.8%, neutral: 68.7%, contradiction: 29.1% 0.92 A bald person in green clothing does not imply that they are a takeout person. A father and son are in a field of yellow flowers. People are picking flowers. neutral neutral entailment: 1.3%, neutral: 81.9%, contradiction: 15.6% There is no evidence that the people are picking flowers. A corrupt father and son are in a field of yellow flowers. People are picking flowers. neutral entailment: 1.7%, neutral: 74.8%, contradiction: 22.1% 0.07 There is no evidence that the people are picking flowers. Very young boy is standing outside near a water fountain that is built into the ground and is squirting water. The young boy is standing inside. contradiction contradiction entailment: 0.1%, neutral: 0.8%, contradiction: 98.7% The boy is standing outside. Very young wholesome boy is standing outside near a water fountain that is built into the ground and is squirting water. The young boy is standing inside. contradiction entailment: 0.1%, neutral: 0.8%, contradiction: 98.7% 0.00 The boy is standing outside. Two men at a table having drinks. Two people are drinking. entailment entailment entailment: 66.6%, neutral: 29.6%, contradiction: 2.7% Two men at a table having drinks means two people are drinking. Two men at a table having drinks. Two insubordinate people are drinking. contradiction entailment: 1.5%, neutral: 28.5%, contradiction: 68.4% 0.66 Two men at a table having drinks does not imply insubordinate people are drinking. Table 3: Example e-SNLI interventions, predictions, and explanations. 540 Question Options True Class Predicted Class Predicted Class Probabilities TVD Generated Explanation What will you do after you are awake? 1: open eyes, 2: get to sleep, 3: deep sleep, 4: sleep again, 5: conscious 1 5 1: 24.8%, 2: 1.9%, 3: 2.9%, 4: 1.9%, 5: 68.4% Conscious is to be awake and aware of one’s surroundings. What will you do after you are awake? You will be conscious. Open eyes is not what you will do after you are awake. Get to sleep and sleep again are not what you will do after you are awake. Deep sleep is not what you will do after you are awake. What will you skilfully do after you are awake? 1: open eyes, 2: get to sleep, 3: deep sleep, 4: sleep again, 5: conscious 5 1: 18.5%, 2: 2.2%, 3: 3.0%, 4: 1.8%, 5: 74.4% 0.06 Conscious is to be awake and aware of one’s surroundings. What will you skilfully do after you are awake is to be conscious. Open eyes is not skilful. Get to sleep and sleep again is not what you will do after you are awake. Deep sleep is not skilful. Where might a person play an acoustic guitar at a birthday party? 1: studio, 2: at funeral place, 3: house, 4: music shop, 5: closet 3 3 1: 2.9%, 2: 0.6%, 3: 94.0%, 4: 0.8%, 5: 1.7% A person might play an acoustic guitar at a birthday party at a house. A person might not play an acoustic guitar at a birthday party at a studio. A person might not play an acoustic guitar at a birthday party at a funeral place. A person might not play an acoustic guitar at a birthday party at a music shop. A person might not play an acoustic guitar at a birthday party at a closet. Where might a callous person play an acoustic guitar at a birthday party? 1: studio, 2: at funeral place, 3: house, 4: music shop, 5: closet 3 1: 13.7%, 2: 2.2%, 3: 76.6%, 4: 1.8%, 5: 5.6% 0.17 A callous person is someone who is not sensitive to the feelings of others. A callous person might play an acoustic guitar at a birthday party in a house. A callous person might not play an acoustic guitar at a birthday party in a studio. A callous person might not play an acoustic guitar at a birthday party at a funeral place. A callous person might not play an acoustic guitar at a birthday party in a music shop. A callous person might not play an acoustic guitar at a birthday party in a closet. What type of location would you find fish for sale? 1: refrigerator, 2: ocean, 3: pay debts, 4: stream, 5: market 5 5 1: 0.2%, 2: 0.3%, 3: 0.5%, 4: 0.9%, 5: 98.1% Market is a place where goods are bought and sold. Fish for sale can be found at a market. Fish for sale cannot be found at a refrigerator. Fish for sale cannot be found at an ocean. Fish for sale cannot be found at pay debts. Fish for sale cannot be found at a stream. What type of location would you find elaborate fish for sale? 1: refrigerator, 2: ocean, 3: pay debts, 4: stream, 5: market 5 1: 0.2%, 2: 0.3%, 3: 0.4%, 4: 0.8%, 5: 98.2% 0.00 Market is a place where goods are bought and sold. Elaborate fish for sale can be found at a market. Refrigerator is not a place where elaborate fish for sale can be found. Ocean is not a place where elaborate fish for sale can be found. Pay debts is not a place where elaborate fish for sale can be found. Stream is not a place where elaborate fish for sale can be found. If someone is fiddling a sad tune, what is the song likely about? 1: troubles, 2: unpredictable, 3: joy, 4: being happy, 5: discovery 1 1 1: 92.1%, 2: 1.6%, 3: 5.9%, 4: 0.2%, 5: 0.2% A sad tune is a song that is sad and not unpredictable, joyful, happy or a discovery. A sad tune is likely to be about troubles. If someone is fiddling a sad showy tune, what is the song likely about? 1: troubles, 2: unpredictable, 3: joy, 4: being happy, 5: discovery 1 1: 92.0%, 2: 1.3%, 3: 6.2%, 4: 0.2%, 5: 0.3% 0.00 Troubles are the problems and difficulties that someone has. If someone is fiddling a sad showy tune, the song is likely about troubles. Unpredictable is not a correct option. Joy is not a correct option. Being happy is not a correct option. Discovery is not a correct option. Sam’s children are rambunctious. What might they do? 1: break things, 2: play sports, 3: play with toys, 4: play ball, 5: throw things 5 1 1: 57.0%, 2: 7.7%, 3: 9.2%, 4: 2.3%, 5: 23.8% Rambunctious means uncontrollable and uncontrollable children might break things. Playing sports, playing with toys, playing ball and throwing things are not what rambunctious children might do. Sam’s friendly children are rambunctious. What might they do? 1: break things, 2: play sports, 3: play with toys, 4: play ball, 5: throw things 1 1: 50.2%, 2: 11.0%, 3: 12.5%, 4: 3.0%, 5: 23.3% 0.07 Rambunctious means uncontrollably exuberant and children who are rambunctious might break things. Children who are rambunctious might not play sports, play with toys, play ball or throw things. Where is a business restaurant likely to be located? 1: town, 2: at hotel, 3: mall, 4: business sector, 5: yellow pages 4 4 1: 4.8%, 2: 4.0%, 3: 5.4%, 4: 84.6%, 5: 1.2% Business sector is a place where business is done. Business restaurant is a place where business is done. Business restaurant is likely to be located in business sector. Town is not a place where business is done. At hotel is not a place where business is done. Mall is not a place where business is done. Yellow pages is not a place where business is done. Where is a halal business restaurant likely to be located? 1: town, 2: at hotel, 3: mall, 4: business sector, 5: yellow pages 4 1: 14.0%, 2: 6.1%, 3: 34.7%, 4: 41.9%, 5: 3.2% 0.43 Halal is an Arabic word meaning lawful or permitted. In reference to food, it is the dietary standard, as prescribed in the Qur’an (the Muslim scripture). Halal business restaurant is likely to be located in a business sector. Halal business restaurant is not likely to be located in a town. Halal business restaurant is not likely to be located at a hotel. Halal business restaurant is not likely to be located in a mall. Halal business restaurant is not likely to be located in yellow pages. Table 4: Example ECQA interventions, predictions, and explanations. 541 Sentence 0 Sentence 1 True Class Predicted Class Predicted Class Probabilities TVD Generated Explanation It is easy to find seashells in the forest It is easy to find seashells by the ocean 0 0 0: 99.1%, 1: 0.9% seashells are found by the ocean It is easy to find seashells in the forest It is easy to find seashells by the gloomy ocean 0 0: 97.6%, 1: 2.4% 0.02 seashells are found by the ocean cars are the only way to get around cars are a useful mode of transportation 0 0 0: 91.3%, 1: 8.6% There are many ways to get around, such as buses, trains, bicycles, etc. cars are the only way to get around Grey cars are a useful mode of transportation 1 0: 49.6%, 1: 50.3% 0.42 Grey cars are not the only way to get around The rotary phone recorded me. The cell phone recorded me. 0 0 0: 98.1%, 1: 1.9% The rotary phone can’t record. The rotary black phone recorded me. The cell phone recorded me. 0 0: 98.3%, 1: 1.6% 0.00 The rotary black phone can’t record. he swims 1km each way every day he swims with piranhas 1 1 0: 29.7%, 1: 70.1% Piranhas are dangerous he swims 1km each way every day he safely swims with piranhas 1 0: 22.2%, 1: 77.6% 0.07 Piranhas are dangerous The fish ran through the forest The fox ran through the forest 0 0 0: 99.5%, 1: 0.5% Fish cannot run on land The fish ran through the forest The fox ran through the singing forest 0 0: 69.9%, 1: 30.0% 0.30 Fish cannot run The kids went to the farm The farm went to the farm 1 1 0: 3.5%, 1: 96.4% A farm is a place, not a person or animal. The kids just went to the farm The farm went to the farm 1 0: 4.2%, 1: 95.6% 0.01 A farm is a place, not a person or animal. The stone can float on the water The wood can float on the water 0 0 0: 91.5%, 1: 8.5% The density of stone is greater than that of water The stone can float on the water The legendary wood can float on the water 0 0: 83.5%, 1: 16.4% 0.08 The stone is too heavy to float on the water I like to drink coffee beans in the morning I like to drink coffee in the morning 0 0 0: 98.5%, 1: 1.5% Coffee beans are not drinkable. I like to drink coffee beans in the morning I particularly like to drink coffee in the morning 0 0: 97.5%, 1: 2.4% 0.01 Coffee beans are not drinkable The teacher was teaching English in the classroom the teacher was harvesting wheat in the classroom 1 1 0: 0.6%, 1: 99.3% Teachers teach in classrooms, not harvest wheat The fussy teacher was teaching English in the classroom the teacher was harvesting wheat in the classroom 1 0: 1.5%, 1: 98.3% 0.01 Teachers teach in classrooms, not harvest wheat Blackberries and strawberries are berries. Blackberries and strawberries are the same. 1 1 0: 2.8%, 1: 97.1% Blackberries and strawberries are different. Blackberries and large strawberries are berries. Blackberries and strawberries are the same. 1 0: 3.5%, 1: 96.4% 0.01 Blackberries and strawberries are different. I put a bottle of milk into the fridge. I put a car inside the fridge. 1 1 0: 0.9%, 1: 99.0% A car is too big to fit in a fridge. I put a bottle of standard milk into the fridge. I put a car inside the fridge. 1 0: 1.7%, 1: 98.2% 0.01 A car is too big to fit in a fridge. she eats an egg for breakfast every day she eats a table for breakfast every day 1 1 0: 0.5%, 1: 99.4% a table is not edible she fearlessly eats an egg for breakfast every day she eats a table for breakfast every day 1 0: 1.2%, 1: 98.7% 0.01 a table is not edible He put a motorcycle in his wallet He put a coin in his wallet 0 0 0: 99.0%, 1: 1.0% A motorcycle is too big to fit in a wallet He put a motorcycle in his wallet He put a coin in his soft wallet 0 0: 94.1%, 1: 5.8% 0.05 A motorcycle is too big to fit in a wallet he kept the ice cream in the oven he kept the ice cream in the fridge 0 0 0: 99.3%, 1: 0.6% ice cream will melt in the oven he kept the ice cream in the oven he eagerly kept the ice cream in the fridge 0 0: 96.4%, 1: 3.5% 0.03 ice cream will melt in the oven He played a game with children He played a game with fairies 1 1 0: 1.2%, 1: 98.6% Fairies are not real He played a game with children He curiously played a game with fairies 1 0: 3.0%, 1: 96.8% 0.02 Fairies are not real Table 5: Example ComVE interventions, predictions, and explanations. 542 Figure 2: Intervention impactfulness vs. explanation mentions, EP. The plots show the fraction of examples where the explanation mentions the inserted text (IA) vs. the total variation distance (TVD) of the model’s predictions before and after interventions: higher TVD indicates an intervention was more impactful on the model. CCT (Original) CCT (Jensen-Shannon) CCT (Spearman) Model e-SNLI ECQA ComVE e-SNLI ECQA ComVE e-SNLI ECQA ComVE Llama 2 7B, PE 0.245 0.047 0.040 0.247 0.044 0.034 0.242 0.044 0.033 Llama 2 7B, EP 0.141 0.065 0.125 0.147 0.067 0.119 0.206 0.078 0.098 Llama 2 13B, PE 0.227 0.055 0.036 0.230 0.058 0.021 0.180 0.050 0.016 Llama 2 13B, EP 0.189 0.036 0.201 0.198 0.037 0.206 0.207 -0.014 0.173 Llama 2 70B, PE 0.411 0.083 0.172 0.412 0.085 0.129 0.329 0.068 0.046 Llama 2 70B, EP 0.304 0.038 0.238 0.312 0.037 0.239 0.308 0.021 0.128 Table 6: Values for CCT plus two variants: CCT (Jensen-Shannon) using Jennsen-Shannon divergence in place of TVD, and CCT (Spearman) using Spearman’s rank correlation in place of Pearson. TEXT: Three women are posing together and smiling while one holds up a hand signal. HYPOTHESIS: Two women are yelling at each other and pointing fingers. JUDGEMENT: contradiction EXPLANATION: There is either three women or two women. TEXT: Three people are checking out a piece of art at the local museum. HYPOTHESIS: Three women are at a museum. JUDGEMENT: entailment EXPLANATION: Three people could be women and they are at a museum TEXT: Four people are in a group hug near a soda machine. HYPOTHESIS: A group of friends in a huddle. JUDGEMENT: neutral EXPLANATION: a hug is not a huddle TEXT: A young boy wearing black pants and a pinstriped shirt looks at something on a computer screen. HYPOTHESIS: A young boy is doing his homework on the computer. JUDGEMENT: neutral EXPLANATION: Looking at screen doesn’t imply doing homework. TEXT: A man is rollerblading down a rail. HYPOTHESIS: There is a man rollerblading quickly. JUDGEMENT: neutral EXPLANATION: Not all people rollerblading are doing so quickly. TEXT: Pedestrians strolling along a brick walkway tween high buildings. HYPOTHESIS: People walk through town. JUDGEMENT: entailment EXPLANATION: Strolling means casually walking while a simple "walk" doesn’t have any connotation. TEXT: a group of people sitting on the ground on the sidewalk HYPOTHESIS: A group of people sit around in a circle. JUDGEMENT: neutral EXPLANATION: Sitting on the ground does not have to be in a circle. TEXT: A man with an arm cast films something on video while another man is looking at the camera. HYPOTHESIS: The man does not have a cast. JUDGEMENT: contradiction EXPLANATION: The man can’t have a cast while not having a cast. TEXT: Young woman in blue shirt checking out merchandise. HYPOTHESIS: The woman is shopping. JUDGEMENT: entailment EXPLANATION: One is shopping by checking out merchandise. TEXT: A woman carries a young girl on her shoulders HYPOTHESIS: A woman carries her purse with her to the concert. JUDGEMENT: contradiction EXPLANATION: A woman can either carry a young girl or her purse at a time. TEXT: A man cooking in a restaurants. HYPOTHESIS: A lady is cooking in a restaurant. JUDGEMENT: contradiction EXPLANATION: A man and a lady are two different people. TEXT: A white dog travels along a narrow path in a park setting. HYPOTHESIS: The animal is going along the path. JUDGEMENT: entailment EXPLANATION: The dog traveling is the animal going on the path. TEXT: One guy wearing black shirt sitting at table working on computer project. HYPOTHESIS: There is a man indoors with a computer. JUDGEMENT: entailment EXPLANATION: Guy is a synonym for man. Working on a computer project would likely require a computer. TEXT: A man in blue shorts lays down outside in a parking lot. HYPOTHESIS: Nobody is laying. JUDGEMENT: contradiction EXPLANATION: A man is laying down so there is somebody laying. TEXT: Girl running in a marathon, wearing a black shirt with a white tank top, with the numbers 44 on it. HYPOTHESIS: There is boy sitting at his house. JUDGEMENT: contradiction 543 EXPLANATION: a girl is not a boy and running is not sitting TEXT: Two women are embracing while holding to go packages. HYPOTHESIS: The sisters are hugging goodbye while holding to go packages after just eating lunch. JUDGEMENT: G.2 ComVE Example Prompt The following are examples from a dataset. Each example consists of a pair of sentences, "SENTENCE 0" and "SENTENCE 1". One of these sentences violates common sense. Each pair of these is labeled with "FALSE SENTENCE", followed by the label of the false sentence, 0 or 1. "EXPLANATION" explains why sentence is chosen. SENTENCE 0: You can use a holding bay to store an item SENTENCE 1: You can use a holding bay to delete an item FALSE SENTENCE: 1 EXPLANATION: Deleting items is not a holding bay function SENTENCE 0: Rainbow has five colors SENTENCE 1: Rainbow has seven colors FALSE SENTENCE: 0 EXPLANATION: The seven colors of the rainbow are red, orange, yellow, green, blue, blue, and purple SENTENCE 0: You are likely to find a cat in ocean SENTENCE 1: You are likely to find a shark in ocean FALSE SENTENCE: 0 EXPLANATION: Cats do not feed on ocean lives SENTENCE 0: The caterpillar eats the rose bud SENTENCE 1: Roses buds eat caterpillars FALSE SENTENCE: 1 EXPLANATION: Caterpillars have mouths while rose buds don’t SENTENCE 0: playing frisbee is for people who like to play frisbee SENTENCE 1: playing frisbee is for people who like to play football FALSE SENTENCE: 1 EXPLANATION: People avoid doing things they dislike so if they like play frisbee they do that sport SENTENCE 0: A recipe is great way to cook a gourmet meal and avoid minor mistakes in the kitchen. SENTENCE 1: Cooking gourmet meals is the number one way to make mistakes such as kitchen fires. FALSE SENTENCE: 1 EXPLANATION: Kitchen fires, and or mistakes are not a direct result of cooking gourmet meals. SENTENCE 0: Nail is a small piece of metal which is inserted into a lock and turned to open or close it SENTENCE 1: Key is a small piece of metal which is inserted into a lock and turned to open or close it FALSE SENTENCE: 0 EXPLANATION: Usually people use key to unlock a lock SENTENCE 0: She put a Turkey in the oven. SENTENCE 1: She put a desk in the oven. FALSE SENTENCE: 1 EXPLANATION: A desk can not fit in a oven. SENTENCE 0: A lemon has stripes. SENTENCE 1: A tiger has stripes. FALSE SENTENCE: 0 EXPLANATION: Lemons are yellow fruits. SENTENCE 0: Burning trash purifies air quality. SENTENCE 1: Burning trash aggravates air quality. FALSE SENTENCE: 0 EXPLANATION: Burning trash will produce a lot of harmful gases and can’t purify the air. SENTENCE 0: my favorite thing is skiing in the lake SENTENCE 1: my favorite thing is boating in the lake FALSE SENTENCE: 0 EXPLANATION: a lake is not the right place for skiing SENTENCE 0: He talked to her using a book shelf SENTENCE 1: He talked to her using a mobile phone FALSE SENTENCE: 0 EXPLANATION: Book shelves are for keeping books SENTENCE 0: People are so glad to see the heavy smog in the winter morning SENTENCE 1: People are so glad to see the blue sky in the winter morning FALSE SENTENCE: 0 EXPLANATION: Smog is a kind of pollution, it makes people sad and angry SENTENCE 0: A towel can not dry the water on your body SENTENCE 1: A towel can dry the water on your body FALSE SENTENCE: 0 EXPLANATION: Towels have a certain degree of water absorption. SENTENCE 0: There are four mountains around the table SENTENCE 1: There are four stools around the table FALSE SENTENCE: 0 EXPLANATION: Mountains need a great space and cannot be so close to a table SENTENCE 0: If I have no money, I would lent it to you SENTENCE 1: If I have any money, I would lent it to you FALSE SENTENCE: 0 EXPLANATION: He cannot lent money he doesn’t have SENTENCE 0: people go to see a doctor because they fall ill SENTENCE 1: people go to see a doctor so they fall ill FALSE SENTENCE: 1 EXPLANATION: a doctor is meant to cure diseases SENTENCE 0: Metro door is closing, please be quick SENTENCE 1: Metro door is closing, please step back FALSE SENTENCE: 0 EXPLANATION: People should step back and wait for the next train if the door is closing SENTENCE 0: There are many aliens in China. SENTENCE 1: There are many people in China. FALSE SENTENCE: 0 EXPLANATION: There aren’t aliens in the world. SENTENCE 0: People usually go to bars for drinks SENTENCE 1: People usually go to bars for milk FALSE SENTENCE: 1 EXPLANATION: Bars mainly sell drinks SENTENCE 0: A red lion will match that suit. SENTENCE 1: A red tie will match that suit. FALSE SENTENCE: 0 EXPLANATION: no one puts a lion on their clothes. SENTENCE 0: I have two eyes SENTENCE 1: I have five eyes FALSE SENTENCE: 1 EXPLANATION: Usually, humans have two eyes SENTENCE 0: drinking milk can help teenagers grow shorter SENTENCE 1: drinking milk can help teenagers grow taller FALSE SENTENCE: 0 EXPLANATION: it’s impossible for people to grow shorter SENTENCE 0: She ate her ballet shoes. SENTENCE 1: She wore her ballet shoes. FALSE SENTENCE: 0 EXPLANATION: she cannot eat ballet shoes SENTENCE 0: HE PUT HIS FOOT INTO THE SHOE IN ORDER TO TRY IT ON. SENTENCE 1: HE ALSO PUT HIS HAND IN THE SHOE TO SEE IF IT FITS. FALSE SENTENCE: 1 EXPLANATION: HANDS DON’T FIT WELL INSIDE OF SHOES. SENTENCE 0: He poured orange juice on his cereal. SENTENCE 1: He poured milk on his cereal. FALSE SENTENCE: G.3 ECQA Example Prompt The following are examples from a dataset. Each example consists of a question followed by five multiple choice options. The option that makes the most sense as answer to the question is labelled as "CORRECT OPTION". "EXPLANATION" explains why the selected option is chosen. QUESTION: The chief saw his entire tribe wiped out, he was a leader with a single what? OPTION 1: peon OPTION 2: indian OPTION 3: minister OPTION 4: follower OPTION 5: employee CORRECT OPTION: 4 EXPLANATION: Leaders have followers who are supporters unlike peon, Indian or minister. Followers do not work for money while employees do. QUESTION: The drive was full of obstacles, he really had to what? OPTION 1: listen to radio OPTION 2: get into vehicle OPTION 3: hole in one OPTION 4: sleep OPTION 5: pay attention CORRECT OPTION: 5 EXPLANATION: Drive full of obstacles really needs to pay attention from driver.You cannot listen radio when the drive is full of obstacles as it may distract you. you cannot get into vehicle as you are already into the vehicle when driving.Hole in one is not things to do. You cannot sleep when the drive is full of obstacles as it may result in accident. QUESTION: What can’t viruses do without infecting a host cell? OPTION 1: reproduce OPTION 2: make computer malfunction 544 OPTION 3: infect OPTION 4: hack computer OPTION 5: mutate CORRECT OPTION: 1 EXPLANATION: Viruses can’t reproduce instead of infecting a host cell. Viruses can make a computer malfunction. Virus can infect. A virus can hack the computer system. Virus do mutate the system. QUESTION: How might a automobile get off a freeway? OPTION 1: exit ramp OPTION 2: garage OPTION 3: driveway OPTION 4: repair shop OPTION 5: stop light CORRECT OPTION: 1 EXPLANATION: Exit ramp is the end of a freeway from where people get off the freeway in their automobiles. All the other options are not from where automobiles get off freeways. QUESTION: It was impossible to find a parking garage, so James took a bus whenever he wanted to go where? OPTION 1: neighbor’s house OPTION 2: car OPTION 3: building OPTION 4: restaurant OPTION 5: downtown CORRECT OPTION: 5 EXPLANATION: Downtown is or is relating to the central and main part of a city. James takes a bus to go downtown since he wouldn’t find a parking garage. One won’t take a bus to go to his neighbor’s house and restaurants usually have a parking area. Building can be any building and a car is not a place to go to. QUESTION: He made another call, he did this all day hoping people would what well to his offer? OPTION 1: hang up OPTION 2: respond OPTION 3: contact OPTION 4: answer OPTION 5: attracting ducks CORRECT OPTION: 2 EXPLANATION: A response could get an offer while contacting and answering do not. Responding means answering unlike hanging up or attracting ducks. QUESTION: Where are people likely to sing? OPTION 1: apartment OPTION 2: supermarket OPTION 3: train station OPTION 4: opera OPTION 5: conference CORRECT OPTION: 4 EXPLANATION: Opera is an ancient musical art form including theatrical work. Opera includes singing. People usually sing at Opera. Apartment is not a common place where people sing. People do not sing at train stations. People do not sing at conferences of supemarkets. QUESTION: What might people do to protect their legs from getting dirty on the farm? OPTION 1: wear jeans OPTION 2: milk cow OPTION 3: believe in god OPTION 4: avoid mud OPTION 5: plant flowers CORRECT OPTION: 1 EXPLANATION: People wear full clothing in order to avoid getting dirty. Jeans is a full clothing for legs. People on farms wear jeans to protect their legs from getting dirty. Milking cow does not help in avoiding dirty legs. Believe in god is an irrelevant option. Avoiding mud does not always help in protecting legs from getting dirt on them. Plant flowers is an irrelevant option. QUESTION: Where would you get a toothpick if you do not have any? OPTION 1: box OPTION 2: grocery store OPTION 3: eyes OPTION 4: chewing OPTION 5: mouth CORRECT OPTION: 2 EXPLANATION: You would get a toothpick from a grocery store because it is available there. Box isnt a place from where youn can get a toothpick. Eyes or Chewing is not a place. You cant get a toothpick from mouth if you dont have any. QUESTION: What is smaller than a country but larger than a city? OPTION 1: town OPTION 2: france OPTION 3: continent OPTION 4: state OPTION 5: metal CORRECT OPTION: 4 EXPLANATION: Country is a collection of states and state is a collection of cities. So State is smaller than a country and larger than a city. Metal is not a place and all the other options are not smaller than a country and larger than a city. QUESTION: With all the leaves falling each year, a natural compost keeps the soil healthy for all the trees where? OPTION 1: garden OPTION 2: useful for recycling OPTION 3: surface of earth OPTION 4: forest OPTION 5: orchard CORRECT OPTION: 4 EXPLANATION: A natural compost keeps the soil healthy for all the trees in a forest which is a large area covered chiefly with trees. Compost is decayed or decaying organic matter like leaves. A garden may or may not have trees. Useful for recycling is not a geographical place where trees exist. Trees do not exist across all surface of earth. Leaves of fruit trees in an orchard may or may not fall every year. QUESTION: What must one be careful about when learning about science? OPTION 1: become educated OPTION 2: frustration OPTION 3: accidents OPTION 4: smiles OPTION 5: basketball CORRECT OPTION: 3 EXPLANATION: Accident is an unfortunate incident that happens unexpectedly and unintentionally. One must be careful about accidents when learning about science. Become educated is not being careful of. Frustration is the feeling of being upset as one doesn’t get frustrated when learning about science. Smile is amused expression whereas being careful about smile is not necessary when learning about science. Basketball is not true as learning about science is not related with basketball. QUESTION: Where can you learn about the anatomy of a blowfish in print? OPTION 1: cuba OPTION 2: fish market OPTION 3: books OPTION 4: france OPTION 5: canada CORRECT OPTION: 3 EXPLANATION: Anatomy exists in living beings including fishes and can be accessed in books. Cuba, France and Canada are countries and are not material to be printed on. Fish market cannot be printed on. QUESTION: If you ate some spicy food, what could happen to you? OPTION 1: medium OPTION 2: illness OPTION 3: throwing up OPTION 4: heartburn OPTION 5: sleepiness CORRECT OPTION: 4 EXPLANATION: spicy food causes you heartburn.Medium is not that can happen to you.spicy food doesn’t cause illness or throwing up or sleepiness. QUESTION: She let him know he was being over the top, and that his antics where a little what? OPTION 1: much OPTION 2: plenty OPTION 3: larger OPTION 4: lot of OPTION 5: big CORRECT OPTION: 1 EXPLANATION: The behaviour of the person was getting unbearble and a little much signifies something excess beyond capacity. All the other options are either grammatically or contextually incorrect. QUESTION: Where can a child learn about the adventures of a talking monkey? OPTION 1: rain forest OPTION 2: tropical areas OPTION 3: pet store OPTION 4: library OPTION 5: story book CORRECT OPTION: 5 EXPLANATION: Story books are books which are used for teaching children about various things like talking monkeys. Both tropical area sand rain forest are wild areas which are not a thing to teach child. Pet store and library are a diffrent type of place but cannot be used to teach children. QUESTION: You’ll likely have a kitchenette in what place where you sleep away from home? OPTION 1: house OPTION 2: hotel room OPTION 3: apartment OPTION 4: allen key OPTION 5: dormroom CORRECT OPTION: 2 EXPLANATION: Hotel room is a bedroom usually with bath in a hotel. You’ll likely have a kitchenette in a hotel room where you sleep away from home. House is a home where you live permanently and not away from home. Apartments are house and is not where you sleep away from home. Allen key is not a room where you can sleep. Dorm room usually comes without a kitchen. QUESTION: It was the only way out of town, the police parked their vehicles and drew their guns to create a what? OPTION 1: war OPTION 2: sporting goods store OPTION 3: military base OPTION 4: roadblock OPTION 5: fun CORRECT OPTION: 4 545 EXPLANATION: A roadblock is a barrier or barricade on a road which is set up to stop people passing through a road. Roads are ways of out towns. The police parked their vehicles to create a roadblock. Parking vehicles and drawing guns does not create fun all the other options. QUESTION: Sahmbi was lying about the fugitive’s location. He was lying because he wanted to avoid legal what? OPTION 1: confusion OPTION 2: being found out OPTION 3: hurt feelings OPTION 4: being fired OPTION 5: trouble CORRECT OPTION: 5 EXPLANATION: People lie to avoid legal troubles as they involve lot of hassle. All the other options have no legal implication and meaning. QUESTION: What does getting in line for a long time require in a person? OPTION 1: intention OPTION 2: getting in the front of the line OPTION 3: basic organization OPTION 4: early childhood socialization OPTION 5: patience CORRECT OPTION: 5 EXPLANATION: Patience is the capacity to accept or tolerate delay, problems, or suffering without becoming annoyed or anxious which is what required in a person to get in line for a long time. Getting in front of the line is not something in a person and getting in line for a long time does not require the things given in the other options. QUESTION: What might a person see at the scene of a brutal killing? OPTION 1: bloody mess OPTION 2: pleasure OPTION 3: being imprisoned OPTION 4: feeling of guilt OPTION 5: cake CORRECT OPTION: G.4 Naturalness Test Example Prompt The following is the prompt to filter examples for the naturalness of our interventions. Because this prompt is designed for instruction-tuned Llama2 models, it surrounds the instruction with [INST] tags, matching the format these models were finetuned on. [INST] I’m going to show a sentence, and followed by the same sentence with a word added. It’s fine if the added word changes the meaning of the sentence. However, I want you to tell me if the second sentence still makes sense with the added word. Sentence 1: "The children throw rocks at the militant threatening their safety." Sentence 2: "The stuck children throw rocks at the militant threatening their safety." Does the second sentence make sense with the added word? Please begin your answer with "Yes" or "No". [/INST] 546
2024_short
49
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 56–64 August 11-16, 2024 ©2024 Association for Computational Linguistics Speculative Contrastive Decoding Hongyi Yuan12∗, Keming Lu2, Fei Huang2, Zheng Yuan2, Chang Zhou2 1Tsinghua University, 2Alibaba Inc. [email protected] {lukeming.lkm,feihu.hf}@alibaba-inc.com {yuanzheng.yuanzhen,ericzhou.zc}@alibaba-inc.com Abstract Large language models (LLMs) exhibit exceptional performance in language tasks, yet their auto-regressive inference is limited due to high computational requirements and is suboptimal due to the exposure bias. Inspired by speculative decoding and contrastive decoding, we introduce Speculative Contrastive Decoding (SCD), a straightforward yet powerful decoding approach that leverages predictions from smaller language models (LMs) to achieve both decoding acceleration and quality improvement. Extensive evaluations and analyses on four diverse language tasks demonstrate the effectiveness of SCD, showing that decoding efficiency and quality can compatibly benefit from one smaller LM. 1 Introduction Large language models (LLMs) have advanced the versatility and proficiency in approaching realworld natural language tasks such as general instruction following (Ouyang et al., 2022; Taori et al., 2023; Lu et al., 2023) and reasoning (Cobbe et al., 2021; Wei et al., 2023; Yuan et al., 2023). Most existing LLMs (Brown et al. (2020); Touvron et al. (2023); Bai et al. (2023),inter alia) are built on decoder-only Transformers. Due to the auto-regressive nature during inference, the runtime of decoding inference can be excessive on general computation infrastructure, and the generation quality can be sub-optimal due to the exposure bias (Arora et al., 2022). Improving decoding inference has been the spotlight of the research community in language generation (Vijayakumar et al., 2018; Holtzman et al., 2020; Su et al., 2022). As for decoding acceleration, one prominent method named speculative decoding (Leviathan et al., 2022; Chen et al., 2023) has been proposed and leverages relatively smaller language models (LMs) to predict several successive token ∗Work done during internship at Alibaba Inc. generations of target LLMs. The LLMs only require one-time forward computation for checking the validity of predictions from the smaller LMs. The decoding method maintains the target LLMs’ token distributions and accelerates more when smaller LMs can accurately predict the potential target LLMs’ generations. As for the generation quality, contrastive decoding has been recently proposed (Li et al., 2023a). Contrastive decoding assumes that conjugated smaller LMs may present higher systematic tendencies to generate erroneous tokens than the larger ones, and the method seeks to eliminate such systematic error by contrasting the token distribution between smaller LMs and larger LMs. From either inference acceleration or quality improvement, these works have demonstrated a promising direction by integrating smaller LMs during auto-regressive generation. Inspired by both speculative and contrastive decoding, we propose Speculative Contrastive Decoding (SCD), which exploits a single smaller LM for decoding improvement in speed and quality en bloc. Comprehensive evaluations of four diverse tasks show that SCD can achieve similar acceleration factors of speculative decoding while maintaining the quality improvement from contrastive decoding. By further analyzing the token distributions of the smaller and larger LMs in SCD, we show the inherent compatibility of decoding acceleration and quality improvement. The contributions of this paper can be summarized as follows: • We propose Speculative Contrastive Decoding for efficacious LLM inference. • Comprehensive experiments and analysis illustrate the compatibility of speculative and contrastive decoding on 4 diverse tasks. 2 Related Works In terms of inference acceleration, recent research has been devoted to developing various efficient 56 decoding methods (Yao et al., 2022; Kwon et al., 2023; Cai et al., 2023). Speculative decoding Leviathan et al. (2022); Chen et al. (2023); Kim et al. (2023) is one of these recent works and utilizes smaller models for acceleration. Miao et al. (2023); Spector and Re (2023) propose to organize predictions from small LMs into tree structures to accelerate speculative decoding further. In terms of inference quality, rich research has been suggested (Vijayakumar et al., 2018; Holtzman et al., 2020; Su et al., 2022; Su and Xu, 2022; Finlayson et al., 2023) and contrastive decoding achieves better decoding qualities by similarly integrating smaller LMs and devise contrastive token distributions (Li et al., 2023a; O’Brien and Lewis, 2023). It can further be adjusted to other variants such as the token distribution contrasting between model layers (Chuang et al., 2023) or different inputs (Yona et al., 2023). SCD draws inspiration from these works and benefits both decoding speed and quality by incorporating smaller LMs into generation. 3 Preliminaries We follow the terminology in Li et al. (2023a), and term the target larger LMs as the expert LMs while the smaller LMs as the amateur LMs denoted as Me and Ma respectively. 3.1 Contrastive Decoding The intrinsic rationale of contrastive decoding (CD) is that amateur LMs have stronger systematic undesirable tendencies to produce undesirable patterns (e.g., hallucination) than expert LMs. By contrasting the token distributions between expert and amateur LMs, such tendencies can be alleviated. There have been successively proposed two versions of contrastive decoding by Li et al. (2023a) and O’Brien and Lewis (2023), which we term as Original contrastive decoding and Improved contrastive decoding. The final contrastive logit scores for the original contrastive decoding sori(xi|x<i) and the improved contrastive decoding simp(xi|x<i) are respectively: sori(xi|x<i) =  log PMe(xi|x<i) −log PMa(xi|x<i), xi ∈Vα ori,i −∞, xi /∈Vα ori,i simp(xi|x<i) =  (1 + β)YMe(xi|x<i) −βYMa(xi|x<i), xi ∈Vα imp,i −∞, xi /∈Vα imp,i Algorithm 1: Speculative Contrastive Decoding Data: Me, Ma, input prefix xinp Result: [xinp, x1, .., xk] 1 for i from 1 to γ do 2 xi ∼PMa(xi) = Ma(xi|xinp, x<i); 3 PMe(x1), .., PMe(xγ+1) = Me(x1, .., xγ|xinp); 4 Calculate Pn(x1), .., Pn(xγ) following Section §3.1; 5 r1, .., rγ i.i.d sampled from Uniform(0, 1); 6 k = min  {i|ri > Pn(xi) PMa (xi)} ∪{γ + 1}  ; 7 if k ≤γ then 8 Pk(xk) = norm(max(0, Pn(xk) −PMa(xk)); 9 Resample xk ∼Pk(xk); 10 else 11 PMa(xγ+1) = Ma(xγ+1|xinp, x1, .., xγ); 12 Calculate Pn(xγ+1) following Section §3.1; 13 xγ+1 ∼Pn(xγ+1); where P· and Y· are respectively the token probability and logit generated from LMs. Vα ·,i denotes the adaptive plausibility constraint that dynamically restricts the logits from producing the erroneous modes. The adaptive plausibility constraints are calculated as Vα ori,i =  w|PMe(w|x<i) > α max w∈V PMe(w|x<i)  , Vα imp,i =  w|YMe(w|x<i) > log α + max w∈V YMe(w|x<i)  . A token is generated from the contrastive token distribution P τ n(xi) = softmaxτ (sn(xi|x<i)), n ∈{ori, imp}, where τ represents the softmax temperature that determines the smoothness of the contrastive token distribution. 3.2 Speculative Decoding Instead of requiring one forward computation of Me for each token in vanilla decoding, speculative decoding (SD) utilizes Ma to primarily generate γ tokens at each iteration then Me makes one forward computation to check the validity of the γ tokens. If Me accepts all the γ tokens, it finishes the iteration with an additional generated token, resulting in γ + 1 tokens generated. Otherwise, if Me rejects a token at r, the token is re-sampled according to Me to substitute the rejected token; hence the iteration finishes with r tokens generated. With only one-time forward computation of Me, multiple tokens are generated at each iteration. When the ratio between the runtime required of Ma and Me (the cost coefficient c, Leviathan et al. (2022)) is low and the token acceptance rate is high, there will present a notable acceleration. 57 4 Speculative Contrastive Decoding Speculative decoding leverages smaller Ma only for generation acceleration, while not making the best of the token distributions from Ma. It is natural to simultaneously apply the contrastive token distribution, and with negligible computational overhead, the generation quality and efficiency can benefit from integrating speculative and contrastive decoding. Therefore, we propose Speculative Contrastive Decoding (SCD). Concretely, at each iteration, γ tokens are generated from the amateur model Ma. When checking the validity of the tokens, the target distribution becomes P τ n, n ∈{ori, imp} from contrastive distribution instead of PMe in speculative decoding. For a token x in the Ma-generated tokens, it is rejected with probability 1 − P τ n (x) PMa(x) and then a new token in place of x is re-sampled from norm(max(0, P τ n(x) −PMa(x)), where norm (f(x)) = f(x)/ P x f(x), s.t.f(x) ≥0. If all the Ma-generated tokens are accepted, then an additional token is sampled from P τ n. The sampling procedure of SCD is similar to the original speculative decoding in Leviathan et al. (2022); Chen et al. (2023). However, it is worth noticing that in our SCD, when all the Magenerated tokens are accepted, we require an additional forward computation from Ma to acquire its last token logit for calculating the contrastive distribution P τ n at that iteration, while in speculative decoding, the additional token is sampled directly from Me. This computational overhead is negligible when c is small. We detailed the algorithm of our SCD in Algorithm Alg. 1. The difference from the original speculative decoding is highlighted in blue. 5 Experiment Experiment Setting. We evaluate SCD and other baselines on four benchmarks: WikiText (Merity et al., 2016), HumanEval (Chen et al., 2021), AlpacaEval (Li et al., 2023b), and GSM8k (Cobbe et al., 2021). The four benchmarks span diverse language tasks of open-ended generation, code generation, human alignment, and mathematical reasoning respectively. For WikiText, we use the pre-trained Llama27B and Llama270B (Touvron et al., 2023) as Ma and Me and follow Li et al. (2023a) to use diversity, MAUVE (Pillutla et al., 2021) and coherence as evaluation metrics. For WikiText A.Eval GSM8k H.Eval Div. MAU. Coh. Score Acc. Pass@1 Ma 0.69.00 0.88.01 0.76.00 88.791.1 41.77.00 11.59.0 Me 0.75.00 0.88.01 0.75.00 94.66.79 64.19.04 28.66.0 SD 0.75.00 0.90.01 0.75.01 94.28.83 64.27.07 28.66.0 CDori 0.91.00 0.95.00 0.73.00 94.56.82 64.42.03 37.20.0 SCDori 0.91.00 0.94.00 0.72.01 94.91.78 64.44.06 37.20.0 E.A.ori ×1.78 ×2.92 ×3.32 ×3.01 CDimp 0.73.01 0.90.01 0.74.00 94.78.79 64.91.01 33.54.0 SCDimp 0.73.00 0.91.01 0.74.00 95.03.77 64.90.02 33.54.0 E.A.imp ×2.10 ×2.95 ×3.32 ×3.18 Table 1: Main results of SCD. H.Eval, and A.Eval are shorts for HumanEval and AlpacaEval. MAU. and Coh. are shorts for MAUVE and coherence. E.A. presents the expected acceleration under c = 0.05. The standard errors under 3 repetitions for each result are marked in subscripts. The best choices of α and β for (S)CD are left to Appx. §A.3. HumanEval, we use the pre-trained Llama27B and Llama270B and assess the 1-round pass rate. For AlpacaEval, we use human-aligned Llama2chat7B and Llama2chat70B and report win-rates over textdavinci-003 judged by GPT-4. For GSM8k, we use fine-tuned Llama27B and Llama270B on its training set and report the accuracy of the testset results. We set γ = 4 across all experiments and set the temperature τ to 0.7 for WikiText and AlpacaEval and 0.001 for GSM8k and HumanEval. We leave the detailed experiment settings to Appx. §A. Quality Results. As shown in Tab. 1, original and improved SCD and CD demonstrate significant improvement over Me in GSM8k and HumanEval. On WikiText, only original CD and SCD outperform Me in terms of diversity with +0.16 and MAUVE with +0.06. There is no obvious improvement in Coherence. On AlpacaEval, although both versions of SCD and CD show better results than Me, such improvement is not significant due to the high variance of GPT4-as-ajudge. We can see that different versions of SCD suggest different levels of improvement. Original SCD performs better on WikiText and HumanEval while inferior on GSM8k to improved SCD. Results across four benchmarks show SCD can benefit various LLMs on diverse language tasks, maintaining the same generation quality improvement as CD. Acceleration. To demonstrate the inference acceleration of SCD, we primarily provide the expected acceleration factor of SCD theoretically with re58 Figure 1: Hyper-parameter analysis on expected acceleration factors regarding empirical acceptance rate λ. The best hyper-parameter settings as in Tab. 1 are the lines marked with triangles. Figure 2: The averaged token distribution entropy with error bars of rejected and accepted tokens in SCD. spect to the number of Ma token predictions per iteration γ, the acceptance rate λ, and the cost coefficient c, which proof is left to Appx. §B. Theorem 5.1. The expected acceleration factor in decoding runtime is 1−λγ+1 (1−λ)(1+cγ+cλγ). In Tab. 1, consistent acceleration is presented across different benchmarks. We further visualize the expected acceleration factor of SCD in Fig. 1 according to the empirical acceptance rates λ in HumanEval with different hyper-parameter settings. According to Theorem 5.1, the acceleration factors are depicted against the cost coefficient c, which is usually of small values representing the ratio of runtime required of Ma and Me and depends on the infrastructures (e.g., GPU) that serve the LLMs. We can see that the acceptance rates hence the corresponding acceleration factors of original SCD are more sensitive to hyper-parameters compared to improved SCD. With proper hyper-parameters, SCD can achieve similar acceleration to the speculative decoding (dotted lines), which indicates the negligible speed trade-off to incorporate the contrastive token distributions. Results on GSM8k are listed in Appx. §D presenting similar patterns. 6 Analysis Compatibility. Results presented in §5 show SCD can combine the benefits of CD and SD. We delve deep into the reasons for such compatibility. We calculate the average entropy of token probabilities from Ma and Me regarding the accepted and Figure 3: Performance sensitivity regarding α and β. rejected tokens in SCD. As shown in Fig. 2, token distribution entropy from both Ma and Me of accepted tokens is significantly higher than that of rejected tokens. The phenomenon suggests SCD enjoys acceleration from accepting easy tokens of lower entropy while benefiting from contrastive token distribution by rejecting hard tokens of higher entropy. We also present a case study from GSM8k in Appx. §C to demonstrate such compatibility. Sensitivity. Through Fig. 3, we show how performances fluctuate with respect to the hyperparameter α and β. We can see that improved SCD is less sensitive to both α and β on GSM8k compared to the original SCD. This is possibly due to the better flexibility of manipulating logits than probabilities. Results on HumanEval are listed in Appx. §D presenting similar phenomenons. 7 Conclusion In this paper, we propose speculative contrastive decoding, a decoding strategy that naturally integrates small amateur LMs for inference acceleration and quality improvement of LLMs. Extensive experiments show the effectiveness of SCD 59 and our delve-deep analysis also explains the compatibility through the scope of token distribution entropy. Our method can be easily deployed to improve the real-world serving of LLMs. Limitation In our experiments, we provide the expected acceleration factors of SCD on four benchmarks calculated according to the empirical token acceptance rates λ and selected cost coefficients c. The empirical acceleration factor is highly correlated to the actual infrastructures that serve both the larger LMs and the smaller LMs. To compensate for this demonstration limitation and better demonstrate the acceleration performance, we visualize the expected acceleration factor by spanning across a range of c in Fig. 1. This is a common limitation of deploying speculative decoding in the real-world LLM serving. For example, the runtime of switching between the forward computation of Ma and Me would be non-negligible without properly optimized infrastructures, causing a relatively large c hence potentially resulting in deceleration even with high acceptance rates. Broader Impact Although LLMs have demonstrated exceptional performance and been helpful real-world assistants recently, the massive computational demands of LLMs forbid most users including potential researchers from local deployments, who generally alter to use APIs from LLM servings. Therefore, effective methods, including our SCD, to improve the speed and quality from the perspective of decoding inference have much potential to advance LLM-based services. References Kushal Arora, Layla El Asri, Hareesh Bahuleyan, and Jackie Cheung. 2022. Why exposure bias matters: An imitation learning perspective of error accumulation in language generation. In Findings of the Association for Computational Linguistics: ACL 2022, pages 700–710, Dublin, Ireland. Association for Computational Linguistics. Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, and Tianhang Zhu. 2023. Qwen technical report. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, T. J. Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeff Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. ArXiv, abs/2005.14165. Tianle Cai, Yuhong Li, Zhengyang Geng, Hongwu Peng, and Tri Dao. 2023. Medusa: Simple framework for accelerating llm generation with multiple decoding heads. https://github.com/ FasterDecoding/Medusa. Charlie Chen, Sebastian Borgeaud, Geoffrey Irving, Jean-Baptiste Lespiau, Laurent Sifre, and John Jumper. 2023. Accelerating large language model decoding with speculative sampling. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. 2021. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374. Yung-Sung Chuang, Yujia Xie, Hongyin Luo, Yoon Kim, James Glass, and Pengcheng He. 2023. Dola: Decoding by contrasting layers improves factuality in large language models. arXiv preprint arXiv:2309.03883. Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. 2021. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168. Matthew Finlayson, John Hewitt, Alexander Koller, Swabha Swayamdipta, and Ashish Sabharwal. 2023. Closing the curious case of neural text degeneration. Mingqi Gao and Xiaojun Wan. 2022. DialSummEval: Revisiting summarization evaluation for dialogues. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5693–5709, Seattle, United States. Association for Computational Linguistics. Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021. SimCSE: Simple contrastive learning of sentence 60 embeddings. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6894–6910, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text degeneration. In International Conference on Learning Representations. Sehoon Kim, Karttikeya Mangalam, Suhong Moon, Jitendra Malik, Michael W. Mahoney, Amir Gholami, and Kurt Keutzer. 2023. Speculative decoding with big little decoder. Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, and Ion Stoica. 2023. Efficient memory management for large language model serving with pagedattention. In Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles. Yaniv Leviathan, Matan Kalman, and Yossi Matias. 2022. Fast inference from transformers via speculative decoding. In International Conference on Machine Learning. Xiang Lisa Li, Ari Holtzman, Daniel Fried, Percy Liang, Jason Eisner, Tatsunori Hashimoto, Luke Zettlemoyer, and Mike Lewis. 2023a. Contrastive decoding: Open-ended text generation as optimization. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12286–12312, Toronto, Canada. Association for Computational Linguistics. Xuechen Li, Tianyi Zhang, Yann Dubois, Rohan Taori, Ishaan Gulrajani, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2023b. Alpacaeval: An automatic evaluator of instruction-following models. https://github.com/tatsu-lab/alpaca_eval. Keming Lu, Hongyi Yuan, Zheng Yuan, Runji Lin, Junyang Lin, Chuanqi Tan, Chang Zhou, and Jingren Zhou. 2023. #instag: Instruction tagging for analyzing supervised fine-tuning of large language models. Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2016. Pointer sentinel mixture models. arXiv preprint arXiv:1609.07843. Xupeng Miao, Gabriele Oliaro, Zhihao Zhang, Xinhao Cheng, Zeyu Wang, Rae Ying Yee Wong, Zhuoming Chen, Daiyaan Arfeen, Reyna Abhyankar, and Zhihao Jia. 2023. Specinfer: Accelerating generative llm serving with speculative inference and token tree verification. arXiv preprint arXiv:2305.09781. Sean O’Brien and Mike Lewis. 2023. Contrastive decoding improves reasoning in large language models. Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. 2022. Training language models to follow instructions with human feedback. Krishna Pillutla, Swabha Swayamdipta, Rowan Zellers, John Thickstun, Sean Welleck, Yejin Choi, and Zaid Harchaoui. 2021. MAUVE: Measuring the gap between neural text and human text using divergence frontiers. In Advances in Neural Information Processing Systems. Benjamin Spector and Chris Re. 2023. Accelerating llm inference with staged speculative decoding. arXiv preprint arXiv:2308.04623. Yixuan Su, Tian Lan, Yan Wang, Dani Yogatama, Lingpeng Kong, and Nigel Collier. 2022. A contrastive framework for neural text generation. Yixuan Su and Jialu Xu. 2022. An empirical study on contrastive search and contrastive decoding for open-ended text generation. arXiv preprint arXiv:2211.10797. Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2023. Stanford alpaca: An instruction-following llama model. https:// github.com/tatsu-lab/stanford_alpaca. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023. Llama 2: Open foundation and fine-tuned chat models. Ashwin K Vijayakumar, Michael Cogswell, Ramprasath R. Selvaraju, Qing Sun, Stefan Lee, David Crandall, and Dhruv Batra. 2018. Diverse beam search: Decoding diverse solutions from neural sequence models. 61 Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou. 2023. Chain-of-thought prompting elicits reasoning in large language models. Zhewei Yao, Reza Yazdani Aminabadi, Minjia Zhang, Xiaoxia Wu, Conglong Li, and Yuxiong He. 2022. Zeroquant: Efficient and affordable post-training quantization for large-scale transformers. Advances in Neural Information Processing Systems, 35:27168–27183. Gal Yona, Or Honovich, Itay Laish, and Roee Aharoni. 2023. Surfacing biases in large language models using contrastive input decoding. arXiv preprint arXiv:2305.07378. Zheng Yuan, Hongyi Yuan, Chengpeng Li, Guanting Dong, Keming Lu, Chuanqi Tan, Chang Zhou, and Jingren Zhou. 2023. Scaling relationship on learning mathematical reasoning with large language models. A Experiment Details A.1 Benchmark Details (1) WikiText (Merity et al., 2016) contains articles from Wikipedia. We follow the pre-processing scripts from Li et al. (2023a) and result in 1,733 samples. The generation starts with the first 32 tokens as prompts, and the max generation length is set to 256. We report diversity, MAUVE (Pillutla et al., 2021), and coherence as metrics, following Li et al. (2023a). Diversity metrics assess the unique multi-grams in the completion generated from the LMs. Higher diversity scores indicate better lexical diversity in the completion. The diversity is calculated according to: Div. = 4 Y n=2 | Set(n-grams)| |n-grams| . MAUVE is a metric proposed by Pillutla et al. (2021), which is empirically suggested to have better agreement with human annotations (Gao and Wan, 2022). Coherence evaluates the semantic correlation between the input prefix and the output generation via the similarity of embeddings. We use the sentence embeddings following SimCSE (Gao et al., 2021) and the coherence score is calculated as: emb(xprefix) · emb(xgen) ∥emb(xprefix)∥∥emb(xgen)∥. (2) GSM8k (Cobbe et al., 2021) contains training and evaluation sets of grade mathematical reasoning problems. We first fine-tune the Llama27B and Llama270B by 3 epochs to produce the amateur and expert LMs. We report the final accuracy of the test sets. (3) HumanEval (Chen et al., 2021) measures coding correctness for synthesizing programs from 164 doc-strings. We report the 1round pass rate (Pass@1). (4) AlpacaEval (Li et al., 2023b) contains 805 samples from various evaluation sets to evaluate the alignment abilities of LLMs by comparing evaluated models with text-davinci-003. We report the win rate judged by GPT-4. A.2 Configuration Details We use Llama27B as the amateur model while Llama270B as the expert model on WikiText and HumanEval benchmarks to evaluate how SCD performs with pre-trained models. Then, we finetune Llama27B and Llama270B on the GSM8k training set to evaluate the SCD performance with supervised fine-tuning models on the mathematical reasoning task. We also apply Llama2chat7B and Llama2chat70B on AlpacaEval to assess LLMs for human alignment using SCD. We set the softmax temperature consistent to 0.7 on WikiText and AlpacaEval while 0.001 on other benchmarks. In SCD and SD, we always set the prediction temperature from the amateur LMs to 1.0 for fair comparison. All experiments are conducted on 2 A100 80G GPUs with KV cache implementation. A.3 Hyper-parameter Details We conduct grid searches regarding α and β for the best performance of CD and SCD. The best hyper-parameter settings for the results in Tab. 1 are listed in Tab. 2. B Proof of Theorem Theorem 5.1 Theorem B.1. The expected acceleration factor in decoding runtime is 1−λγ+1 (1−λ)(1+cγ+cλγ). Proof. Similar to Theorem 3.8 in Leviathan et al. (2022), given the token acceptance rate λ and the runtime per forward computation step for Me and Ma are T and cT. The total runtime required for each iteration is T + cγT + cλγT, where Ma requires γ generation steps and possibly one additional step forward computation if all γ tokens are accepted while Ma requires one forward computation for token validity checking. Following Equation (1) in Leviathan et al. (2022), the expected generated token number per iteration is 62 WikiText AlpacaEval GSM8k HumanEval α β α β α β α β CDori 0.1 0.5 0.5 0.5 SCDori 0.1 0.5 0.5 0.5 CDori 0.1 0.5 0.5 0.5 0.5 1.0 0.5 0.5 SCDori 0.1 0.5 0.5 0.5 0.5 1.0 0.5 0.5 Table 2: The hyper-parameter settings for the results in Tab. 1 1−λγ+1 1−λ . Therefore, the expected runtime needed of SCD is 1−λ 1−λγ+1 (T +cγT +cλγT), hence the expected acceleration factor is 1−λγ+1 (1−λ)(1+cγ+cλγ). C Case Study Case Study on GSM8k Judy teaches 5 dance classes, every day, on the weekdays and 8 classes on Saturday. If each class has 15 students and she charges $15.00 per student, how much money does she make in 1 week? Responses [RED and BLUE tokens are generated by the expert LM after rejection and all acceptance, respectively. Other tokens are generated by the amateur LM.] She teaches 5 classes every weekday so that’s 5*5 = «5*5=25»25 classes She teaches 25 weekday classes and 8 Saturday classes for a total of 25+8 = «25+8=33»33 classes Each class has 15 students so she teaches 15*33 = «15*33=495»495 students in 1 week She charges $15.00 per student and she teaches 495 students so she makes 15*495 = $«15*495=7425.00»7,425.00 in 1 week In this case, we can see that the rejected and re-sampled tokens are usually the beginning of a sentence, numbers, operations, or named entities, which are generally informative tokens in the reasoning chain of thoughts. This also indicates that quality improvement originates from re-sampling informative tokens by contrastive token distribution while the acceleration comes from speculative prediction of the amateur LMs. D Additional Results 63 Figure 4: Hyper-parameter analysis on expected acceleration factors regarding empirical acceptance rate λ. The best hyper-parameter settings as in Tab. 1 are the lines marked with triangles. Figure 5: Performance sensitivity regarding α and β. 64
2024_short
5
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 547–557 August 11-16, 2024 ©2024 Association for Computational Linguistics Naming, Describing, and Quantifying Visual Objects in Humans and LLMs Alberto Testoni ILLC, University of Amsterdam [email protected] Juell Sprott University of Amsterdam [email protected] Sandro Pezzelle ILLC, University of Amsterdam [email protected] Abstract While human speakers use a variety of different expressions when describing the same object in an image, giving rise to a distribution of plausible labels driven by pragmatic constraints, the extent to which current Vision & Language Large Language Models (VLLMs) can mimic this crucial feature of language use is an open question. This applies to common, everyday objects, but it is particularly interesting for uncommon or novel objects for which a category label may be lacking or fuzzy. Furthermore, similar patterns of variation are observed among human speakers for highly context-sensitive expressions, such as the quantifiers ‘few’ or ‘most’. In our work, we evaluate VLLMs (FROMAGe, BLIP-2, LLaVA) on three categories (nouns, attributes, and quantifiers) where humans show great subjective variability concerning the distribution over plausible labels, using datasets and resources mostly under-explored in previous work. Our results reveal mixed evidence on the ability of VLLMs to capture human naming preferences at generation time: while some models are good at mimicking human distributions for nouns and attributes, all of them fail to assign quantifiers, a task that requires more accurate, high-level reasoning. 1 Introduction Recent years have witnessed increasing popularity in the development of Large Language Models (LLMs) given their notable performance in following instructions, answering questions, and in many reasoning tasks, serving as general-purpose assistants (Huang and Chang, 2023; Zhao et al., 2023). In parallel, a new generation of powerful Vision and Language LLMs (VLLMs) with excellent visual understanding and generation capabilities have emerged (Gan et al., 2022; Li et al., 2023a). Rapidly, these models have outperformed previous approaches in many downstream tasks. In our work, we focus on the Natural Language Generation skills of powerful VLLMs by analyzing an important but under-explored problem, namely, their ability to capture human production variability (in terms of distribution over plausible labels/descriptions) in naming tasks. Previous work highlighted that speakers display a wide range of variability when asked to utter sentences, resulting in inter-speaker variability but also variability over time for the same speaker (Levelt, 1993; Fan et al., 2018; Alva-Manchego et al., 2021; Takmaz et al., 2024). In particular, in object naming, speakers may refer to objects appearing in a visual scene in many different ways (Graf et al., 2016). Objects generally belong to multiple categories/super-categories, and all the lexicalized labels of such categories are valid (Brown, 1958). However, although multiple labels are valid, humans pragmatically adapt their naming preferences depending on the context (Olson, 1970; Rohde et al., 2012), resulting in some labels being more frequently uttered than others. For instance, ‘mammal’ is a correct label to describe a Gold Retriever, but pragmatically less likely than ‘dog’. Similarly, speakers tend to prefer sub-ordinate words like ‘car’ instead of the potentially ambiguous super-ordinate word ‘vehicle’ in case multiple vehicles appear in the image. In our work, we are interested in capturing both these two features: while many labels are equally valid and acceptable when naming or describing entities, these labels distribute according to a certain likelihood distribution. In our work, we investigate this issue, which has recently entered the NLP research community (Plank, 2022), in three different production conditions. First of all, we consider the ManyNames dataset (Silberer et al., 2020a,b), where annotators assign labels to describe common objects in images in a referential expression generation setting (Yu et al., 2016; Kazemzadeh et al., 2014). We also explore two additional resources that have not received much attention within the NLP community 547 ManyNames NOUN QUANT 100% 0% Percentage of responses chair sofa couch armchair % descriptions mentioning color % descriptions mentioning texture 100% 0% none 40% 0% How many entities in the image are animals? almost none the smaller part few some many most almost all all Figure 1: Datasets used in our experiments and distribution of human answers/labels. In NOUN (left), we focus on the frequency of color and texture attributes in the generated descriptions. In ManyNames (middle), each object is associated with the frequency of the nouns used to describe it. In QUANT (right), each image is associated with a probability distribution over a list of quantifiers that humans selected when answering the question ‘How many of the objects are animals?’. and that allow us to broaden the horizons of this phenomenon. First, we analyze the NOUN dataset (Horst and Hout, 2016), where speakers describe uncommon and novel objects: we focus on both the choice of the adjectives and how they distribute in the across-subject distribution. Finally, we investigate human production variability arising from the context-sensitive nature of non-numerical quantifiers using the data collected by Pezzelle et al. (2018). We evaluate three VLLMs (FROMAGe, BLIP-2, LLaVA) on the above-mentioned tasks in a zeroshot setting. We sample multiple times from the model using nucleus sampling, mimicking various human speakers, and compare the generated samples against human production patterns using different metrics (Jensen–Shannon divergence and Pearson’s correlation, depending on the task at hand). Our results show that models weakly to moderately mimic human distributions in naming common and uncommon objects. Instead, all of them fail to mimic human distributions when selecting quantifiers, as highlighted by our in-depth analyses. 2 Tasks and Datasets We use the images and corresponding human labels or descriptions from three datasets in English, that we briefly describe below. NOUN The Novel Object and Unusual Name (NOUN) dataset (Horst and Hout, 2016) contains 64 images of multipart, multicolored, and threedimensional uncommon and novel objects. The dataset was originally created for behavioral studies on word learning and, to the best of our knowledge, it has not been used for NLP research. We focus on the naming task, where participants were asked to answer the question “What would you call this object?”. The answers are sentences like: ‘a plastic object with red stuff on top’. For each object, the proportion of colors (e.g., ‘red’, ‘bronze’) and textures (e.g., ‘soft’, ‘rough’) was calculated as the number of attributes given the number of responses. An example from the dataset is reported in Figure 1 (left), together with the ratio of colors and textures in human responses. In NOUN, we examine human production preferences on a high level, by looking at the frequency according to which certain adjectives (related to color and texture attributes) are used. ManyNames In ManyNames (Silberer et al., 2020a,b), the authors collected names for 25K objects appearing in real-world images from VisualGenome (Krishna et al., 2017) by asking human annotators to generate a name for them. Each object (highlighted by a red box in the image) is associated with an average number of 35.3 annotations. More than 90% of the objects are associated with more than one unique label (5.7 average name types per object). An example is shown in Figure 1 (middle). When describing the object in the 548 NOUN Q: What do you see in the image? LLaVA 🌋: 1) The image features a green toy with an orange nose and two orange wheels. 2) The image features a green and orange toy shaped like a face, with a big nose. ManyNames Q: Please name the object in the red box with the first name that comes to mind. LLaVA 🌋: 1) jet 2) plane 3) plane 4) jet 5) jet 6) jet QUANT Q: How many animals are there in the image? Pick the best among the following options: A: the smaller part, B: many, C: almost none, D: some, E: none, F: all, G: most, H: few, I: almost all LLaVA 🌋: 1) D (some) 2) I (almost all) 3) D (some) 4) H (few) Figure 2: Examples of the output generated by LLaVA (multiple samples with nucleus sampling decoding) for the three tasks analyzed in our work. For each task, a sample of the answers provided by the model is displayed. For space constraints, we only report a few random samples for each task. red box, most annotators referred to it as ‘chair’, while around 30% said ‘sofa’, and the remaining ones used ‘couch’ and ‘armchair’. The images in ManyNames are classified into 7 domains (e.g., vehicles, people, animals, etc.): for computational constraints, we evaluated 300 randomly sampled objects from each domain. Different from NOUN, we examine production preferences on a more finegrained level using the actual distribution over multiple labels. QUANT To study how quantifiers are used when referring to quantities grounded in images, Pezzelle et al. (2018) introduced a dataset of visual abstract scenes containing a variable number of animals and artifacts and asked human participants to answer the question “How many of the objects are animals?". Participants could select the answer from a list of nine pre-selected quantifiers: ‘none’, ‘almost none’, ‘the smaller part’, ‘few’, ‘some’, ‘many’, ‘most’, ‘almost all’, and ‘all’. The authors used images with 17 different proportions of animals and artifacts (ranging from 0% to 100%). In our work, we tested 50 images for each of the 17 proportions in the dataset, resulting in a total number of 850 images.1 1The actual images used in our experiment come from Testoni et al. (2019), which built a large-scale dataset using the stimuli and pipeline by Pezzelle et al. (2018). 3 Experiments 3.1 Generation In our work, we test the performance of three models in a zero-shot setting: BLIP-2 (Li et al., 2023b), FROMAGe (Koh et al., 2023), and LLaVA 1.5 (Liu et al., 2023b,a). All three models can be prompted for zero-shot generation. Additional details are discussed in Appendix A.4. For each of the three tasks described in Section 2, we used prompts that resembled the instructions provided to human annotators during the dataset collection. ManyNames: Q: Please name the object in the red box with the first name that comes to mind. A:. NOUN: Q: What do you see in the image? A:. QUANT: Question: How many animals are there in the image? Pick the best among the following options: , followed by the list of the nine quantifiers, each associated with a letter (from A to I). The ordering of the quantifiers is randomized at each inference step. Although investigating several variations of the above-mentioned prompts is beyond the scope of the paper, we discuss some insights on this aspect in Appendix A.5. We sample multiple times from each model using nucleus sampling decoding (Holtzman et al., 2019), with p = 0.9, t = 0.5 (different hyperparameter configurations did not significantly affect the overall results, as discussed in Appendix A.5). For each task, we sample the model 20 times and filter out ill-formed answers, such as empty strings or question repeti549 NOUN (Color) NOUN (Texture) ManyNames Quant 0.2 0.1 0.0 0.1 0.2 0.3 0.4 0.5 0.6 Correlation / Inverse JS Divergence FROMAGe BLIP-2 LLaVA Figure 3: For NOUN and QUANT, the plot shows the correlation between human responses and model samples. For ManyNames, it shows the inverse JS divergence between the frequency of the nouns chosen by annotators and the ones generated by the model. tions. After filtering, we randomly take 10 generations per image for ManyNames and NOUN, and 15 for QUANT. In this way, we have the same number of generations for each image/object. Some examples of the output generated by LLaVa for the three tasks analyzed are reported in Figure 2. We release our code at: https://github.com/ albertotestoni/ndq_visual_objects. 3.2 Evaluation Each object in NOUN is associated with color and texture saliency, i.e., how often speakers described the object using these attributes. We use a stringmatch approach (see Appendix A.2) to analyze the model output and compute color and texture saliency. We then compute the Pearson’s r correlation between human and model saliency, considering all objects. Each object in ManyNames is associated with H unique nouns assigned by human annotators and M unique nouns sampled from the model output, together with their frequency. Given A = H ∪M, we construct two term-frequency vectors for human and model output, h and m, respectively, with |h| = |m| = |A|. Each noun in A is mapped to a unique position in h and m and each vector is filled with its normalized frequency. We evaluate the models by computing the inverse Jensen–Shannon (JS, bounded between 0 and 1) divergence (Lin, 1991) between h and m. See Figure 6 in the Appendix for an example. Each image in QUANT is associated with a probability distribution over 9 quantifiers, depending on the proportion of animals and artifacts. From the model outputs, we extract the relative frequency of each quantifier and compute Pearson’s r correlation with the human distribution. We then average the correlation results over all images. Correlation is bounded between -1 and 1. Higher is better for all the metrics. 3.3 Results As we can observe from Figure 3, the results for ManyNames and NOUN (color saliency) show a clear trend: all the models correlate, to some extent, with human production, with LLAVA obtaining the highest correlations for both tasks (around 0.5) and significantly outperforming (t-test, p < 0.01) both BLIP2 and FROMAGe.2 These findings align with previous work showing the primacy of LLaVA over other models (Liu et al., 2023b,a). However, the remaining tasks show critical weaknesses for all models. First, none of the models achieve a statistically significant correlation for texture saliency (all have p > 0.05). We conjecture that texture attributes are less common for the models compared to colors, and thus they may be less accurate when generating them: we leave an in-depth analysis of this issue for future work. Despite the correlation results being similar across models, our manual inspection reveals interesting differences: while the low performance of FROMAGe is due to an under-generation of texture attributes, the opposite is true for LLaVA, with BLIP-2 being more flexible in terms of texture attribute generation but not aligned with human variability (see Figure 7 for an example). Finally, all models show almost no correlation in assigning quantifiers to visual scenes, highlighting a severe limitation of all models on this task. We scrutinize this issue in the following Section. 4 The Curious Case of Quantifiers We run some analyses to investigate the poor performance of all models in the QUANT task. First of all, we acknowledge that the multiplechoice prompting used in QUANT is different and more complex than the prompts used for the other datasets. Still, it is unlikely that this is the main reason behind the poor performance of all models. 2Appendix A.1 shows per-domain results for ManyNames. 550 0.00 0.10 0.17 0.20 0.25 0.33 0.40 0.43 0.50 0.57 0.60 0.67 0.75 0.80 0.83 0.90 1.00 Proportion of targets (animals) in the image 0.2 0.0 0.2 0.4 0.6 Pearson's r correlation FROMAGe BLIP-2 LLaVA Figure 4: Pearson correlation results (y-axis) broken down by proportion of targets (animals) in the image (x-axis) in the QUANT task and dataset. Figure 4 shows the correlation results broken down by the proportion of animals in the image. We observe that even though the overall correlation results are similar across models (Figure 3), they perform quite differently depending on the proportion of animals in the scene. While BLIP-2 performs relatively well on the ‘extreme’ proportions (no animals or all animals in the image, when speakers generally choose the quantifiers ‘none’ and ‘all’, respectively), LLaVA excels at intermediate proportions, and FROMAGe performs better on proportions above 50%. Can we conclude that models properly handle the task of assigning the most likely quantifiers for some proportions? These results evoke two hypotheses: (a) The models are capable of selecting plausible quantifiers only for some proportions or, vice versa, they understand only some of the quantifiers analyzed; (b) The models have a bias towards some specific quantifiers, regardless of the proportion of targets in the scene, leading to a decent perform on some proportions as a side effect. Our additional analyses, reported in Figures 8 and 9 in the Appendix, support hypothesis (b): FROMAGe has a strong bias towards selecting the quantifier ‘many’; BLIP-2 frequently selects the extreme quantifiers ‘none’ and ‘all’, and its selection is not influenced by the proportion of targets; LLaVA has a bias towards selecting the quantifier ‘some’, regardless of the proportion of targets. To further shed light on this result, we qualitatively assess the ‘counting’ skills of the models, a crucial skill to succeed in assigning quantifiers. As the examples in Figure 11 in the Appendix illustrate, all models struggle to successfully count how many animals appear in the image. We hypothesize that the reason for the poor performance in assigning quantifiers lies in the quantity estimation and comparison skills of the models. This observation is in line with recent research investigating the poor ‘counting’ skills of current models (Paiss et al., 2023). 5 Conclusion While human speakers exhibit a wide range of human production variability in naming tasks, mirroring pragmatic constraints and subjective preferences, it is not clear to what extent VLLMs can mimic this peculiar trait of language use. In our work, we investigate this issue in three tasks: naming common objects, naming novel objects, and assigning quantifiers. Our results reveal that bestperforming models achieve a moderate correlation with human patterns in some tasks (object names and color terms). However, all models dramatically fail when assigning quantifiers, the only production setup that requires some form of reasoning, i.e., the ability to reason over sets of objects and process quantities. Based on our analyses, we hypothesize that the reason behind this failure stems from the poor “counting” skills of the models. Limitations In the following, we discuss some limitations of our study that may inspire follow-up work in this direction. The poor performance on the quantification tasks may stem from the higher complexity of the prompt used (multiple choice prompting). Even though in our paper we discuss how analyzing the output variability allows us to gain valuable insights even when the model is not accurate, we can not rule out the possibility that a simpler prompt may lead to more accurate results. As an initial step, we used a prompt that corresponds to the instruction provided to the participants of the original experiment in Pezzelle et al. (2018). In Appendix A.5 we discuss the effect of re-phrasing the original prompt instructions. Moreover, it is worth noting that the human production variability analyzed in our experiments is obtained by aggregating data coming from multiple speakers. Even though we do aim at this, we acknowledge that it is unlikely that one single model can mimic such a rich variability. Our study is more focused on understanding to what extent current Vision & Language LLMs can mimic this feature, showing the suitability of some tasks and datasets not explored in previous work. 551 Finally, we computed the color and saliency feature for NOUN using a string-matching approach based on a manually defined list of keywords (as described in Appendix A.2. We acknowledge that this approach may underestimate the color and texture saliency in the model output. Although in this case, the small size of the dataset allowed us to verify that this is not the case, we believe that it is important to take this point into account when running experiments on a larger scale. Moreover, as a limitation of the NOUN dataset (and not of our experimental setup), we do not have access to the actual color and texture labels used by human participants during the dataset collection. For this reason, in NOUN we do not consider the actual distribution of the attributes used by human speakers but just their overall frequency. Acknowledgments We would like to thank the Dialogue Modelling Group (DMG) at the University of Amsterdam for their feedback and support at the different stages of this work. We thank Brent Brakenhoff for his input on some preliminary experiments with the ManyNames dataset. Alberto Testoni is supported by funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 819455, PI R. Fernández). References Fernando Alva-Manchego, Carolina Scarton, and Lucia Specia. 2021. The (un) suitability of automatic evaluation metrics for text simplification. Computational Linguistics, 47(4):861–889. Roger Brown. 1958. How shall a thing be called? Psychological review, 65(1):14. Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hierarchical neural story generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 889–898, Melbourne, Australia. Association for Computational Linguistics. Zhe Gan, Linjie Li, Chunyuan Li, Lijuan Wang, Zicheng Liu, Jianfeng Gao, et al. 2022. Vision-language pretraining: Basics, recent advances, and future trends. Foundations and Trends® in Computer Graphics and Vision, 14(3–4):163–352. Caroline Graf, Judith Degen, Robert XD Hawkins, and Noah D Goodman. 2016. Animal, dog, or dalmatian? level of abstraction in nominal referring expressions. In CogSci. Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2019. The curious case of neural text degeneration. arXiv preprint arXiv:1904.09751. Jessica S Horst and Michael C Hout. 2016. The novel object and unusual name (noun) database: A collection of novel images for use in experimental research. Behavior research methods, 48:1393–1409. Jie Huang and Kevin Chen-Chuan Chang. 2023. Towards reasoning in large language models: A survey. In Findings of the Association for Computational Linguistics: ACL 2023, pages 1049–1065, Toronto, Canada. Association for Computational Linguistics. Sahar Kazemzadeh, Vicente Ordonez, Mark Matten, and Tamara Berg. 2014. ReferItGame: Referring to objects in photographs of natural scenes. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 787– 798, Doha, Qatar. Association for Computational Linguistics. Jing Yu Koh, Ruslan Salakhutdinov, and Daniel Fried. 2023. Grounding language models to images for multimodal inputs and outputs. In International Conference on Machine Learning, pages 17283–17300. PMLR. Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. 2017. Visual genome: Connecting language and vision using crowdsourced dense image annotations. International journal of computer vision, 123:32–73. Willem JM Levelt. 1993. Speaking: From intention to articulation. MIT press. Chunyuan Li, Zhe Gan, Zhengyuan Yang, Jianwei Yang, Linjie Li, Lijuan Wang, and Jianfeng Gao. 2023a. Multimodal foundation models: From specialists to general-purpose assistants. arXiv preprint arXiv:2309.10020, 1(2):2. Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. 2023b. Blip-2: Bootstrapping language-image pretraining with frozen image encoders and large language models. arXiv preprint arXiv:2301.12597. Jianhua Lin. 1991. Divergence measures based on the shannon entropy. IEEE Transactions on Information theory, 37(1):145–151. Haotian Liu, Chunyuan Li, Yuheng Li, and Yong Jae Lee. 2023a. Improved baselines with visual instruction tuning. arXiv preprint arXiv:2310.03744. Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. 2023b. Visual instruction tuning. arXiv preprint arXiv:2304.08485. David R Olson. 1970. Language and thought: aspects of a cognitive theory of semantics. Psychological review, 77(4):257. 552 Roni Paiss, Ariel Ephrat, Omer Tov, Shiran Zada, Inbar Mosseri, Michal Irani, and Tali Dekel. 2023. Teaching clip to count to ten. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3170–3180. Sandro Pezzelle, Raffaella Bernardi, and Manuela Piazza. 2018. Probing the mental representation of quantifiers. Cognition, 181:117–126. Barbara Plank. 2022. The “problem” of human label variation: On ground truth in data, modeling and evaluation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 10671–10682, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Hannah Rohde, Scott Seyfarth, Brady Clark, Gerhard Jäger, and Stefan Kaufmann. 2012. Communicating with cost-based implicature: A game-theoretic approach to ambiguity. In The 16th workshop on the semantics and pragmatics of dialogue, paris, september. Carina Silberer, Sina Zarrieß, and Gemma Boleda. 2020a. Object naming in language and vision: A survey and a new dataset. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 5792–5801, Marseille, France. European Language Resources Association. Carina Silberer, Sina Zarrieß, Matthijs Westera, and Gemma Boleda. 2020b. Humans meet models on object naming: A new dataset and analysis. In Proceedings of the 28th International Conference on Computational Linguistics, pages 1893–1905, Barcelona, Spain (Online). International Committee on Computational Linguistics. Ece Takmaz, Sandro Pezzelle, and Raquel Fernández. 2024. Describing images fast and slow: Quantifying and predicting the variation in human signals during visuo-linguistic processes. arXiv preprint arXiv:2402.01352. Alberto Testoni, Sandro Pezzelle, and Raffaella Bernardi. 2019. Quantifiers in a multimodal world: Hallucinating vision with language and sound. In Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics, pages 105–116, Minneapolis, Minnesota. Association for Computational Linguistics. Licheng Yu, Patrick Poirson, Shan Yang, Alexander C Berg, and Tamara L Berg. 2016. Modeling context in referring expressions. In Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part II 14, pages 69–85. Springer. Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, et al. 2023. A survey of large language models. arXiv preprint arXiv:2303.18223. A Appendix A.1 ManyNames Appendix Figure 9 shows the results on the ManyNames dataset broken down by the image domain. LLaVA outperforms other models in most of the domains, but for clothing and people it is comparable to BLIP-2. Note that all model reach have the poorest performance on these two domains. As highlighted by Silberer et al. (2020a,b) and confirmed by our manual inspection, models confuse people and clothing objects much more frequently than humans do. ManyNames is licensed under Creative Commons Attribution 4.0 International. A.2 NOUN Appendix We define the following list of color and texture attributes to analyze the samples generated by the model with a string-matching approach. Colors = [“Red”, “Orange”, “Yellow”, “Green”, “Blue”, “Purple”, “Pink”, “Brown”, “Gray”, “Black”, “White”, “Beige”, “Turquoise”, “Teal”, “Magenta”, “Lavender”, “Indigo”, “Maroon”, “Gold”, “Silver”, “Bronze”, “Copper”, “Olive”, “Navy”, “Sky blue”, “Cream”, “Peach”, “Rose”, “Fuchsia”, “Coral”, “Mint”, “Chartreuse”, “Salmon”, “Sienna”, “Slate”, “Tan”, “Crimson”, “Ivory”, “Khaki”, “Lilac”, “Mauve”, “Mustard”, “Rust”, “Scarlet”, “Tangerine”, “Vermilion”, “Violet”, “Wheat”, “Brick red”, “Caramel”] Textures = [“Smooth”, “Rough”, “Fuzzy”, “Soft”, “Hard”, “Bumpy”, “Slick”, “Sticky”, “Grainy”, “Sandy”, “Slippery”, “Jagged”, “Sharp”, “Coarse”, “Silky”, “Velvety”, “Wet”, “Dry”, “Glossy”, “Matte”, “Sparkly”, “Metallic”, “Wooden”, “Leathery”, “Plastic”, “Rubber”, “Furry”, “Woolly”, “Feathery”, “Smooth”, “Satin”, “Lace”, “Crochet”, “Knitted”, “Embroidered”, “Linen”, “Silk”, “Velvet”, “Suede”, “Corduroy”, “Denim”, “Felt”, “Tweed”, “Mesh”, “Hairy”, “Crisp”, “Crumbly”, “Flaky”, “Puffy”, “Spongy”, “Crunchy”, “Chewy”, “Gummy”, “Slimy”, “Starchy”, “Syrupy”, “Icy”, “Rocky”, “Stony”, “Sandy”, “Peppery”, “Salty”, “Sour”, “Sweet”, “Tangy”, “Tart”, “Spicy”, “Herbaceous”, “Earthy”, “Mossy”, “Woody”, “Smoky”, “Smokey”, “Rusty”, “Corroded”, “Weathered”, “Rugged”, “Smooth”, “Polished”, “Shiny”, “Gleaming”, “Dull”, “Muddy”, “Cloudy”, “Milky”, “Transparent”, “Translucent”, “Opaque”] Figure 7 shows an example of the output of different models, together with their color and texture 553 animals/ plants buildings clothing food home people vehicles 0.0 0.1 0.2 0.3 0.4 0.5 0.6 Inverse JS Divergence FROMAGe BLIP-2 LLAVA Figure 5: Inverse Jensen–Shannon divergence broken down by the image domain in ManyNames. saliency as well as human saliency values. NOUN is released without a specific license. A.3 QUANT Appendix Figures 8 and 9 show additional analyses on the QUANT dataset. They are discussed in Section 4. Figure 11 shows some qualitative examples of the models’ output when asked to answer the question ‘How many animals are there in the image?’. The images are randomly selected from QUANT. QUANT is released without a specific license. A.4 Models Appendix While FROMAGe is trained with a contrastive learning objective for image captioning and it is shown to perform particularly well with longer textual contexts, BLIP-2 jointly optimizes three pre-training objectives that share the same input format and model parameters: image-text contrastive learning, image-grounded text generation, and image-text matching. The main innovation of LLaVA is the use GPT-4 generated visual instruction tuning data. Moreover, LLaVA has a simpler scheme to connect image and language representations compared to BLIP-2 and FROMAGe. We used blip2-opt-2.7b and llava-v1.5-7b, while for FROMAGe we used the model made available by Koh et al. (2023). FROMAGe and LLaVA are released with an Apache-2.0 license. BLIP-2 is distributed with BSD 3-Clause License. We run our experiments under the model license. A.5 Generation Details The Effect of Different top_p Values We experimented with various top_p values for nucleus sampling decoding. As illustrated in Figure 10 (showing the results for LLaVA, with similar results for the other models), we observe that this variable does not play a significant role in our experimental setup for all the tasks analyzed. Different prompts In our experiments, we prompted the models with the same instructions provided to human annotators during the collection of the different datasets analyzed. We also experimented with small variations of the abovementioned prompts, such as ‘What is the object in the image?’ for the NOUN dataset, ‘Name the object in the red box with the most appropriate single name’ for ManyNames, and a more detailed instruction for QUANT, such as ‘Carefully examine the image. Can you determine the proportion of animals present, compared to objects? Please select the most accurate answer from the options below’. While we do not observe any significant difference between NOUN and ManyNames, the revised prompt for QUANT leads to a slight improvement in the model performance, with LLaVA reaching a correlation of 0.29. Still, the low absolute correlation coefficient highlights that computational models struggle to accurately assign quantifiers to visual scenes. This result demonstrates that the prompt may influence the performance of the models on this task. Although exploring which prompts work best was beyond the scope of this paper, we leave a systematic exploration of this aspect to future research. A.6 Additional Details The data used in our work do not contain any information that names or uniquely identifies individual people or offensive content. FROMAGe has 5M trainable parameters and a total number of around 7.2B parameters. BLIP-2 has 188M trainable parameters and 2.7B total parameters. LLaVA has 7B parameters. All the models are evaluated on a single GPU (NVIDIA RTX A5000). We experimented with a few configurations of hyperparameters for nucleus sampling generation ( described in Section 3). We did not find significant differences across different hyperparameters. We used the SciPy library (https://scipy.org/) to compute the correlation/divergence results. We used the NLTK library (https://www.nltk.org/) to extract nouns from the model output for ManyNames. 554 0.51 0.24 0.14 0.11 0 100% 0% Percentage of human responses chair sofa couch armchair chair sofa couch armchair window Humans 0.09 0.71 0.02 0.18 0 Model 1 0.15 0 0.29 0 0.56 Model 2 Inverse Jensen–Shannon divergence vs. humans 1 0.59 0.38 Figure 6: Toy example to show how the effect of over-generating a noun that was not often assigned by humans (model 1) and generating a noun that was not selected by humans (model 2) on the inverse Jensen–Shannon divergence metric. Human color saliency: 3% Human texture saliency: 55% FROMAGe BLIP-2 LLaVA ● A hand with a knife. ● A person holding a knife with a long handle. ● The image is a visual representation of the tool. ● The image shows a wooden board with a nail on it. ● … Color saliency: 0% Texture saliency: 20% ● A wooden stick with eight holes on it ● A wooden toothbrush holder with five holes in it ● A wooden spoon holder with five holes ● A wooden spatula is sitting on top of a white surface ● … Color saliency: 10% Texture saliency: 80% ● The image features a wooden object with four holes in it, resembling a wooden paddle ● The image features a wooden pizza cutter with a handle, which is designed to cut through ● The image features a wooden spoon with a handle ● … Color saliency: 0% Texture saliency: 100% Figure 7: Example of the model output in the NOUN dataset. 555 Figure 8: Density plot reporting the frequency distribution of responses for the 9 quantifiers (y-axis) against the proportion of targets in the scene (x-axis). none almost none the smaller part few some many most almost all all Quantifier 5 10 15 20 25 Frequency (%) FROMAGe BLIP-2 LLaVA Figure 9: How often each quantifier (x-axis) is selected by the model (y-axis, expressed as %), regardless of the proportion of targets (i.e., animals) in the image. 0.7 0.75 0.8 0.85 0.9 0.95 top_p value 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 Correlation / Inverse JS Divergence NOUN (color) NOUN (texture) ManyNames Quant Figure 10: The role of different top_p values for nucleus sampling decoding using the LLaVa model. 556 Correct: 0 1 2 7 FROMAGe: 3 15 ‘many’ ‘many’ BLIP-2: 1 5 7 8 LLaVA: 0 1 2 4 Correct: 2 6 4 14 FROMAGe: 3 ‘many’ 4 ‘many’ BLIP-2: 6 12 10 70 LLaVA: 3 4 4 12 How many animals are there in the image? Figure 11: How many animals are there in the image? All models fail to successfully count the number of animals in the image. Note that models generally output a number but sometimes FROMAGe outputs the quantifier ‘many’ which, interestingly, is the quantifier the model is strongly biased towards as illustrated in Figure 8 and discussed in Section 4. 557
2024_short
50
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 558–573 August 11-16, 2024 ©2024 Association for Computational Linguistics Are LLMs classical or nonmonotonic reasoners? Lessons from generics Alina Leidinger ILLC University of Amsterdam [email protected] Robert van Rooij ILLC University of Amsterdam [email protected] Ekaterina Shutova ILLC University of Amsterdam [email protected] Abstract Recent scholarship on reasoning in LLMs has supplied evidence of impressive performance and flexible adaptation to machine generated or human feedback. Nonmonotonic reasoning, crucial to human cognition for navigating the real world, remains a challenging, yet understudied task. In this work, we study nonmonotonic reasoning capabilities of seven state-ofthe-art LLMs in one abstract and one commonsense reasoning task featuring generics, such as ‘Birds fly’, and exceptions, ‘Penguins don’t fly’ (see Fig. 1). While LLMs exhibit reasoning patterns in accordance with human nonmonotonic reasoning abilities, they fail to maintain stable beliefs on truth conditions of generics at the addition of supporting examples (‘Owls fly’) or unrelated information (‘Lions have manes’). Our findings highlight pitfalls in attributing human reasoning behaviours to LLMs, as well as assessing general capabilities, while consistent reasoning remains elusive.1 1 Introduction Generics are unquantified statements such as ‘Birds fly’ or ‘Tigers are striped’ (Carlson and Pelletier, 1995; Mari et al., 2013). They are generalisations about kinds even if exceptions are known (‘Penguins don’t fly’; Fig. 1). Humans typically accept generics even if the property in question is rare among the kind (‘Ticks carry the lime disease’; Brandone et al., 2012; Cimpian et al., 2010). Generics play a crucial role in human beliefs on whether an example of a kind has a given property (Pelletier and Asher, 1997). Human children master generics before they are able to reason about quantified statements (Hollander et al., 2002; Leslie and Gelman, 2012). In defeasible or nonmonotonic reasoning (Sloman and Lagnado, 2005; Ginsberg, 1987; Koons, 1Resources available at: https://github.com/ aleidinger/nonmonotonic_reasoning_generics Figure 1: Reasoning about generics and exceptions 2005), a hypothesis follows defeasibly from a premise, if the hypothesis is true in most normal cases in which the premise holds. Generics make for a rich test bed for testing nonmonotonic reasoning capabilities (Pelletier and Asher, 1997; Asher and Morreau, 1995). For example, given the generic ‘Birds fly’ the inference ‘Tweety, the bird, can fly’ is defeasibly valid (McCarthy, 1986; Reiter, 1988, i.a.), i.e., it is reasonable to assume ‘Tweety can fly’ even if exceptions are possible (‘Tweety is a penguin’) (Lascarides and Asher, 1991). A classical reasoner however would reject the generic ‘Birds fly’ upon learning that ‘Penguins don’t fly’. Nonmonotonic reasoning is an integral part of human cognition (Russell, 2001), that helps us to navigate the real-world, e.g., by planning (Stenning and Van Lambalgen, 2012, Ch.5), a task that LLMs still struggle with (Valmeekam et al., 2023; Stechly et al., 2024). Nonmonotonic reasoning poses a greater challenge for LLMs than other reasoning tasks (Han et al., 2024) and hasn’t been featured prominently among natural language inference (NLI) (Gubelmann et al., 2023) or reasoning benchmarks (see §2). The question of whether LLMs reason nonmonotonically or classically about generics and exceptions is intricately linked to desiderata of LLMs 558 as reasoners. LLMs are heralded for their ability to adapt to human or machine generated feedback (Shinn et al., 2023; Paul et al., 2023; Madaan et al., 2024; Pan et al., 2024, i.a.). At the same time, it is desired that they reason reliably when presented with invalid counterarguments, irrelevant information or user viewpoints. Sycophancy (Perez et al., 2023) of LLMs, i.e., susceptibility to be swayed by user belief, is a case in point that has been investigated in recent studies (Ranaldi and Pucci, 2023; Laban et al., 2023, i.a.). As studies on reasoning patterns with generics remain scarce (Ralethe and Buys, 2022; Lin et al., 2020) and do not examine nonmonotonic reasoning, we address this gap by investigating the following research questions: 1) Do LLMs reason nonmonotonically or classically about generics? 2) Are LLMs sensitive to counter-evidence in the form of exceptions? 3) Do LLMs reason consistently and reliably by maintaining their response given supporting or unrelated examples? We test seven state-of-the-art LLMs for their reasoning capabilities about generics in the presence of exceptions (‘Penguins don’t fly’), as well as supporting (‘Owls fly’) and irrelevant exemplars (‘Lions have manes’). Across two datasets featuring both abstract and commonsense generics, we find that LLM behaviour mirrors human nonmonotonic reasoning patterns in the presence of exceptions (§5.1). However, most LLMs are not able to consistently maintain their agreement with generics given unrelated, or even supportive exemplars (§5.2). Our study highlights challenges in comparing LLM behaviour to human reasoning patterns as well as assessing reasoning capabilities more broadly, while consistent reasoning cannot be guaranteed. In Section 7, we present recommendations for a more holistic evaluation practice encompassing logical consistency measures. 2 Related Work 2.1 Generics in NLP To date most works on generics focus on injecting commonsense knowledge or generics into LLMs (Gajbhiye et al., 2022; Liu et al., 2023a, i.a.), or training LLMs for knowledge/generic generation (Bhagavatula et al., 2023). (See AlKhamissi et al. (2022) for a review.) Bhakthavatsalam et al. (2020) construct GenericsKG, a large knowledge base of generics as an asset for downstream tasks such as Question Answering or explanation generation. Bhagavatula et al. (2023) design a pipeline for synthetic generation of generics using samples from GenericsKB as seeds. Allaway et al. (2023) in turn complement the data with exceptions and instantiations for each generic, but do not investigate nonmonotonic reasoning capabilities. Most closely related to our work, Lin et al. (2020) find that LMs struggle to predict numerical knowledge in generics such as ‘Birds have two legs’. Ralethe and Buys (2022) find that pre-trained masked LMs falsely overgeneralise (Leslie et al., 2011) from generics (‘Ducks lay eggs’) to universally quantified statements (‘All ducks lay eggs’). 2.2 Nonmonotonic reasoning in NLP Han et al. (2024) test nonmonotonic reasoning among other inductive reasoning tasks and find that only GPT-4 performs adequately. LLMs struggle to reason with contradictory information (Kazemi et al., 2024). Rudinger et al. (2020); Brahman et al. (2021); Bhagavatula et al. (2019) develop NLI tasks to test defeasible or abductive reasoning in pragmatics, while Pyatkin et al. (2023); Ziems et al. (2023); Rao et al. (2023) focus on defeasible reasoning and social norms. Parmar et al. (2024) introduce non-monotonic reasoning tasks inspired by Lifschitz (1989) as part of their LogicBench. 2.3 Consistency in reasoning Most recent studies on reliability and consistency in reasoning examine sycophancy (Perez et al., 2023; Laban et al., 2023; Ranaldi and Pucci, 2023), consistency within multi-step reasoning or across sessions and users (Chen et al., 2023a; Wang et al., 2022). (See Liu et al. (2023b) for a review.) Orthogonal to this, our work connects to studies of reasoning in the presence of unrelated or conflicting information. Shi et al. (2023) find that LLMs are easily confounded by irrelevant information in arithmetic reasoning. Across a variety of reasoning tasks, Wang et al. (2023a) find that OpenAI models struggle to maintain stable responses given irrelevant objections. Xie et al. (2023) find mixed evidence of LLMs being sensitive to information that contradicts prior knowledge, yet showing a form of ‘confirmation bias’ when presented with diverse viewpoints. 3 Tasks and datasets We test nonmonotonic reasoning with generics using two datasets, featuring commonsense and 559 abstract generics. Both datasets contain generics (‘Birds fly’) accompanied by statements where the generic holds (‘Owls fly’) or doesn’t (‘Penguins don’t fly’). We refer to such examples as instantiations or exceptions respectively, and to both collectively as as exemplars. As commonsense generics, we use the synthetic dataset of generics and exemplars released by Allaway et al. (2023) (henceforth referred to as GEN-comm). The dataset consists of ∼650 generics and ∼19.000 exemplars (E.g., ‘Hoes are used to plow fields or clear snow’; ‘Hoes can be used to cut grass’).2 Secondly, we construct an abstract reasoning dataset featuring generics (GEN-abs). Inspired by Han et al. (2024), we use categories (‘birds’) and examples (‘eagles’) from De Deyne et al. (2008) to construct generics of the form ‘Birds have property P’ and exemplars of the form ‘Eagles do (not) have property P’. The dataset contains 260 tuples of a generic paired with an exemplar.3 For both datasets, our goal is to prompt LLMs for their agreement with a generic in the presence of exemplars which confirm or contradict the generic. We use the following prompt template, including model-specific special tokens4 to signal a chat history between an assistant and a user.5 Example: [INST] Is the following statement true: “Birds fly.” \nPlease answer yes or no. [/INST] yes [INST] Penguins don’t fly.\nIs the following statement true: “Birds fly.”\nPlease answer yes or no. [/INST] As a control study, we also replace the exception in the prompt (‘Penguins don’t fly’) with an instantiation (‘Owls fly’) or a random exemplar (‘Hoes can be used to cut grass’). Since generics in GEN-abs are abstract in nature, and to enable a consistent set-up across both datasets, we retain generics in GEN-comm that LLMs accepts when prompted with the first part of the above template, e.g., [INST] Is the following statement true: “Birds fly.” \nPlease answer yes or no. [/INST].6 2See App. B for additional information on preprocessing. 3The dataset is available at: https://github.com/ aleidinger/nonmonotonic_reasoning_generics/blob/ main/data/abstract_generics.csv 4See Appendix A or https://huggingface.co/docs/ transformers/main/en/chat_templating for details. 5We also experiment with an alternative prompting template and Chain-of-Thought prompting. Since results are similar, they are included in Appendix F. 6See App. B for details and results on discarded generics. 4 Method 4.1 Models We conduct our experiments on medium-sized open-weight models selected from the top of AlpacaEval7 and LMSys8 leaderboards, namely Llama-2-13b (Touvron et al., 2023), Mistral-7bInstruct-v0.2 (Jiang et al., 2023), Mixtral-8x7BInstruct-v0.1 (Jiang et al., 2024), Zephyr-7b-beta (Tunstall et al., 2023), WizardLM-13B-V1.2 (Xu et al., 2023), Starling-LM-7B-alpha (Zhu et al., 2023a), and OpenHermes-2.5-Mistral-7B (NousResearch, 2023).9 4.2 Prompting set-up Since LLM behaviour can vary considerably with the phrasing of an instruction (Webson and Pavlick, 2022; Leidinger et al., 2023), we formulate three different instructions to test if an LLM agrees with a given generic: ‘Is the following statement true’, ‘Do you believe the following statement to be true’, ‘Do you believe that the following statement is accurate’. Since the optimal model reply is short and succinct, we follow the convention of HELM (Liang et al., 2023, p.161) in setting temperature to 0 for reproducibility across runs. We format every prompt using the chat template appropriate for each model, with no system prompt.4 To map LLM responses to labels disagree vs. agree, we use pattern matching and record whether a response starts with yes or no (Röttger et al., 2023). We aggregate responses for the three instructions via majority voting. 4.3 Statistical tests To assess whether behaviour of LLMs is significantly different in the absence vs. presence of exemplars we resort to non-parametric statistical testing. Since our samples are paired, we use the Wilcoxon signed-rank test (Wilcoxon, 1992). 5 Results We present our main results in Figure 2. Additional, accordant results are described in Appendix F. 5.1 Do LLMs reason nonmonotonically? Since humans maintain their beliefs about truth conditions of generics (‘Birds fly’) in the presence of exceptions (‘Penguins do not fly’), we examine 7https://tatsu-lab.github.io/alpaca_eval/ 8https://chat.lmsys.org/?leaderboard 9See App. C for checkpoints and additional information. 560 Figure 2: LLM agreement with generics in the presence of exemplars on GEN-comm (top) and GEN-abs (bottom). Missing columns indicate agreement rates of 0%. whether challenging LLMs with an exception decreases their agreement to generics significantly. We find this to be the case for all models on both datasets (p = 0.01; see App. E for statistical test results). Notably, agreement rates drop to 0 for Llama-2, Mixtral, Starling and WizardLM on GEN-abs. 5.2 Do LLMs reason consistently? In the presence of supporting evidence (instantiation) to a generic (‘Owls fly’), we expect LLM agreement to remain at 100%, but this is not the case. While agreement rates remain high in numbers, they drop significantly for all models. On GEN-abs, only Mistral, OpenHermes, and WizardLM maintain agreement rates of > 90%, while agreement drops to < 10% for Mixtral. Similarly, most LLMs are not able to disregard irrelevant random exemplars (exception/instantiation (shuffled)). Agreement rates decline steeply below 50% for Llama-2, Mistral, Mixtral and Zephyr on GEN-comm and to below 20% for Llama-2, Mixtral, Starling, WizardLM and Zephyr on GEN-abs. OpenHermes stands out as the only model that maintains agreement rates above 85% on both datasets. Notably, OpenHermes is the only model which has been trained on additional code data which has been shown to also help reasoning in natural language (Liang et al., 2023; Yang et al., 2024; Ma et al., 2023). Nevertheless, observed differences are statistically significant for all models on both datasets (App. E). 6 Analysis 6.1 How do LLMs reason about different types of generics? GEN-comm contains both bare plural (BP) generics as well as indefinite singular (IS) generics (Leslie et al., 2009). (For example, ‘Sea snails have a hard shell, which protects them from predators’ (BP) and ‘A deciduous tree can be identified by its leaves’ (IS)). We did not find notable differences between LLM agreement to BP or IS generics in the presence of exemplars (see Figure 3). Aforementioned consistency failures persist for both types of generics. 6.2 Qualitative analysis Generics in GEN-comm which are accepted in isolation, but are rejected in the presence of exceptions or instantiations include ‘Stimulants can be used to treat ADHD’ (Llama-2, Starling, Mixtral) or ‘A bobsleigh is driven by a single driver’ (Starling, Mistral, Mixtral, OpenHermes, WizardLM). Generics which are accepted no matter the exemplar presented in context include ‘Inflammatory diseases may be caused by an imbalance of the immune system’ (Llama-2, Starling, Mistral, OpenHermes), ‘A processor should be able to run a program’ (Starling, Mixtral, OpenHermes, WizardLM), ‘Experimental evidence is used to support or refute theories’, ‘An adventure has a beginning, middle, end’ (Starling, OpenHermes, WizardLM), and ‘Coin561 Figure 3: LLM agreement with bare plural (BP) and indefinite singular (IS) generics in the presence of exemplars on GEN-comm. cidence is part of the human condition’ (Starling, Mistral, OpenHermes). For GEN-abs, OpenHermes is the only LLM which maintains its agreement to a generic (‘Birds have property P’, ‘Mammals have property P’) in the presence of any instantiation or unrelated exemplar, but flips its decision and outputs disagreement in the presence of an exception. No LLM accepts any of the generics regardless of the exemplar it is paired with. 7 Discussion With the advent of LLMs and reports of impressive performance, including on reasoning tasks (Wei et al., 2022; Kojima et al., 2022), recent investigations into failure modes in reasoning have focused, e.g., on prompt attacks (Zhu et al., 2023b; Wang et al., 2023b, i.a.), sycophancy (Perez et al., 2023; Laban et al., 2023; Ranaldi and Pucci, 2023, i.a.) or adaptability to critique or feedback (Madaan et al., 2024; Chen et al., 2023b; Huang et al., 2023; Pan et al., 2024). Such research trends might be seen as emblematic of a view of LLMs as artificial natural artifacts (Kambhampati, 2022). Results in this study demonstrate the difficulties of making claims about reasoning capabilities of LLMs or comparing them to human reasoners (Han et al., 2024; Ralethe and Buys, 2022; Lin et al., 2020), while consistent reasoning remains elusive even for state-of-the-art LLMs. Research that predates the paradigm shift to few-shot prompting, has advocated for arguably simpler, systematic diagnostic tests (Ribeiro et al., 2020; Ettinger, 2020; Kassner and Schütze, 2020). We argue that such behavioural tests merit a revival, so that performance metrics for reasoning are complemented with measures of logical consistency and robustness. 8 Conclusion The present study focuses on nonmonotonic reasoning capabilities of LLMs in the context of generics. We evaluate seven state-of-the-art LLMs on two datasets featuring both abstract and commonsense generic statements. While LLM behaviour on generics paired with exceptions is in line with nonmonotonic reasoning patterns, LLMs fail to reason consistently and robustly when adding supporting or unrelated exemplars. 9 Limitations We acknowledge that our experiments exclusively feature generics and exemplars in English. Future research might profit from including additional 562 languages to examine nonmonotonic reasoning capabilities in other languages, drawing on crosslinguistic research on generics (Mari et al., 2013). Such work might also highlight differences in consistency failures between different languages. In this work, we do not experiment with generics pertaining to demographic groups or nationalities because of concerns around social bias. Future work might examine LLM behaviour on generic statements for larger LLMs or closed-source models. We restrict ourselves to medium-sized open-weight LLMs, due to their widespread use and availability, as well as restrictions on our computational budget. Acknowledgements We thank our anonymous reviewers for their insightful comments. The work for this publication is financially supported by the project, ‘From Learning to Meaning: A new approach to Generic Sentences and Implicit Biases’ (project number 406.18.TW.007) of the research programme SGW Open Competition, which is (partly) financed by the Dutch Research Council (NWO). References Badr AlKhamissi, Millicent Li, Asli Celikyilmaz, Mona Diab, and Marjan Ghazvininejad. 2022. A review on language models as knowledge bases. arXiv preprint arXiv:2204.06031. Emily Allaway, Jena D. Hwang, Chandra Bhagavatula, Kathleen McKeown, Doug Downey, and Yejin Choi. 2023. Penguins don’t fly: Reasoning about generics through instantiations and exceptions. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 2618–2635, Dubrovnik, Croatia. Association for Computational Linguistics. Nicholas Asher and Michael Morreau. 1995. What some generic sentences mean. The generic book, pages 300–338. Chandra Bhagavatula, Jena D. Hwang, Doug Downey, Ronan Le Bras, Ximing Lu, Lianhui Qin, Keisuke Sakaguchi, Swabha Swayamdipta, Peter West, and Yejin Choi. 2023. I2D2: Inductive knowledge distillation with NeuroLogic and self-imitation. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 9614–9630, Toronto, Canada. Association for Computational Linguistics. Chandra Bhagavatula, Ronan Le Bras, Chaitanya Malaviya, Keisuke Sakaguchi, Ari Holtzman, Hannah Rashkin, Doug Downey, Wen-tau Yih, and Yejin Choi. 2019. Abductive commonsense reasoning. In International Conference on Learning Representations. Sumithra Bhakthavatsalam, Chloe Anastasiades, and Peter Clark. 2020. Genericskb: A knowledge base of generic statements. arXiv preprint arXiv:2005.00660. Faeze Brahman, Vered Shwartz, Rachel Rudinger, and Yejin Choi. 2021. Learning to rationalize for nonmonotonic reasoning with distant supervision. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 12592–12601. Amanda C Brandone, Andrei Cimpian, Sarah-Jane Leslie, and Susan A Gelman. 2012. Do lions have manes? for children, generics are about kinds rather than quantities. Child development, 83(2):423–433. Gregory N Carlson and Francis Jeffry Pelletier. 1995. The generic book. University of Chicago Press. Angelica Chen, Jason Phang, Alicia Parrish, Vishakh Padmakumar, Chen Zhao, Samuel R Bowman, and Kyunghyun Cho. 2023a. Two failures of selfconsistency in the multi-step reasoning of llms. Transactions on Machine Learning Research. Justin Chih-Yao Chen, Swarnadeep Saha, and Mohit Bansal. 2023b. Reconcile: Round-table conference improves reasoning via consensus among diverse llms. arXiv preprint arXiv:2309.13007. Andrei Cimpian, Amanda C Brandone, and Susan A Gelman. 2010. Generic statements require little evidence for acceptance but have powerful implications. Cognitive science, 34(8):1452–1482. Simon De Deyne, Steven Verheyen, Eef Ameel, Wolf Vanpaemel, Matthew J Dry, Wouter Voorspoels, and Gert Storms. 2008. Exemplar by feature applicability matrices and other dutch normative data for semantic concepts. Behavior research methods, 40:1030– 1048. Allyson Ettinger. 2020. What BERT is not: Lessons from a new suite of psycholinguistic diagnostics for language models. Transactions of the Association for Computational Linguistics, 8:34–48. Amit Gajbhiye, Luis Espinosa Anke, and Steven Schockaert. 2022. Modelling commonsense properties using pre-trained bi-encoders. In Proceedings of the 29th International Conference on Computational Linguistics, pages 3971–3983. Matthew L Ginsberg. 1987. Readings in nonmonotonic reasoning. Reto Gubelmann, Ioannis Katis, Christina Niklaus, and Siegfried Handschuh. 2023. Capturing the varieties of natural language inference: A systematic survey of existing datasets and two novel benchmarks. Journal of Logic, Language and Information, pages 1–28. 563 Simon Jerome Han, Keith J Ransom, Andrew Perfors, and Charles Kemp. 2024. Inductive reasoning in humans and large language models. Cognitive Systems Research, 83:101155. Michelle A Hollander, Susan A Gelman, and Jon Star. 2002. Children’s interpretation of generic noun phrases. Developmental psychology, 38(6):883. Jie Huang, Xinyun Chen, Swaroop Mishra, Huaixiu Steven Zheng, Adams Wei Yu, Xinying Song, and Denny Zhou. 2023. Large language models cannot self-correct reasoning yet. In The Twelfth International Conference on Learning Representations. Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al. 2023. Mistral 7b. arXiv preprint arXiv:2310.06825. Albert Q Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, et al. 2024. Mixtral of experts. arXiv preprint arXiv:2401.04088. Subbarao Kambhampati. 2022. Ai as (an ersatz) natural science? Nora Kassner and Hinrich Schütze. 2020. Negated and misprimed probes for pretrained language models: Birds can talk, but cannot fly. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7811–7818, Online. Association for Computational Linguistics. Mehran Kazemi, Quan Yuan, Deepti Bhatia, Najoung Kim, Xin Xu, Vaiva Imbrasaite, and Deepak Ramachandran. 2024. Boardgameqa: A dataset for natural language reasoning with contradictory information. Advances in Neural Information Processing Systems, 36. Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. Advances in neural information processing systems, 35:22199– 22213. Robert Koons. 2005. Defeasible reasoning. Philippe Laban, Lidiya Murakhovs’ ka, Caiming Xiong, and Chien-Sheng Wu. 2023. Are you sure? challenging llms leads to performance drops in the flipflop experiment. arXiv preprint arXiv:2311.08596. Alex Lascarides and Nicholas Asher. 1991. Discourse relations and defeasible knowledge. In 29th Annual Meeting of the Association for Computational Linguistics, pages 55–62. Alina Leidinger, Robert van Rooij, and Ekaterina Shutova. 2023. The language of prompting: What linguistic properties make a prompt successful? In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 9210–9232, Singapore. Association for Computational Linguistics. Sarah-Jane Leslie and Susan A Gelman. 2012. Quantified statements are recalled as generics: Evidence from preschool children and adults. Cognitive psychology, 64(3):186–214. Sarah-Jane Leslie, Sangeet Khemlani, and Sam Glucksberg. 2011. Do all ducks lay eggs? The generic overgeneralization effect. Journal of Memory and Language, 65(1):15–31. Sarah-Jane Leslie, Sangeet Khemlani, Sandeep Prasada, and Sam Glucksberg. 2009. Conceptual and linguistic distinctions between singular and plural generics. Proceedings of the 31st annual cognitive science society, pages 479–484. Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, et al. 2023. Holistic evaluation of language models. Transactions on Machine Learning Research. Vladimir Lifschitz. 1989. Benchmark problems for formal nonmonotonic reasoning: Version 2.00. In Non-Monotonic Reasoning: 2nd International Workshop Grassau, FRG, June 13–15, 1988 Proceedings 2, pages 202–219. Springer. Bill Yuchen Lin, Seyeon Lee, Rahul Khanna, and Xiang Ren. 2020. Birds have four legs?! NumerSense: Probing Numerical Commonsense Knowledge of PreTrained Language Models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6862–6868, Online. Association for Computational Linguistics. Jiacheng Liu, Wenya Wang, Dianzhuo Wang, Noah Smith, Yejin Choi, and Hannaneh Hajishirzi. 2023a. Vera: A general-purpose plausibility estimation model for commonsense statements. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 1264–1287, Singapore. Association for Computational Linguistics. Yang Liu, Yuanshun Yao, Jean-Francois Ton, Xiaoying Zhang, Ruocheng Guo, Hao Cheng, Yegor Klochkov, Muhammad Faaiz Taufiq, and Hang Li. 2023b. Trustworthy llms: a survey and guideline for evaluating large language models’ alignment. In Socially Responsible Language Modelling Research. Yingwei Ma, Yue Liu, Yue Yu, Yuanliang Zhang, Yu Jiang, Changjian Wang, and Shanshan Li. 2023. At which training stage does code data help llms reasoning? arXiv preprint arXiv:2309.16298. Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, et al. 2024. Self-refine: Iterative refinement with self-feedback. Advances in Neural Information Processing Systems, 36. 564 Alda Mari, Claire Beyssade, and Fabio Del Prete. 2013. Genericity. 43. Oxford University Press. John McCarthy. 1986. Applications of circumscription to formalizing common-sense knowledge. Artificial intelligence, 28(1):89–116. MistralAI. 2023. Mixtral of experts - a high quality sparse mixture-of-experts. NousResearch. 2023. Openhermes 2.5 - mistral 7b. Liangming Pan, Michael Saxon, Wenda Xu, Deepak Nathani, Xinyi Wang, and William Yang Wang. 2024. Automatically correcting large language models: Surveying the landscape of diverse automated correction strategies. Transactions of the Association for Computational Linguistics, 12:484–506. Mihir Parmar, Nisarg Patel, Neeraj Varshney, Mutsumi Nakamura, Man Luo, Santosh Mashetty, Arindam Mitra, and Chitta Baral. 2024. Towards systematic evaluation of logical reasoning ability of large language models. arXiv preprint arXiv:2404.15522. Debjit Paul, Mete Ismayilzada, Maxime Peyrard, Beatriz Borges, Antoine Bosselut, Robert West, and Boi Faltings. 2023. Refiner: Reasoning feedback on intermediate representations. arXiv preprint arXiv:2304.01904. Francis Jeffry Pelletier and Nicholas Asher. 1997. Generics and defaults. In Handbook of logic and language, pages 1125–1177. Elsevier. Ethan Perez, Sam Ringer, Kamile Lukosiute, Karina Nguyen, Edwin Chen, Scott Heiner, Craig Pettit, Catherine Olsson, Sandipan Kundu, Saurav Kadavath, Andy Jones, Anna Chen, Benjamin Mann, Brian Israel, Bryan Seethor, Cameron McKinnon, Christopher Olah, Da Yan, Daniela Amodei, Dario Amodei, Dawn Drain, Dustin Li, Eli Tran-Johnson, Guro Khundadze, Jackson Kernion, James Landis, Jamie Kerr, Jared Mueller, Jeeyoon Hyun, Joshua Landau, Kamal Ndousse, Landon Goldberg, Liane Lovitt, Martin Lucas, Michael Sellitto, Miranda Zhang, Neerav Kingsland, Nelson Elhage, Nicholas Joseph, Noemi Mercado, Nova DasSarma, Oliver Rausch, Robin Larson, Sam McCandlish, Scott Johnston, Shauna Kravec, Sheer El Showk, Tamera Lanham, Timothy Telleen-Lawton, Tom Brown, Tom Henighan, Tristan Hume, Yuntao Bai, Zac HatfieldDodds, Jack Clark, Samuel R. Bowman, Amanda Askell, Roger Grosse, Danny Hernandez, Deep Ganguli, Evan Hubinger, Nicholas Schiefer, and Jared Kaplan. 2023. Discovering language model behaviors with model-written evaluations. In Findings of the Association for Computational Linguistics: ACL 2023, pages 13387–13434, Toronto, Canada. Association for Computational Linguistics. Valentina Pyatkin, Jena D. Hwang, Vivek Srikumar, Ximing Lu, Liwei Jiang, Yejin Choi, and Chandra Bhagavatula. 2023. ClarifyDelphi: Reinforced clarification questions with defeasibility rewards for social and moral situations. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 11253– 11271, Toronto, Canada. Association for Computational Linguistics. Sello Ralethe and Jan Buys. 2022. Generic overgeneralization in pre-trained language models. In Proceedings of the 29th International Conference on Computational Linguistics, pages 3187–3196. Leonardo Ranaldi and Giulia Pucci. 2023. When large language models contradict humans? large language models’ sycophantic behaviour. Kavel Rao, Liwei Jiang, Valentina Pyatkin, Yuling Gu, Niket Tandon, Nouha Dziri, Faeze Brahman, and Yejin Choi. 2023. What makes it ok to set a fire? iterative self-distillation of contexts and rationales for disambiguating defeasible social and moral situations. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 12140–12159, Singapore. Association for Computational Linguistics. Raymond Reiter. 1988. Nonmonotonic reasoning. In Exploring artificial intelligence, pages 439–481. Elsevier. Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin, and Sameer Singh. 2020. Beyond accuracy: Behavioral testing of NLP models with CheckList. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4902– 4912, Online. Association for Computational Linguistics. Paul Röttger, Hannah Rose Kirk, Bertie Vidgen, Giuseppe Attanasio, Federico Bianchi, and Dirk Hovy. 2023. Xstest: A test suite for identifying exaggerated safety behaviours in large language models. arXiv preprint arXiv:2308.01263. Rachel Rudinger, Vered Shwartz, Jena D. Hwang, Chandra Bhagavatula, Maxwell Forbes, Ronan Le Bras, Noah A. Smith, and Yejin Choi. 2020. Thinking like a skeptic: Defeasible inference in natural language. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 4661–4675, Online. Association for Computational Linguistics. James Russell. 2001. Cognitive theories of autism. In Cognitive deficits in brain disorders, pages 309–338. CRC Press. Freda Shi, Xinyun Chen, Kanishka Misra, Nathan Scales, David Dohan, Ed H Chi, Nathanael Schärli, and Denny Zhou. 2023. Large language models can be easily distracted by irrelevant context. In International Conference on Machine Learning, pages 31210–31227. PMLR. Noah Shinn, Federico Cassano, Ashwin Gopinath, Karthik R Narasimhan, and Shunyu Yao. 2023. Reflexion: Language agents with verbal reinforcement learning. In Thirty-seventh Conference on Neural Information Processing Systems. 565 Steven A Sloman and David Lagnado. 2005. The problem of induction. The Cambridge handbook of thinking and reasoning, pages 95–116. Kaya Stechly, Karthik Valmeekam, and Subbarao Kambhampati. 2024. On the self-verification limitations of large language models on reasoning and planning tasks. arXiv preprint arXiv:2402.08115. Keith Stenning and Michiel Van Lambalgen. 2012. Human reasoning and cognitive science. MIT Press. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288. Lewis Tunstall, Edward Beeching, Nathan Lambert, Nazneen Rajani, Kashif Rasul, Younes Belkada, Shengyi Huang, Leandro von Werra, Clémentine Fourrier, Nathan Habib, Nathan Sarrazin, Omar Sanseviero, Alexander M. Rush, and Thomas Wolf. 2023. Zephyr: Direct distillation of lm alignment. Karthik Valmeekam, Matthew Marquez, Alberto Olmo, Sarath Sreedharan, and Subbarao Kambhampati. 2023. Planbench: an extensible benchmark for evaluating large language models on planning and reasoning about change. In Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track. Boshi Wang, Xiang Yue, and Huan Sun. 2023a. Can ChatGPT defend its belief in truth? evaluating LLM reasoning via debate. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 11865–11881, Singapore. Association for Computational Linguistics. Jindong Wang, HU Xixu, Wenxin Hou, Hao Chen, Runkai Zheng, Yidong Wang, Linyi Yang, Wei Ye, Haojun Huang, Xiubo Geng, et al. 2023b. On the robustness of chatgpt: An adversarial and out-ofdistribution perspective. In ICLR 2023 Workshop on Trustworthy and Reliable Large-Scale Machine Learning Models. Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V Le, Ed H Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2022. Self-consistency improves chain of thought reasoning in language models. In The Eleventh International Conference on Learning Representations. Albert Webson and Ellie Pavlick. 2022. Do promptbased models really understand the meaning of their prompts? In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2300–2344, Seattle, United States. Association for Computational Linguistics. Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le. 2022. Finetuned language models are zero-shot learners. Frank Wilcoxon. 1992. Individual comparisons by ranking methods. In Breakthroughs in Statistics: Methodology and Distribution, pages 196–202. Springer. Jian Xie, Kai Zhang, Jiangjie Chen, Renze Lou, and Yu Su. 2023. Adaptive chameleon or stubborn sloth: Revealing the behavior of large language models in knowledge conflicts. In The Twelfth International Conference on Learning Representations. Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, and Daxin Jiang. 2023. Wizardlm: Empowering large language models to follow complex instructions. Ke Yang, Jiateng Liu, John Wu, Chaoqi Yang, Yi R Fung, Sha Li, Zixuan Huang, Xu Cao, Xingyao Wang, Yiquan Wang, et al. 2024. If llm is the wizard, then code is the wand: A survey on how code empowers large language models to serve as intelligent agents. arXiv preprint arXiv:2401.00812. Banghua Zhu, Evan Frick, Tianhao Wu, Hanlin Zhu, and Jiantao Jiao. 2023a. Starling-7b: Improving llm helpfulness & harmlessness with rlaif. Kaijie Zhu, Jindong Wang, Jiaheng Zhou, Zichen Wang, Hao Chen, Yidong Wang, Linyi Yang, Wei Ye, Neil Zhenqiang Gong, Yue Zhang, et al. 2023b. Promptbench: Towards evaluating the robustness of large language models on adversarial prompts. arXiv preprint arXiv:2306.04528. Caleb Ziems, Jane Dwivedi-Yu, Yi-Chia Wang, Alon Halevy, and Diyi Yang. 2023. NormBank: A knowledge bank of situational social norms. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7756–7776, Toronto, Canada. Association for Computational Linguistics. A Example input Example input formatted using our prompt template and the chat template10, e.g., for Mistral7B-Instruct: 10https://huggingface.co/blog/chat-templates 566 Model # samples Mistral-7B-Instruct 2093 Llama-2-13b 1245 Zephyr-7b-beta 1536 WizardLM-13B-V1.2 2225 OpenHermes-2.5-Mistral-7B 2153 Starling-LM-7B-alpha 2244 Mixtral-8x7B-Instruct-v0.1 1959 Table 1: # retained samples in GEN-comm Prompt: [INST] Do you believe that the following statement is accurate: ‘Birds fly.’ Please answer yes or no. [/INST] yes [INST] Penguins do not fly. Do you believe that the following statement is accurate: ‘Birds fly.’ Please answer yes or no. [/INST] B Additional information on data preprocessing For GEN-comm, we conduct additional processing to obtain high quality generics and ensure a parallel experimental setup between GEN-comm and GEN-abs. We retain only generics that were annotated as ‘valid’ by human annotators. We filter generics for which both an exception and an instantiation exists. Since generics are unquantified statements, we remove any quantifiers such as ‘generally’, ‘usually’ and ‘typically’ at the beginning of each generic. To enable consistent evaluation on GEN-abs and GEN-comm, we evaluate each LLM on generics contained in GEN-comm which it accepts a priori. In an initial experiment, we prompt LLMs using the first part of our template (above; App. A). An example input for GEN-comm would be, e.g., ‘[INST] Do you believe that the following statement is accurate: ‘Birds have property P.’ Please answer yes or no[/INST]’. Generics for which an LLM does not generate yes as a response are discarded. We retain > 1200 samples for each model (See Table 1 for details). Results on the resultant dataset are presented in the main body of the paper (Section 5). For the reader’s interest, we include here also LLM reFigure 4: Results on generics contained in GEN-comm that are rejected a priori. Missing bars for ‘no exemplar’ indicate agreement rates of zero. sponses to generics contained in GEN-comm which are rejected by LLMs, i.e., a given LLM generates the response no to the prompt above (See Figure 4). As expected agreement rates soar for almost all models when adding an instantiation which confirms the previously rejected generic. Nevertheless, agreement rates also increase, albeit less, when adding exceptions or unrelated random examplars, particularly for Llama-2 and WizardLM. OpenHermes and Starling show the least inconsistencies. C Additional information on LLMs In this section we provide additional details on the models used in this study which are listed in Section 4.1. The specific checkpoints we use can be seen in Table 2 and are all available through the HuggingFace Hub. All models we use are trained for chat interaction. Mixtral-8x7B-Instruct-v0.1 (MistralAI, 2023) is a sparse mixture of expert model based on 8 Mistral 7B models that has been further trained using supervised finetuning and Direct Preference Optimisation. It ranks highest among its weight class on AlpacaEval11 and chat.lmsys12 leaderboards (as of Feb 6 2024). At its release it surpasses GPT-3.5 and LLaMA-2-70b. 11https://tatsu-lab.github.io/alpaca_eval/ 12https://chat.lmsys.org/?leaderboard 567 LLM Checkpoints meta-llama/Llama-2-13b-chat-hf mistralai/Mistral-7B-Instruct-v0.2 mistralai/Mixtral-8x7B-Instruct-v0.1 HuggingFaceH4/zephyr-7b-beta berkeley-nest/Starling-LM-7B-alpha WizardLM/WizardLM-13B-V1.2 teknium/OpenHermes-2.5-Mistral-7B Table 2: LLM checkpoints used in this study. StarlingLM-13B-V1.2 (Zhu et al., 2023a) has been trained via Reinforcement Learning from AI Feedback (RLAIF) on the Nectar dataset. In its weight class, it is the second best performing model on chat.lmsys and 4th on AlpacaEval (as of Feb 6 2024). Amidst mounting evidence that training on code enhances reasoning abilities also for natural language (Liang et al., 2023; Yang et al., 2024; Ma et al., 2023), we also use OpenHermes-2.5-Mistral7B (NousResearch, 2023) which ranks third in its weight class on chat.lmsys. It is Mistral-based model that has been finetuned on additional code datasets. Notably, the developers detail that this results in improvements on non-code tasks.13 WizardLM-13B-V1.2 (Xu et al., 2023) is a finetuned version of Llama-2 13b and is ranked 8th in its weight-class on both chat.lmsys and AlpacaEval. Zephyr-7b-beta (Tunstall et al., 2023) is a finetuned version of Mistral-7B-v0.1. It is ranked 9th on chat.lmsys and 11th on AlpacaEval. D Average runtime Generating LLM responses for one LLM and all generics across all settings took less than 0.5 GPU hours. All experiments were conducted on one NVIDIA A100 GPU. E Statistical test results Responses in the presence of exemplars are significantly different from results obtained without examplars (see Tables 3, 4, 5), for all types of exemplars and all models (significance level 0.01; sole exception is Llama-2 with CoT prompting as can be seen in Table 5 rows 1-2). 13https://huggingface.co/teknium/OpenHermes-2. 5-Mistral-7B Figure 5: Results on GEN-comm. Alternative prompt template described in Section F F Additional experimental results We demonstrate additional experimental results based on an alternative prompting set-up in Figures 5 and 6. To this end, we prompt LLMs using the following template where [INST] is an example of a model-specific special token used in chat templating. For example: Prompt [INST] Do you believe that the following statement is accurate: ‘Birds fly’ Please answer yes or no. [/INST] For GEN-comm, we retain all generics to which an LLM responds yes to the prompt above. We then prompt LLMs anew supplying an exception, instantiation or random exemplar together with a generic for both datasets. For example: Prompt [INST] Penguins do not fly. Do you believe that the following statement is accurate: ‘Birds fly’ Please answer yes or no. [/INST] We find that results differ significantly between 568 Figure 6: Results on GEN-abs. Alternative prompt template described in Section F. the two conditions (no exemplar vs. with an exemplar) (see Table 4 for statistical test results). On GEN-comm (Figure 5) agreement rates drop considerably in the presence of exceptions which mirrors nonmonotonic reasoning patterns. Agreement is higher, yet still drops significantly in the presence of instantiations. No LLM maintains perfectly consistent responses at the addition of random instantiations or exceptions. When prompting with random exemplars surprisingly agreement drops, most notably for Llama-2 and Zephyr. For the reader’s interest, we also include results on the portion of generics in GEN-comm which is rejected by LLMs a priori (Table 7). As expected, agreement increases from zero at the addition of an instantiation to the prompt, most notably for OpenHermes and Starling. However, LLMs should maintain a response of no at the addition of an exception or random exemplar to the prompt. This is visibly not the case with agreement rates increasing significantly for all models. On GEN-abs, agreement drops considerably at the addition of an exception for all models except OpenHermes (Figure 6). Notably OpenHermes and Starling-LM appear to yield consistent responses in the presence of our controls, the random exemplars, while Llama-2 and Zephyr perform worst in that regard. Figure 7: Results on generics of GEN-comm that are rejected by LLMs a priori. Alternative prompt template described in Section F. Missing bars indicate that agreement for ‘no exemplar’ is zero. F.1 Chain-of-thought prompting Additionally, we ran experiments using zero-shot Chain-of-Thought (CoT) prompting in the style of (Kojima et al., 2022) by appending ‘Let’s think step by step’ to our prompts. We present results on GEN-comm in Figure 8 and results on GEN-abs in Figure 9. On GEN-comm, agreement rates drop significantly for all models at the addition of exceptions, instantiations or shuffled exemplars (with the exception of Llama-2 when we include instantiations; see Table 5 for significance results). Agreement rates drop more given exceptions in comparison to instantiations or unrelated examplars for Mistral, Mixtral, OpenHermes and Starling. For Llama-2 and Zephyr agreement rates fall below 10% at the addition of unrelated exemplars. On GEN-abs, agreement rates fall drastically given exceptions and equal 0% for Llama-2, Mixtral, Starling and Zephyr. The same is true for shuffled instantiations. OpenHermes is the only model to maintain agreement rates above 90% when presented with instantiations or shuffled exceptions. 569 Figure 8: Results on GEN-comm using zero-shot CoT prompting. Figure 9: Results on GEN-abs using zero-shot CoT prompting. Missing bars indicate agreement rate of 0%. 570 Model prompt setting p-value Llama-2-13b-chat-hf exception 1.2444035588550786e-84 Llama-2-13b-chat-hf instantiation 1.3944889010907487e-28 Llama-2-13b-chat-hf exception (shuffled) 1.3664041679452567e-86 Llama-2-13b-chat-hf instantiation (shuffled) 3.7504271121760947e-128 OpenHermes-2.5-Mistral-7B exception 2.0884875837625446e-45 OpenHermes-2.5-Mistral-7B instantiation 7.237829871739995e-08 OpenHermes-2.5-Mistral-7B exception (shuffled) 1.733880104231141e-27 OpenHermes-2.5-Mistral-7B instantiation (shuffled) 9.799073841979368e-26 Starling-LM-7B-alpha exception 1.0691632340127197e-102 Starling-LM-7B-alpha instantiation 7.247101964362887e-14 Starling-LM-7B-alpha exception (shuffled) 3.14927364689666e-77 Starling-LM-7B-alpha instantiation (shuffled) 5.588400099286033e-62 Mixtral-8x7B-Instruct-v0.1 exception 5.599059901868063e-84 Mixtral-8x7B-Instruct-v0.1 instantiation 4.84145282763492e-53 Mixtral-8x7B-Instruct-v0.1 exception (shuffled) 1.8855259265259482e-119 Mixtral-8x7B-Instruct-v0.1 instantiation (shuffled) 3.312378211336223e-151 WizardLM-13B-V1.2 exception 3.169934685227252e-109 WizardLM-13B-V1.2 instantiation 1.244192114854348e-15 WizardLM-13B-V1.2 exception (shuffled) 6.7440576522393956e-49 WizardLM-13B-V1.2 instantiation (shuffled) 3.312389179997469e-50 zephyr-7b-beta exception 3.2434215158679907e-99 zephyr-7b-beta instantiation 2.68778179464934e-25 zephyr-7b-beta exception (shuffled) 2.7464111838608292e-137 zephyr-7b-beta instantiation (shuffled) 2.671546422248841e-187 Mistral-7B-Instruct-v0.2 exception 6.521923113646968e-71 Mistral-7B-Instruct-v0.2 instantiation 2.0670658180782593e-15 Mistral-7B-Instruct-v0.2 exception (shuffled) 6.923699393684986e-120 Mistral-7B-Instruct-v0.2 instantiation (shuffled) 4.9982887921763924e-139 Table 3: Results of Wilcoxon signed ranked test for paired samples. We compare agreement of LLMs to generics with and without an exemplar (one of exception, instantiation, exception (shuffled), instantiation (shuffled). Results are obtained using the original prompt template described in section 5 and correspond to the main results in the paper in Figure 2. 571 Model prompt setting p-value Llama-2-13b-chat-hf exception 1.2402659787920488e-62 Llama-2-13b-chat-hf instantiation 1.8577351435735865e-29 Llama-2-13b-chat-hf exception (shuffled) 6.558556037957885e-98 Llama-2-13b-chat-hf instantiation (shuffled) 9.990918651724453e-148 OpenHermes-2.5-Mistral-7B exception 9.041178413936276e-31 OpenHermes-2.5-Mistral-7B instantiation 0.025347318677468252 OpenHermes-2.5-Mistral-7B exception (shuffled) 9.236596617174027e-13 OpenHermes-2.5-Mistral-7B instantiation (shuffled) 1.2052982584446398e-13 Starling-LM-7B-alpha exception 4.84145282763492e-53 Starling-LM-7B-alpha instantiation 0.0009111188771537128 Starling-LM-7B-alpha exception (shuffled) 9.89884333064868e-40 Starling-LM-7B-alpha instantiation (shuffled) 6.7440576522393956e-49 Mixtral-8x7B-Instruct-v0.1 exception 2.6891242658680216e-51 Mixtral-8x7B-Instruct-v0.1 instantiation 2.8706760140807313e-27 Mixtral-8x7B-Instruct-v0.1 exception (shuffled) 7.287679729162835e-32 Mixtral-8x7B-Instruct-v0.1 instantiation (shuffled) 1.8712872006902566e-36 WizardLM-13B-V1.2 exception 5.8780179991539864e-33 WizardLM-13B-V1.2 instantiation 9.633570086430965e-07 WizardLM-13B-V1.2 exception (shuffled) 7.74421643104407e-06 WizardLM-13B-V1.2 instantiation (shuffled) 2.5802843041604163e-08 zephyr-7b-beta exception 3.525239394844374e-74 zephyr-7b-beta instantiation 2.476062658812572e-30 zephyr-7b-beta exception (shuffled) 3.7238080067294776e-86 zephyr-7b-beta instantiation (shuffled) 9.415767818703249e-116 Mistral-7B-Instruct-v0.2 exception 3.9328331793483447e-54 Mistral-7B-Instruct-v0.2 instantiation 2.0670658180782593e-15 Mistral-7B-Instruct-v0.2 exception (shuffled) 3.699479889932592e-64 Mistral-7B-Instruct-v0.2 instantiation (shuffled) 2.6476609044572044e-100 Table 4: Results of Wilcoxon signed ranked test for paired samples. We compare agreement of LLMs to generics with and without an exemplar (one of exception, instantiation, exception (shuffled), instantiation (shuffled)). These results correspond to the alternative prompting style and results described in section F. 572 Model prompt setting p-value Llama-2-13b-chat-hf exception 0.025347318677468252 Llama-2-13b-chat-hf instantiation 0.31731050786291415 Llama-2-13b-chat-hf exception (shuffled) 0.0009111188771537128 Llama-2-13b-chat-hf instantiation (shuffled) 3.737981840170154e-05 Starling-LM-7B-alpha exception 4.320463057827488e-08 Starling-LM-7B-alpha instantiation 5.733031437583866e-07 Starling-LM-7B-alpha exception (shuffled) 1.5417257900279904e-08 Starling-LM-7B-alpha instantiation (shuffled) 1.1825298845719069e-11 OpenHermes-2.5-Mistral-7B exception 2.3159484001346495e-35 OpenHermes-2.5-Mistral-7B instantiation 3.552964224155306e-33 OpenHermes-2.5-Mistral-7B exception (shuffled) 4.4044942248007814e-32 OpenHermes-2.5-Mistral-7B instantiation (shuffled) 1.773177466197228e-41 Mixtral-8x7B-Instruct-v0.1 exception 2.9303133449994263e-53 Mixtral-8x7B-Instruct-v0.1 instantiation 4.474661339129513e-39 Mixtral-8x7B-Instruct-v0.1 exception (shuffled) 6.758775639492622e-37 Mixtral-8x7B-Instruct-v0.1 instantiation (shuffled) 5.058648827940248e-40 zephyr-7b-beta exception 3.6136286243610392e-96 zephyr-7b-beta instantiation 8.956226067732092e-94 zephyr-7b-beta exception (shuffled) 1.2813208444193637e-111 zephyr-7b-beta instantiation (shuffled) 2.0076004412348868e-151 Mistral-7B-Instruct-v0.2 exception 3.294362383314041e-67 Mistral-7B-Instruct-v0.2 instantiation 6.210993425425191e-19 Mistral-7B-Instruct-v0.2 exception (shuffled) 2.380470154600155e-54 Mistral-7B-Instruct-v0.2 instantiation (shuffled) 1.2444035588550786e-84 Table 5: Results of Wilcoxon signed ranked test for paired samples. We compare agreement of LLMs to generics with and without an exemplar (one of exception, instantiation, exception (shuffled), instantiation (shuffled)). These results correspond to Chain-of-Thought prompting results described in section F. 573
2024_short
51
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 574–582 August 11-16, 2024 ©2024 Association for Computational Linguistics ConstitutionalExperts: Training a Mixture of Principle-based Prompts Savvas Petridis∗, Ben Wedin∗, Ann Yuan∗, James Wexler, Nithum Thain Google Research {petridis,wedin,annyuan,jwexler,nthain}@google.com Abstract Large language models (LLMs) are highly capable at a variety of tasks given the right prompt, but writing one is still a difficult and tedious process. In this work, we introduce ConstitutionalExperts, a method for learning a prompt consisting of constitutional principles (i.e. rules), given a training dataset. Unlike prior methods that optimize the prompt as a single entity, our method incrementally improves the prompt by surgically editing individual principles. We also show that we can improve overall performance by learning unique prompts for different semantic regions of the training data and using a mixture-ofexperts (MoE) architecture to route inputs at inference time. We compare our method to other state of the art prompt-optimization techniques across six benchmark datasets. We also investigate whether MoE improves these other techniques. Our results suggest that ConstitutionalExperts outperforms other prompt optimization techniques by 10.9% (F1) and that mixture-of-experts improves all techniques, suggesting its broad applicability. 1 Introduction Large language models (LLMs) are highly capable at a variety of NLP tasks when prompted with appropriate natural language instructions (Bubeck et al., 2023; Brown et al., 2020). However, writing an LLM prompt remains a difficult and ambiguous task, often involving significant experimentation and effort (Zamfirescu-Pereira et al., 2023). Many methods for automatic prompt optimization have recently been explored. Some rely on access to model parameters and gradients to optimize discrete (Shin et al., 2020) or continuous (Lester et al., 2021; Qin and Eisner, 2021) prompts given task-specific training data. Others involve revising the task-prompt with discrete manipulations, ∗Equal contribution. Figure 1: Training loop for a single ConstitutionalExpert. Our method samples incorrect predictions from a training dataset, then uses two separate LLMs to mutate (LLM-O) the prompt given these observed mistakes and then evaluate (LLM-S) these mutated prompts on a validation set, to determine which of these new candidate experts survive for the next iteration. such as through reinforcement learning (Deng et al., 2022; Zhang et al., 2022; Hao et al., 2022). Discrete mutations of the task-prompt can also be made via another LLM (Zhou et al., 2023; Pryzant et al., 2023). More recent work has explored automatically optimizing both the task-prompt as well as metaprompts for deriving mutations (Fernando et al., 2023). These methods can still produce hardto-interpret prompts, and concurrently, they all assume that a single, optimized prompt should be applied at inference. In this work we introduce ConstitutionalExperts, a technique for producing a set of principle-based prompts and selectively applying them at inference. Our approach is inspired by the ConstitutionalAI workflow (Bai et al., 2022) used to create fine-tuning datasets for LLMs. Our method discovers and incrementally improves a prompt via a set of principles or rules. We refer to one of these principle-based prompts as a ConstitutionalExpert, or simply "Expert." Similar to prior techniques, our 574 Figure 2: Hard-routing the ConstitutionalExperts at inference. Each ConstitutionalExpert is learned from a cluster in the training data. To then hard-route a ConstitutionalExpert at inference, we compute the similarity between the test sample and each cluster’s centroid (A), and then route the sample to the most similar expert (B). method iteratively updates an initial prompt (via mutation metaprompts), based on its performance on a training set (Pryzant et al., 2023). However, the prompts produced by ConstitutionalExperts are structured as a list of principles or rules, thus we refer to one of these prompts as a ConstitutionalExpert. This structure enables targeted, incremental changes to the learned prompt: instead of rewriting the entire prompt, a principle is either revised, added, or removed at each step. Additionally, we train a unique ConstitutionalExpert for different semantic regions of the training data. Thus each ConstitutionalExpert specializes in a different aspect of the problem space, enabling them to collectively outperform generalist prompts. We drew lessons from prior work showing that selecting the most semantically similar examples at inference time improves the performance of few-shot prompts (Nori et al., 2023). To evaluate ConstitutionalExperts, we compare it to state-of-the-art prompt optimizing baselines, including ProTeGi (Pryzant et al., 2023) and PromptBreeder (Fernando et al., 2023), across six NLP tasks. We observe that our method outperforms the prompt optimization baselines by a statistically significant margin, and that MoE improves the baselines on average. We finish by discussing the limitations of our method and future work. 2 ConstitutionalExperts Similar to ProTeGi (Pryzant et al., 2023), our method optimizes discrete prompts with natural language using a training dataset. However ConstitutionalExperts differs in key ways from ProTeGi and other natural language prompt optimization techniques: firstly, prompts ("Experts") are trained via structured rather than free-form mutations, where a single principle is either added, reMethod Parl-S Parl-M OpenAI ETHOS Liar Sarcasm Prompt Optimizers CE 0.69 0.65 0.84 0.84 0.74 0.64 ProTeGi 0.64 0.45 0.83 0.84 0.61 0.63 PromptBreeder 0.12 0.49 0.75 0.73 0.68 0.22 Prompt Optimizers + MoE CE 0.71 0.67 0.86 0.86 0.74 0.65 ProTeGi 0.65 0.6 0.8 0.84 0.59 0.74 PromptBreeder 0.15 0.56 0.76 0.72 0.56 0.22 Standard Prompting Techniques Zero-shot 0.5 0.42 0.79 0.77 0.4 0.31 Few-shot (n=8) 0.65 0.52 0.81 0.82 0.57 0.60 Chain of Thought 0.61 0.41 0.79 0.71 0.45 0.22 LoRA Tuning 0.95 0.84 0.85 0.75 0.73 0.61 Table 1: Main results from the evaluation. Values are F1 score when using ‘text-bison’ for scoring. For ConstitutionalExperts (CE), ProTeGi, and PromptBreeder, the value is the average F1 score of three runs. For all datasets, the MoE-based versions of these methods have the highest F1 scores, with the exception of the Liar dataset, which tied with vanilla CE. moved, or revised. This constraint of using a small set of interpretable principles introduces significant inductive bias, which we hypothesize will improve the method’s generalizability. Secondly, we employ a mixture-of-experts (Masoudnia and Ebrahimpour, 2014) architecture by training a unique Expert for each semantic cluster of the training data, and use embedding similarity to route individual examples at inference time (Figure 2). Clustering. To cluster the training dataset, we calculate the embeddings of each training sample with the PaLM-based text-embedding-gecko@001 model, and then cluster with k-means. We set k to be either 2 or 3, selecting the setting with the higher silhouette score (see Table 7). Training the Experts. For each cluster, we train 575 an Expert consisting of a set of principles P that are used to instruct a scoring model (LLM-S) (Figure 1). Our method for training an Expert is to initialize P (initial prompts can be found in Table 8), evaluate on a batch of training data, and update P given incorrect predictions. More specifically: 1. Get feedback Using P for inference, sample N incorrect predictions from the training data. For each, ask an optimizer model (LLM-O) to explain why the prediction is incorrect. 2. Evolve P Ask LLM-O for M mutations to make to P, given a list of options. Options are either to edit or delete any of the existing principles in P, or to add a new principle to P. Finally, perform the suggested mutations to generate a set of candidate P ′ (note P ′ does not necessarily fix the underlying incorrect prediction). 3. Evaluate Candidates Obtain predictions on validation set with LLM-S given candidate P ′. We use beam search to better explore the prompt space. We generate B initial sets of principles P and train each of them according to the protocol above. To evaluate candidates, we use the "UCB Bandit" selection procedure proposed by (Pryzant et al., 2023), using LLM-S and the validation set to approximate and select the top B candidates (as measured by F1 in our experiments) for the following iteration. We repeat this process J times. Routing at inference. We employ a "hard routing" approach during prediction by first embedding the input sample vtest and measuring its cosine similarity to each cluster centroid {v1, v2, . . . , vk} (Fig. 2A). We then route prediction to the Expert corresponding to the nearest centroid: vi = argmax vj ∈{v1, v2, . . . , vk}(vj · vtest), (Fig. 2B). 3 Evaluation 3.1 Data Building on prior work (Pryzant et al., 2023; Fernando et al., 2023; Mozes et al., 2023), we evaluate our technique on six text classification datasets, including fake news, adversarial toxicity, hatespeech, policy violation, and sarcasm detection. The ParlAI datasets (Dinan et al., 2019) build on the Wikipedia Toxic Comments dataset (Wulczyn et al., 2017) by asking annotators to submit messages that circumvent iteratively improving safety classifiers trained on that dataset. Parl Single Adversarial (Parl-S) labels a single comment, while the Parl Multi (Parl-M) labels a multi-turn conversation. The OpenAI Moderation dataset (Markov et al., 2023) is a dataset of 1.7k prompts from OpenAI labeled with whether they violate any of their undesirable content policies including sexual content, hateful content, violence, self-harm, and harassment. The ETHOS dataset (Mollas et al., 2020) is a hate-speech detection dataset based on Youtube and Reddit comments. The Liar dataset (Wang, 2017) is a fake news detection dataset containing 12.8K short statements from PolitiFact.com. Finally, the ArSarcasm (Sarcasm) dataset (Farha and Magdy, 2020) an Arabic language sarcasm detection dataset containing 10.5k tweets. 3.2 Setup We split each dataset into train, test, and validation splits. Where canonical splits are provided in the published data, those are used. Otherwise, we sample 20% of the data to act as each of the test and validation splits, using the remaining 60% for training. Results are reported based on the F1 score of the test set. For clustering experiments we maintained the aforementioned splits, and performed k-means on just the training data. We created clustered validation splits by querying the nearest cluster centroid of each validation example. Unless otherwise stated, all methods and baselines were trained with two variants of Google’s ‘PaLM 2 for Text’1 foundation models, both available through the Vertex AI platform. The ‘textbison’ and ‘text-unicorn’ models were used for LLM-S and LLM-O respectively. For both, the first version (@001) was used in January 2024. Our hyperparameter settings across tasks were as follows: in a single iteration we sampled up to three incorrect predictions (N = 3) and generated two mutation candidates (M = 2) for each. We generated three initial candidate prompts (B = 3), and optimized over five iterations (J = 5). 3.3 Baselines We compare ConstitutionalExperts to standard, established prompting techniques where a single inference call is made for each prediction: zero-shot, few-shot, chain of thought (Wei et al., 2022), and LoRA tuning (Hu et al., 2021). 1https://cloud.google.com/vertex-ai/docs/ generative-ai/learn/models 576 Additionally, we compare against two recent state-of-the-art discrete prompt optimization techniques. ProTeGi (Pryzant et al., 2023) calculates natural language “gradients” on minibatches of data, and applies prompt updates in the opposite semantic direction. PromptBreeder (Fernando et al., 2023) optimizes two sequential prompts using a genetic algorithm, and after each round applies mutations to both the task-prompts as well as the mutator prompts. For both methods, we applied MoE using the same clustering and routing as ConstitutionalExperts to evaluate its broader applicability. 3.4 Results Overall Results. The full set of results from the evaluation are shown in Table 1. ConstitutionalExperts outperforms the best published baseline across datasets by a statistically significant margin (p = 0.016) with an average F1 improvement of 10.9%.2 The inclusion of MoE in ConstitutionalExperts improves F1 across datasets by 2.0% (p = 0.017). Adding MoE also improved ProTeGi by 9.1% (F1), and PromptBreeder by 2.9% (F1) on average across tasks, suggesting that this approach has a broader applicability to different discrete prompt optimization techniques. To better understand the relative benefit of the two components of our algorithm (prompt optimization and MoE) we run two additional comparisons. When comparing all methods enhanced with MoE, CE outperforms baselines on 5 of 6 datasets, with an average F1 improvement over the best alternative of 7.3%. If we ablate MoE from all comparisons, CE led to an average F1 improvement over the best alternative by 8.7%. Together, this analysis indicates that each component of our algorithm leads to an improvement across datasets. Surprisingly, CE with MoE even outperforms LoRA tuning for four of the six datasets, suggesting that task performance need not trade off with interpretability and controllability. For completeness, we include results with LLMO and LLM-S both set to text-unicorn in Table 6. Qualitative Analysis. We observe that PromptBreeder prompts are somewhat less interpretable than CE and ProTeGi prompts for the same task. For example, the top performing PromptBreeder prompt for the Parl-Single task begins with: ‘A 2Following (Demšar, 2006) we use the Wilcoxon signedranks test to compute significance across multiple datasets. good metric to test on in this case will be % of instances where the model’s prediction and the ground truth matches.’ A core feature of PromptBreeder is the mutability of the meta-prompts themselves (which drive iteration of the final prompt), suggesting that the inductive biases imposed by CE and ProTeGi meta-prompts are actually beneficial. Furthermore, the CE protocol encourages finegrained, incremental changes from one iteration to the next. On average, we observe peak performance after 3 iterations of our protocol (Appendix, Table 3). Table 2 shows the evolution of a ConstitutionalExpert across five iterations for the ETHOS task. Each sentence corresponds to a single principle. In the first three iterations new principles are added, while in iterations [4] and [5] existing principles are refined to more precisely capture facets of the underlying data (for example the generic principle in [3] is revised to be more specific in [4]). Each edit leads to a small performance improvement, as shown in the table. See Table 3 (appendix) for top performing prompts across tasks and methods. [1] (0.79 F1) False: The comment is not hateful. True: The comment is hateful. [2] (0.79 F1) False: The comment is not hateful. True: The comment is hateful. The comment threatens violence towards an entire group of people. [3] (0.81 F1) False: The comment is not hateful. True: The comment is hateful. The comment threatens violence towards an entire group of people. The comment contains hate speech directed at an individual. [4] (0.81 F1) False: The comment is not hatefuldoes not contain hate speech and does not threaten violence towards a group or an individual. True: The comment is hateful. The comment threatens violence towards an entire group of people. The comment contains hate speech directed at an individual. [5] (0.85 F1) False: The comment does not contain hate speech and does not threaten violence towards a group or an individual. True: The comment is hateful towards an entire group of people based on the protected characteristics such as race, religion, sex, and sexual orientation. The comment threatens violence towards an entire group of people. The comment contains hate speech directed at an individual. Table 2: Evolution of the ETHOS prompt by the ConstitutionalExperts method, showing incremental improvements between iterations. We also observe evidence of specialization among Experts where nexperts > 1. For example Expert 1 of the Parl-Multi task identifies sexually explicit speech (‘The utterance is a sexual innuendo’), while Expert 2 identifies sarcastic or insulting speech (‘Utterance is a sarcastic response to a positive statement’) (Table 5). 4 Conclusion We propose ConstitutionalExperts, a method for learning and applying a mixture of principle-based prompts ("Experts"). Building on prior work, we 577 introduce a novel method for mutating each Expert, which involves (1) determining what edits to make to the expert’s principles and (2) applying these targeted edits. We uniquely employ a MoE approach to route test samples at inference to the most applicable Expert. Our evaluation across six benchmark datasets suggest that ConstitutionalExperts outperforms state of the art discrete prompt optimizers and standard prompting methods. We also demonstrate the general applicability of MoE, which improved all three prompt optimization techniques. There are many avenues for future work, including testing our method on different NLP tasks, exploring alternative MoE clustering methods and routing, as well as exploring human interventions in this method to guide expert edits. 5 Limitations Task domain. The datasets we tested were limited to binary classification tasks, however this method could reasonably be extended to any other task where the goal is to optimize a discrete text prompt using training data. Other classification tasks would be a natural extension of the method, as we already map principles to individual classes. Extending the method to tasks where the output is not a class might require additional investigation into how best to select examples, derive feedback, and utilize feedback for principle writing (i.e. not mapping them directly to a class label). Principle diversity. The prompts that generate explanations and revise and write principles are unchanged during the entire optimization process. These prompts outline the criteria for good explanations and principles, but it may be the case that different criteria are better for different domains, or a mixture of different principles (e.g. some very specific, some more generalized) leads to better overall performance. To expand the search space, the optimization prompts could be dynamic (or mutated like in (Fernando et al., 2023)) in order to increase the diversity of principles generated (and thus classifiers tested). Alternatively, using a human-in-the-loop that incorporates real-time feedback to generate principles such as (Petridis et al., 2023) might provide more efficient learning of principles or higher overall performance. Principle generalizability and overfitting. Currently prompt mutations are executed using feedback from a single example, with no explicit history of previous examples or feedback. These edits might be too specific, or erase parts of previous principles that are useful. In order to make principles more generalizable, it might be beneficial to batch similar examples in order to derive explanations or principles. Other methods of editing principles that more robustly reconcile previous explanations or principles might help mitigate any erasure of useful information. Positional bias. LLMs have demonstrated bias in the classification domain with respect to giving a higher value or importance to the first option presented (Wang et al., 2023a), which we also observed during experimentation. For binary classification, this consistently alters the overall sensitivity of the classifier in a single direction (i.e. if the positive class is first, we would expect higher recall). If this method were to be extended to other classification domains, ensembling predictions or other methods of mitigating positional bias might be necessary. Additionally, there might be other steps in our method (e.g. the selection of mutation operation) that might benefit from ensembling predictions. Prompt format. Our prompt combines all rules for a given class into a single label, and predicts the final label directly. However, there may be other prompt formats with the same inputs and rules that can be combined with our method to improve overall performance. For example, chain-of-thought reasoning (Wei et al., 2022) has increased performance in other domains, and might provide additional improvements to the method. Sampling multiple times to generate self-consistent reasoning (Wang et al., 2023b) might provide additional boosts to performance. Duplicate or contradictory principles. The CE metaprompts are crafted to encourage the generation of granular principles. However candidate Constitutional Experts may nevertheless include duplicate principles, or principles at different levels of resolution (for example where one principle implies another). While it’s unclear whether this hurts performance, for the sake of interpretability we would like for constitutions to be as parsimonious as possible. Future experiments could be done in using the optimizer LLM to reconcile and clean principles during training. Clustering and routing. Our method currently uses k-means to cluster the data and train each classifier separately. At inference time, individual predictions are routed to the classifier with the 578 closest corresponding centroid. There might be alternative methods of clustering besides k-means or alternative routing methods that would help the method in the case of outliers or overlapping clusters. Additionally, it may be beneficial to ensemble the predictions from each classifier based on relevance, or retrieve the most relevant principles from multiple classifiers rather than use all principles from a single classifier during inference. References Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, Carol Chen, Catherine Olsson, Christopher Olah, Danny Hernandez, Dawn Drain, Deep Ganguli, Dustin Li, Eli Tran-Johnson, Ethan Perez, Jamie Kerr, Jared Mueller, Jeffrey Ladish, Joshua Landau, Kamal Ndousse, Kamile Lukosuite, Liane Lovitt, Michael Sellitto, Nelson Elhage, Nicholas Schiefer, Noemi Mercado, Nova DasSarma, Robert Lasenby, Robin Larson, Sam Ringer, Scott Johnston, Shauna Kravec, Sheer El Showk, Stanislav Fort, Tamera Lanham, Timothy Telleen-Lawton, Tom Conerly, Tom Henighan, Tristan Hume, Samuel R. Bowman, Zac Hatfield-Dodds, Ben Mann, Dario Amodei, Nicholas Joseph, Sam McCandlish, Tom Brown, and Jared Kaplan. 2022. Constitutional ai: Harmlessness from ai feedback. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel HerbertVoss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877–1901. Curran Associates, Inc. Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, Harsha Nori, Hamid Palangi, Marco Tulio Ribeiro, and Yi Zhang. 2023. Sparks of artificial general intelligence: Early experiments with gpt-4. Janez Demšar. 2006. Statistical comparisons of classifiers over multiple data sets. The Journal of Machine learning research, 7:1–30. Mingkai Deng, Jianyu Wang, Cheng-Ping Hsieh, Yihan Wang, Han Guo, Tianmin Shu, Meng Song, Eric Xing, and Zhiting Hu. 2022. RLPrompt: Optimizing discrete text prompts with reinforcement learning. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 3369–3391, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Emily Dinan, Samuel Humeau, Bharath Chintagunta, and Jason Weston. 2019. Build it break it fix it for dialogue safety: Robustness from adversarial human attack. arXiv preprint arXiv:1908.06083. Ibrahim Abu Farha and Walid Magdy. 2020. From arabic sentiment analysis to sarcasm detection: The arsarcasm dataset. In Proceedings of the 4th Workshop on Open-Source Arabic Corpora and Processing Tools, with a Shared Task on Offensive Language Detection, pages 32–39. Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, and Tim Rocktäschel. 2023. Promptbreeder: Self-referential self-improvement via prompt evolution. Yaru Hao, Zewen Chi, Li Dong, and Furu Wei. 2022. Optimizing prompts for text-to-image generation. arXiv preprint arXiv:2212.09611. Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. Lora: Low-rank adaptation of large language models. Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3045–3059, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Todor Markov, Chong Zhang, Sandhini Agarwal, Florentine Eloundou Nekoul, Theodore Lee, Steven Adler, Angela Jiang, and Lilian Weng. 2023. A holistic approach to undesired content detection in the real world. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pages 15009–15018. Saeed Masoudnia and Reza Ebrahimpour. 2014. Mixture of experts: a literature survey. Artificial Intelligence Review, 42:275–293. Ioannis Mollas, Zoe Chrysopoulou, Stamatis Karlos, and Grigorios Tsoumakas. 2020. Ethos: an online hate speech detection dataset. arXiv preprint arXiv:2006.08328. Maximilian Mozes, Jessica Hoffmann, Katrin Tomanek, Muhamed Kouate, Nithum Thain, Ann Yuan, Tolga Bolukbasi, and Lucas Dixon. 2023. Towards agile text classifiers for everyone. arXiv preprint arXiv:2302.06541. Harsha Nori, Yin Tat Lee, Sheng Zhang, Dean Carignan, Richard Edgar, Nicolo Fusi, Nicholas King, Jonathan Larson, Yuanzhi Li, Weishung Liu, Renqian Luo, Scott Mayer McKinney, Robert Osazuwa Ness, Hoifung Poon, Tao Qin, Naoto Usuyama, 579 Chris White, and Eric Horvitz. 2023. Can generalist foundation models outcompete special-purpose tuning? case study in medicine. Savvas Petridis, Ben Wedin, James Wexler, Aaron Donsbach, Mahima Pushkarna, Nitesh Goyal, Carrie J. Cai, and Michael Terry. 2023. Constitutionmaker: Interactively critiquing large language models by converting feedback into principles. Reid Pryzant, Dan Iter, Jerry Li, Yin Lee, Chenguang Zhu, and Michael Zeng. 2023. Automatic prompt optimization with “gradient descent” and beam search. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 7957–7968, Singapore. Association for Computational Linguistics. Guanghui Qin and Jason Eisner. 2021. Learning how to ask: Querying LMs with mixtures of soft prompts. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5203–5212, Online. Association for Computational Linguistics. Taylor Shin, Yasaman Razeghi, Robert L. Logan IV, Eric Wallace, and Sameer Singh. 2020. AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4222–4235, Online. Association for Computational Linguistics. Peiyi Wang, Lei Li, Liang Chen, Zefan Cai, Dawei Zhu, Binghuai Lin, Yunbo Cao, Qi Liu, Tianyu Liu, and Zhifang Sui. 2023a. Large language models are not fair evaluators. William Yang Wang. 2017. " liar, liar pants on fire": A new benchmark dataset for fake news detection. arXiv preprint arXiv:1705.00648. Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2023b. Self-consistency improves chain of thought reasoning in language models. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, brian ichter, Fei Xia, Ed Chi, Quoc V Le, and Denny Zhou. 2022. Chain-of-thought prompting elicits reasoning in large language models. In Advances in Neural Information Processing Systems, volume 35, pages 24824–24837. Curran Associates, Inc. Ellery Wulczyn, Nithum Thain, and Lucas Dixon. 2017. Ex machina: Personal attacks seen at scale. In Proceedings of the 26th international conference on world wide web, pages 1391–1399. J.D. Zamfirescu-Pereira, Richmond Y. Wong, Bjoern Hartmann, and Qian Yang. 2023. Why johnny can’t prompt: How non-ai experts try (and fail) to design llm prompts. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, CHI ’23, New York, NY, USA. Association for Computing Machinery. Tianjun Zhang, Xuezhi Wang, Denny Zhou, Dale Schuurmans, and Joseph E. Gonzalez. 2022. Tempera: Test-time prompting via reinforcement learning. Yongchao Zhou, Andrei Ioan Muresanu, Ziwen Han, Keiran Paster, Silviu Pitis, Harris Chan, and Jimmy Ba. 2023. Large language models are human-level prompt engineers. A Appendix Clustered prompts. Table 4 includes a sample of evolved prompts for the ETHOS task, where nexperts = 3. Table 5 includes a sample of evolved prompts for the Parl-Multi task, where nexperts = 2. Results with text-unicorn as both scorer and optimizer Sample prompt templates Below is the prompt template used for classification. Consider the following example: {% for input_feature in input_features %} <{{input_feature.name}}> {{input_feature.value}} </{{input_feature.name}}> {% endfor %} {{task_description}} Let's think step-by-step. Consider the following possible answers: {% for class in classes -%} answer_{{class.id}}: {% for attribute in class.attributes -%} {{attribute}} {% endfor -%} {% endfor -%} Provide the answer that best applies to this example: answer_ The class "attributes" in the prompt template are the parameters to be optimized during training. 580 Dataset ConstitutionalExperts ProTeGi PromptBreeder Parl single adversarial True: Response includes a rhetorical question or sarcastic comment. Response includes a negative comment about a person or group of people. Response makes a negative assumption about the person’s character. False: Response is positive or neutral. Response does not include any negative comments about a person or group of people. # Task Determine if the text includes any language that could be interpreted as offensive or inappropriate in a professional context. Furthermore, the text may contain language that is not overtly offensive, but could still be deemed unsuitable for a professional setting. A good metric to test on in this case will be % of instances where the model’s prediction and the ground truth matches. [[first model response]] Determine the offensiveness of a text. Parl multi Yes: The utterance is rude or impolite. It may be insulting or hurtful, or it may simply be in poor taste. The utterance is insulting or hurtful. The utterance is dismissive or ignores a question that was asked. No: The utterance does not contain any insults or threats. The utterance does not contain any vulgar or offensive language. The utterance is not intended to provoke or incite anger. # Task Determine whether the provided statement is deemed harmful within the context of the discussion. Consider the following factors when making your decision: * The tone of the statement * The content of the statement * The context of the discussion * The relationship between the speakers If the statement is likely to make the other person feel uncomfortable, unsafe, or unwelcome, then it is likely to be considered harmful. People usually say something offensive when they are angry or upset. When one person says something hurtful in a conversation, it can end the conversation. One common example for this would be when someone has insulted another person. So, we should try to figure out If one of our friends is angry or upset and to notice hurtful things and make sure people we know didn’t said something offensive or mean and help them to calm down. [[first model response]] When trying to figure out whether the final turn in a conversation is offensive, you need to consider a few things. First, think about what was actually said. Is it possible to interpret it in a way that makes it less offensive than it might have seemed at first? Second, pay attention to the context. Did anything else that was said earlier in the conversation make it more likely that this final statement would be interpreted as offensive? Finally, remember that people’s expectations also come into play. For OpenAI Moderation True: When the prompt contains hate speech, sexually suggestive content, or content related to self-harm. False: When the prompt does not contain hate speech, sexually suggestive content, or content related to selfharm. # Task Is the text offensive or harmful? Offensive or harmful content is text that is likely to be considered offensive or harmful by a reasonable person. This includes text that is violent, sexually explicit, or discriminatory. However, it is important to note that not all sensitive or controversial topics are necessarily offensive or harmful. For example, a discussion of race or gender may be sensitive, but it is not necessarily offensive or harmful. If you have voice available feature, say a word such as Google or hey Apple and you will access all your information like an Apple alchemist. [[first model response]] Determine in the comments if they may have toxic content or use a more severe lexicon if we are comparing it to others. ETHOS True: The comment is hateful towards an entire group of people based on the protected characteristics such as race, religion, sex, and sexual orientation. The comment threatens violence towards an entire group of people. The comment contains hate speech directed at an individual. False: The comment does not contain hate speech and does not threaten violence towards a group or an individual. # Task Is the example an instance of hate speech? Consider the context of the example when making your decision. The goal of this model is to help people see if any text they write might be seen as inappropriate or hurtful language. [[first model response]] Hate speech is something that is said with an intention to evoke hatred to certain individuals. You should first see if the main topic includes discrimination against one type of people. Stereotypes are a good way for you to detect so: words describing a general negative quality associated to people of different race (often skin color is used for differentiation). If stereotypes show up a lot that may have indicated something more severe will appear after. You then need to check for threatening them by bodily harm if they choose certain people. Liar No: The statement is false as it is. The statement is partially true, or it is true but misleading. The statement is true but misleading in the context it was made. Yes: It can be proven that the facts stated in the statement are correct. The statement is true but misleading in a different context. # Task Is there any evidence backing up the lawmaker’s statement? Consider the context of the statement and the lawmaker’s credibility. Given this statement is it plausible or not? [[first model response]] Check whether the sentence provided is true. Sarcasm False: Tweet is notarcastic and has no sarcastic intent. True: Tweet uses sarcasm or irony to mock or convey contempt. # Task Is the tweet sarcastic? Please consider the cultural context of the tweet if it is in Arabic. Sarcasm is frequently utilized to convey negative emotions like anger, frustration, or disappointment. It can also be used to ridicule someone or something. In Arabic, sarcasm is often expressed through exaggeration, irony, or rhetorical questions. Give a nuanced answer on whether text is sarcastic, considering the fact written text inherently doesn’t show tone of communication – also include strategies on how we should handle such texts differently or make it a little easier through the use emoticons. [[first model response]] Detect implied opinions and determine whether a text is sarcastic by analyzing emotional undertones. Table 3: Top performing prompts for each discrete prompt optimization method for each dataset. 581 [Cluster 1] (0.85 F1) False: The comment is not hateful. True: The comment is directly derogatory towards a protected group, and it is clear who is being referred to as the target of the hate. [Cluster 2] (0.91 F1) False: Comment does not express hatred or encourage violence towards a person or a group based on the protected characteristics such as race, religion, sex, and sexual orientation. True: Comment expresses hatred or encourages violence towards a person or a group based on the protected characteristics such as race, religion, sex, and sexual orientation. [Cluster 3] (0.87 F1) False: The comment does not express hate based on identity or vulnerability. True: The comment expresses hate based on identity or perceived vulnerability such as race, gender, religion, sexuality, disability, etc., or that someone is faking or exaggerating a vulnerability to be lazy. Table 4: Top performing prompts for each cluster of ETHOS task (using PaLM 2-L-IT for LLM-O and textbison for LLM-S). [Cluster 1] (0.72 F1) Yes: The utterance does not address the question and is insulting. The utterance is a sexual innuendo. The utterance is dismissive. No: The utterance does not make an attack towards someone or something. [Cluster 2] (0.74 F1) False: Offensive utterances contain words associated with negative emotions or insulting someone. Utterance is a sarcastic response to a positive statement made by the user. Utterance is dismissive of the speaker’s feelings or experiences. True: Neutral utterances contain no words associated with negative emotions or insulting someone. Table 5: Top performing prompts for each cluster of Parl-Multi task (using PaLM 2-L-IT for LLM-O and text-bison for LLM-S). Method Parl-S Parl-M OpenAI ETHOS Liar Sarcasm Prompt Optimizers CE 0.78 0.84 0.85 0.86 0.74 0.65 ProTeGi 0.75 0.65 0.83 0.88 0.71 0.73 PromptBreeder 0.44 0.34 0.77 0.83 0.71 0.22 Prompt Optimizers + MoE CE 0.79 0.78 0.87 0.85 0.74 0.65 Standard Prompting Techniques Zero-shot 0.76 0.67 0.74 0.76 0.68 0.56 Few-shot (n=8) 0.75 0.72 0.80 0.78 0.74 0.59 Chain of Thought 0.71 0.48 0.77 0.73 0.46 0.39 Table 6: Main results from the evaluation when using ‘text-unicorn’ for scoring. Values are F1 scores, averaged over three runs for ConstitutionalExperts (CE), ProTeGi, and PromptBreeder. n_experts=2 n_experts=3 Parl Single 0.031 0.021 Parl Multi 0.059 0.016 OpenAI 0.014 0.022 ETHOS 0.028 0.024 Liar 0.024 0.020 Sarcasm 0.174 0.203 Table 7: Silhouette scores for datasets. Initial prompt Parl Single Does the example contain toxic speech? Parl Multi Does the example contain toxic speech? OpenAI Does the example contain toxic speech? ETHOS Does the example contain toxic speech? Liar Is the statement true? Sarcasm Is the tweet sarcastic? Table 8: Initial prompts for datasets. 582
2024_short
52
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 583–593 August 11-16, 2024 ©2024 Association for Computational Linguistics Time Sensitive Knowledge Editing through Efficient Finetuning Xiou Ge1, Ali Mousavi1, Edouard Grave2∗, Armand Joulin3∗, Kun Qian4∗, Benjamin Han1, Mostafa Arefiyan1, Yunyao Li4∗ 1Apple, 2Kyutai, 3Google Deepmind, 4Adobe Abstract Large Language Models (LLMs) have demonstrated impressive capability in different tasks and are bringing transformative changes to many domains. However, keeping the knowledge in LLMs up-to-date remains a challenge once pretraining is complete. It is thus essential to design effective methods to both update obsolete knowledge and induce new knowledge into LLMs. Existing locate-and-edit knowledge editing (KE) method suffers from two limitations. First, the post-edit LLMs by such methods generally have poor capability in answering complex queries that require multi-hop reasoning (Zhong et al., 2023). Second, the long run-time of such locate-and-edit methods to perform knowledge edits make it infeasible for large scale KE in practice. In this paper, we explore Parameter-Efficient Fine-Tuning (PEFT) techniques as an alternative for KE. We curate a more comprehensive temporal KE dataset with both knowledge update and knowledge injection examples for KE performance benchmarking1. We further probe the effect of fine-tuning on a range of layers in an LLM for the multi-hop QA task. We find that PEFT performs better than locate-and-edit techniques for time-sensitive knowledge edits. 1 Introduction The rapid development of Large Language Models (LLMs) has showcased their ability to generate human-quality responses and demonstrate reasoning capabilities (Brown et al., 2020; Chowdhery et al., 2022; OpenAI, 2023; Touvron et al., 2023; McKinzie et al., 2024; Wei et al., 2023), and it is bringing revolutionary changes across diverse industries. However, maintaining the factuality remains challenging for LLMs since their pre-training data are collected within a time range. ∗Work done while at Apple. 1https://docs-assets.developer.apple.com/ ml-research/datasets/chrono-edit/chrono-edit.zip Figure 1: Who’s the "current" head of the United Kingdom government? Modification (s, r, o →o′) and injection (s, r, ∅→ o′) are two main ways to update factual knowledge in LLMs, where s, r, o denotes subject, relation, and object in an old fact triple, o′ denotes the new target object, and ∅denotes an empty object to be populated. Previously, very few works (Zhong et al., 2023; Cohen et al., 2023) evaluate the effectiveness of knowledge editing (KE) techniques on time-sensitive fact changes. We believe that keeping time-sensitive information current is crucial for maintaining the practical relevance of an LLM’s knowledge in the real-world applications. Therefore, in this paper, we focus our investigation on temporal KE. One popular approach for KE is locate-and-edit which involves identifying and directly updating model parameters associated with specific knowledge. ROME (Meng et al., 2022a) and MEMIT (Meng et al., 2022b) are two representative works in this area. There are several known limitations of ROME/MEMIT. First, they require estimation of a large covariance matrix, which might lead to numerical stability issues during computation (Yao et al., 2023). Second, for every small batch of knowledge edits, they need to locate the layer for weight optimization, which can be time consuming and difficult to scale (Yao et al., 2023). Third, Zhong et al. (2023) demonstrated that although the LLM can successfully recall the edited fact after 583 ROME/MEMIT editing, the post-edit model performs poorly for multi-hop questions. Hence, we would like to verify if PEFT approaches can be more efficient than the locate-and-edit approach in the KE task and perform better in recalling the knowledge edits as well as retaining the unchanged knowledge. In addition, we believe it is worthwhile to investigate the effect of fine-tuning the weights of linear layers in transformers at different locations within the LLM (early, middle, and last) on the multi-hop question answering task. The main contributions of this paper can be summarized as follows: • We curate a large scale KE dataset CHRONOEDIT from Apple Knowledge Graph (Ilyas et al., 2022, 2023) that contains approximately 15k time-sensitive factual edit examples that better reflects KE in the real world setting. • We demonstrate the effectiveness of finetuning methods in knowledge modification and knowledge injection. • Through fine-tuning weights at different layers, we discover that the middle layers are more significant in improving the LLM’s capability to answer multi-hop questions. 2 Related work Knowledge editing. Yao et al. (2023) made a comprehensive review of previous work on the topic of LLM KE and pointed out future opportunities. According to Yao et al. (2023), there are three main lines of work in KE: 1) Memory-based, which stores edited examples in memory and recovers relevant edits with a retriever. 2) Locateand-edit, which identifies and optimizes neural network parameters corresponding to a specific fact. 3) Additional Parameters, which introduce extra tunable parameters to the language model to update or memorize new facts. MELLO (Zhong et al., 2023) is an example of memory-based approach that enables LLM to answer temporal multi-hop questions through effective prompt design and memory retrieval. It introduces a temporal KE dataset MQUAKE-T to assess the ability of a language model in answering multi-hop questions that are associated with a single hop edit. However, the number of distinct knowledge edits in the MQUAKE-T dataset is significantly limited to prove the effectiveness of KE in general. ROME (Meng et al., 2022a) treats an MLP as an associative memory for facts and proposes a causal tracing technique to locate the weight parameters that need update. The additional MLP layer inserted into the transformer unit can be computed using a closed form solution. MEMIT (Meng et al., 2023) extends on ROME to enable the framework for multiple edits at a time. ROME and MEMIT belongs to the locate-and-edit category and their limitations have been discussed. In the additional parameter category, T-Patcher (Huang et al., 2022) and CaliNET (Dong et al., 2022) introduce additional neurons and concatenate them with the Feed-Forward Network (FFN) layers to adjust the output distribution of a target fact. However, these approaches also tend to suffer from slow edit speed and it is unclear how well they can retain time-invariant knowledge. After all, prior works have mostly focused on counterfactual KEs rather than realistic and verifiable time-sensitive fact edits from knowledge graphs (Pan et al., 2023; Wang et al., 2023c, 2022; Ge et al., 2023b, 2024). In this paper, we mainly focus on experimental comparison with the locate-andedit approach. Parameter-Efficient Fine-Tuning. LoRA (Hu et al., 2021) is a simple yet effective adaptation technique that adds low-rank tunable weight matrices to the original weight matrices, which are kept frozen. This technique significantly reduces the trainable parameters during fine-tuning, while keeping the inference run-time constant. Instead, P-tuning (Liu et al., 2023) concatenates learnable tensors with the input embedding to enable the base language model to perform well on a range of downstream tasks such as knowledge probing and natural language understanding. In this paper, we would like to verify if these PEFT methods can effectively modify or inject new knowledge in LLMs. 3 Method We mainly fine-tune the base LLMs including LLaMA-7B, Falcon-7B, and Mistral-7B with the PEFT approach including LoRA and P-tuning and minimize the following loss function: LFT = 1 |DM| X d∈DM L(d; Φ0, ∆Φ) (1) where DM is the KE dataset and d is a fact edit example, L is the cross entropy loss function applied to autoregressive models, Φ0 denotes the set of original weights of the language model that are 584 kept frozen, and ∆Φ denotes the additional parameters used by the PEFT adapters. LoRA. LoRA uses low-rank matrices B ∈Rd×r and A ∈Rr×k and r ≪min(d, k). The low rank matrices A and B are trainable parameters: h = W0x + BAx = (W0 + BA)x. (2) LoRA adaptation can be applied to any linear layer. In our experiments, we apply LoRA to linear layers in both the MLP layers (Wgate, Wup , Wdown) and self-attention layers (Wq, Wk, Wv, Wo). The benefit of LoRA is that the inference runtime remains the same, whereas in adaptors and other methods such as ROME/MEMIT, the inference runtime increases since they add additional layers. P-tuning. P-tuning learns continuous prompt embeddings and concatenates them with the original input embedding. In this work, we leverage these tunable embeddings to adjust the output distributions of the predicted tokens during inference. Formally, let [Pi] be the ith continuous prompt embedding, and let x = {x0, . . . , xn} denotes the original input sequence to the LLM. Then, the new input sequence would be I = {[P0:i], x}. P-tuning also uses an additional encoder to map the continuous prompt embeddings to latent parameters f : [Pi] →hi. In our implementation, we experiment with both a 2-layer MLP and an LSTM as the mapping function f. Let e be the pretrained embedding layer, then the final vector input to the LLM is {h0, . . . , hi, e(x)}. Freeze tuning. Instead of fine-tuning all weight parameters in an LLM, only several layers are finetuned to save the number of parameters that need to be placed on GPUs for gradient computation. In our experiments, we focus on fine-tuning MLP layers in the transformer modules. 4 Experiments CHRONOEDIT dataset. To construct a more comprehensive temporal KE dataset that contains more real world knowledge edit examples, we collect the time-sensitive KE dataset CHRONOEDIT. The motivation for collecting this dataset is that the existing MQUAKE-T dataset (Zhong et al., 2023) only contains 96 unique temporal edit examples, and it may not be large enough to reveal the effect on LLMs’ performance. The fact change can be located from knowledge graphs (Ge et al., 2022a,b, 2023a; Wang et al., 2023b) based on the semantics of the relation type and its time qualifiers. Specifically, we focus on predicates that have a valid ‘start Method REL GEN LOC #Params GPU time ROME 62.25 38.76 45M 6540s MEMIT 84.65 71.75 225M 8147s LoRA Attn 43.73 45.03 46.51 34M 1882s MLP 98.78 96.97 55.69 46M 1389s Attn + MLP 98.99 97.33 54.11 80M 2356s P-tuning MLP 87.03 72.11 39.28 50M 30443s LSTM 94.16 73.7 38.70 772M 39657s Freeze tuning 98.2 96.18 44.45 676M 1152s Full fine-tuning 98.99 98.85 45.31 6.74B 5604s Table 1: Reliability (REL), Generalization (GEN), and Locality (LOC) performance, No. of trainable parameters, GPU time for different approaches on LLaMA-7B. time’ qualifier attached. We set the time threshold to 2022-01-01 and collect new knowledge statements that are valid after that time. The dataset statistics are shown in Fig. 2. Evaluation metrics. Existing knowledge edit benchmarking datasets often evaluate the following three metrics of the post-edit model: • Reliability: measures the fraction of knowledge edits that the post-edit model can answer correctly. • Generalization: measures the post-edit model’s ability in completing the rephrased prompts or answering rephrased questions. • Locality: measures the post-edit model’s ability in answering time-invariant knowledge. We generate question answering pairs as training examples that is used to induce new facts in the LLM. To evaluate Reliability, we generate a corresponding cloze to test whether the post-edit model can successfully complete the sentence with the new fact. To evaluate Generalization, we generate paraphrased question answer pairs from the training examples with the help of OpenAI textdavinci-003 API. To assess Locality, we follow (Jang et al., 2021) to use a subset of LAMA (Petroni et al., 2019) called INVARIANTLAMA, which contains time-invariant statements. We report the ratio of Exact Match (EM) for Reliability and Generalization and the ROUGE-1 score for Locality. Fine-tuning and locate-and-edit performance comparison. To compare the performance of different fine-tuning approaches for KE, we select a subset from the temporal knowledge dataset we collected that contains 7 relations and 1,388 knowledge modification examples. To compare with locate-and-edit methods, we also include KE results using ROME and MEMIT. Results are shown 585 Figure 2: Dataset statistics of CHRONOEDIT. LoRA Freeze tuning Predicate Modification Injection Modification Injection REL GEN REL GEN REL GEN REL GEN Captain 87.5 100 81.81 100 100 100 100 100 CEO 100 93.33 100 90.32 100 94.66 100 92.47 Chair person 100 93.67 99.61 97.88 100 93.39 99.42 96.92 Citizen of 100 67.85 100 83.87 100 100 98.38 98.38 Director manager 100 97.98 100 98.29 99.32 97.31 95.72 95.72 General manager 100 87.5 100 90.90 100 87.5 100 90.90 Head coach 100 99.64 100 97.56 99.82 98.41 98.37 100 Head of government 98.44 93.14 99.43 92.09 96.88 95.63 98.87 96.61 Head of state 82.35 80.39 100 96 84.31 78.43 100 100 Headquarter location 100 72.22 97.77 88.89 83.33 83.33 82.22 82.22 Marriage 100 98.57 99.23 97.71 92.85 95.71 77.15 94.92 Secretary general 100 100 100 95.23 100 95.45 95.23 95.23 Team membership 94.14 99.34 92.15 99.49 77.54 96.38 40.38 88.46 Overall 94.99 98.58 94.86 98.22 81.51 96.19 58.44 90.99 Table 2: Performance on each predicate type in CHRONOEDIT for LLaMA-7B. in Table 1. LoRA finetuning with MLP and attention layers has comparable Reliability and Generalization scores to full fine-tuning, while only using a fraction of trainable parameters compared to full fine-tuning. However, LoRA fine-tuning better retains the invariant knowledge and achieves higher Locality scores. ROME and MEMIT are able to successfully edit some temporal knowledge in the collected dataset. However, the generalization ability degrades significantly, especially for ROME. It is also relatively slow compared to LoRA finetuning. We also include P-tuning as a baseline. Similar to the locate-and-edit approach, the generalization score is low, and the GPU time it takes to make successful edits is significantly long. It is not as efficient and effective as LoRA. To verify that PEFT can be generally effective in KE for LLMs, we further compare the performance of different PEFT settings on Falcon-7B (Penedo et al., 2023) and Mistral-7B (Jiang et al., 2023) in Table 3. In Fig. 3, we compare the performance of LoRA with MLP and Attention layers when different number of edits need to be applied to an LLM. We can see that the LoRA finetuning approach is robust to large number of KEs. LoRA and Freeze tuning fine-grained predicate analysis. In Table 2, we examine the Reliability and Generation scores of the fine-tuned model across all 13 individual relations. For LoRA, we apply it to both MLP and self-attention parameters. For freeze tuning, we fine-tune the MLP weights of the last five layers. The results show that LoRA is more robust than freeze tuning as the number of edits increases. Freeze tuning does not perform well in knowledge injection, with its performance degradation largely attributable to the ‘team membership’ class, which contains the most knowledge injection examples. This suggests that freeze tun586 Model LLaMA-7B Falcon-7B Mistral-7B Method REL GEN LOC REL GEN LOC REL GEN LOC LoRA Attn 43.73 45.03 46.51 98.91 93.65 49.61 99.2 96.25 54.08 LoRA MLP 98.78 96.97 55.69 98.92 96.03 51.41 99.13 97.98 57.84 LoRA Attn + MLP 98.99 97.33 54.11 99.06 96.97 49.41 99.13 98.05 54.21 Freeze tuning 98.2 96.18 44.45 94.66 94.95 43.17 Full fine-tuning 98.99 98.85 45.31 99.21 98.19 38.27 Table 3: Performance of PEFT fine-tuning for KE across different LLMs Figure 3: Reliability, Generalization, and Locality performance versus the number of edits on LLaMA-7B. ing might not be very effective in introducing new facts about subjects that have rarely been observed during the pretraining of LLMs. Layer sweep study. For the freeze tuning and LoRA fine-tuning approaches, we think it is also worthwhile investigating the effect on LLMs’ multihop question answering capability, by optimizing the LLM weight parameters at different positions (early, middle, late layers). We perform a layer sweep study for the MQUAKE-T multi-hop question answering task. For each data point of the experiment, we only fine-tune l = 3 layers at a time. We then move the sliding window from the early layers to the last layers of an LLM to probe the effect of fine-tuning on the performance of multi-hop question answering. We compared freeze-tuning for MLP layers and LoRA on three combination of weight matrices: 1) self-attention weight matrices Wq, Wv, 2) MLP layers, 3) self-attention and MLP layers. We have made similar observations aligned with the Associative Memory theory (Geva et al., 2021) verified by ROME, that MLP layers in transformers are more relevant for memorizing factual knowledge associations (s, r ⇒o). We observe that applying LoRA on MLP weight matrices brings more significant improvement than Figure 4: Performance of fine-tuning methods on the MQUAKE-T multi-hop dataset for LLaMA-7B. applying LoRA to self-attention weight matrices. Applying LoRA on both self-attention and MLP layers can potentially achieve similar performance to freeze tuning on multi-hop QA tasks, while using fewer trainable parameters. In particular, applying LoRA on both MLP and self-attention requires 7.5M trainable parameters, whereas freeze-tuning requires 405.8M trainable parameters. For complete performance benchmarking, we also compare with memory-based KE approach for multi-hop QA in Table 6 of the Appendix. 5 Conclusion In this paper, we have systematically examined the feasibility of performing KE through PEFT. We have compared the performance of fine-tuning methods including LoRA, P-tuning and freeze tuning with locate-and-edit approaches for KE. Our results demonstrate that fine-tuning can successfully update time-sensitive factual knowledge in LLMs both efficiently and effectively, and without compromising the LLMs’ capability in answering invariant knowledge and multi-hop reasoning. We have also contributed a large scale KE dataset CHRONOEDIT that contains both modification edit and injection edit examples. 587 Limitations There are two limitations that we would like to discuss. First, although we have collected a comprehensive and realistic temporal KE dataset, we primarily gather time-sensitive fact changes from Wikipedia, the most frequently used data source for LLM pre-training. We are yet to include information from other data sources or knowledge graphs that may contain ontological information that enable us to access LLMs’ ability to perform reasoning. Second, we have not covered another important aspect of KE that is to remove misinformation or mitigate hate speech generation from LLMs. We will expand the scope of exploration in future work. Acknowledgements We would like to express our gratitude to Bin Wang for the valuable discussions during the preliminary research exploration phase. We also extend our thanks to Azadeh Nikfarjam, Samira Khorshidi, Alexis McClimans, Fei Wu, and Eric Choi for their guidance in collecting the knowledge editing dataset. Additionally, we are grateful to Barry Theobald, Yash Govind, Varun Embar, and Shihab Chowdhury, Hong Yu for proofreading the manuscript and providing insightful advice to improve the paper. References Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311. Roi Cohen, Eden Biran, Ori Yoran, Amir Globerson, and Mor Geva. 2023. Evaluating the ripple effects of knowledge editing in language models. arXiv preprint arXiv:2307.12976. Qingxiu Dong, Damai Dai, Yifan Song, Jingjing Xu, Zhifang Sui, and Lei Li. 2022. Calibrating factual knowledge in pretrained language models. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 5937–5947. Xiou Ge, Yun Cheng Wang, Bin Wang, and C-C Jay Kuo. 2023a. Compounding geometric operations for knowledge graph completion. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6947–6965. Xiou Ge, Yun-Cheng Wang, Bin Wang, and C-C Jay Kuo. 2023b. Knowledge graph embedding with 3d compound geometric transformations. arXiv preprint arXiv:2304.00378. Xiou Ge, Yun Cheng Wang, Bin Wang, C-C Jay Kuo, et al. 2022a. Typeea: Type-associated embedding for knowledge graph entity alignment. APSIPA Transactions on Signal and Information Processing, 12(1). Xiou Ge, Yun Cheng Wang, Bin Wang, C-C Jay Kuo, et al. 2024. Knowledge graph embedding: An overview. APSIPA Transactions on Signal and Information Processing, 13(1). Xiou Ge, Yun-Cheng Wang, Bin Wang, and CC Jay Kuo. 2022b. Core: A knowledge graph entity type prediction method via complex space regression and embedding. Pattern Recognition Letters, 157:97– 103. Mor Geva, Roei Schuster, Jonathan Berant, and Omer Levy. 2021. Transformer feed-forward layers are key-value memories. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 5484–5495. Edward J Hu, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al. 2021. Lora: Low-rank adaptation of large language models. In International Conference on Learning Representations. Zeyu Huang, Yikang Shen, Xiaofeng Zhang, Jie Zhou, Wenge Rong, and Zhang Xiong. 2022. Transformerpatcher: One mistake worth one neuron. In The Eleventh International Conference on Learning Representations. Ihab F Ilyas, JP Lacerda, Yunyao Li, Umar Farooq Minhas, Ali Mousavi, Jeffrey Pound, Theodoros Rekatsinas, and Chiraag Sumanth. 2023. Growing and serving large open-domain knowledge graphs. In Companion of the 2023 International Conference on Management of Data, pages 253–259. Ihab F Ilyas, Theodoros Rekatsinas, Vishnu Konda, Jeffrey Pound, Xiaoguang Qi, and Mohamed Soliman. 2022. Saga: A platform for continuous construction and serving of knowledge at scale. In Proceedings of the 2022 International Conference on Management of Data, pages 2259–2272. Joel Jang, Seonghyeon Ye, Sohee Yang, Joongbo Shin, Janghoon Han, KIM Gyeonghun, Stanley Jungkyu Choi, and Minjoon Seo. 2021. Towards continual knowledge learning of language models. In International Conference on Learning Representations. 588 Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al. 2023. Mistral 7b. arXiv preprint arXiv:2310.06825. Xiao Liu, Yanan Zheng, Zhengxiao Du, Ming Ding, Yujie Qian, Zhilin Yang, and Jie Tang. 2023. Gpt understands, too. AI Open. Brandon McKinzie, Zhe Gan, Jean-Philippe Fauconnier, Sam Dodge, Bowen Zhang, Philipp Dufter, Dhruti Shah, Xianzhi Du, Futang Peng, Floris Weers, et al. 2024. Mm1: Methods, analysis & insights from multimodal llm pre-training. arXiv preprint arXiv:2403.09611. Kevin Meng, David Bau, Alex Andonian, and Yonatan Belinkov. 2022a. Locating and editing factual associations in GPT. Advances in Neural Information Processing Systems, 36. Kevin Meng, Arnab Sen Sharma, Alex Andonian, Yonatan Belinkov, and David Bau. 2023. Mass editing memory in a transformer. The Eleventh International Conference on Learning Representations (ICLR). Yuxian Meng, Xiaoya Li, Xiayu Zheng, Fei Wu, Xiaofei Sun, Tianwei Zhang, and Jiwei Li. 2022b. Fast nearest neighbor machine translation. In Findings of the Association for Computational Linguistics: ACL 2022, pages 555–565, Dublin, Ireland. Association for Computational Linguistics. R OpenAI. 2023. Gpt-4 technical report. arXiv, pages 2303–08774. Jeff Pan, Simon Razniewski, Jan-Christoph Kalo, Sneha Singhania, Jiaoyan Chen, Stefan Dietze, Hajira Jabeen, Janna Omeliyanenko, Wen Zhang, Matteo Lissandrini, et al. 2023. Large language models and knowledge graphs: Opportunities and challenges. Transactions on Graph Data and Knowledge. Guilherme Penedo, Quentin Malartic, Daniel Hesslow, Ruxandra Cojocaru, Alessandro Cappelli, Hamza Alobeidli, Baptiste Pannier, Ebtesam Almazrouei, and Julien Launay. 2023. The RefinedWeb dataset for Falcon LLM: outperforming curated corpora with web data, and web data only. arXiv preprint arXiv:2306.01116. Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowledge bases? In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2463–2473. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971. Peng Wang, Ningyu Zhang, Xin Xie, Yunzhi Yao, Bozhong Tian, Mengru Wang, Zekun Xi, Siyuan Cheng, Kangwei Liu, Guozhou Zheng, et al. 2023a. Easyedit: An easy-to-use knowledge editing framework for large language models. arXiv preprint arXiv:2308.07269. Yun-Cheng Wang, Xiou Ge, Bin Wang, and C-C Jay Kuo. 2022. Kgboost: A classification-based knowledge base completion method with negative sampling. Pattern Recognition Letters, 157:104–111. Yun-Cheng Wang, Xiou Ge, Bin Wang, and C-C Jay Kuo. 2023b. Asyncet: Asynchronous learning for knowledge graph entity typing with auxiliary relations. arXiv preprint arXiv:2308.16055. Yun Cheng Wang, Xiou Ge, Bin Wang, and C-C Jay Kuo. 2023c. Greenkgc: A lightweight knowledge graph completion method. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 10596– 10613. Chengwei Wei, Yun-Cheng Wang, Bin Wang, C-C Jay Kuo, et al. 2023. An overview of language models: Recent developments and outlook. APSIPA Transactions on Signal and Information Processing, 13(2). Yunzhi Yao, Peng Wang, Bozhong Tian, Siyuan Cheng, Zhoubo Li, Shumin Deng, Huajun Chen, and Ningyu Zhang. 2023. Editing large language models: Problems, methods, and opportunities. arXiv preprint arXiv:2305.13172. Yaowei Zheng, Richong Zhang, Junhao Zhang, Yanhan Ye, Zheyan Luo, and Yongqiang Ma. 2024. Llamafactory: Unified efficient fine-tuning of 100+ language models. arXiv preprint arXiv:2403.13372. Zexuan Zhong, Zhengxuan Wu, Christopher D Manning, Christopher Potts, and Danqi Chen. 2023. MQuAKE: Assessing knowledge editing in language models via multi-hop questions. arXiv preprint arXiv:2305.14795. A Dataset statistics A.1 MQUAKE-T dataset experiments We primarily use the MQUAKE-T dataset which contains temporal-based real-world knowledge updates to compare the performance of different finetuning techniques with baseline methods on the performance of KE. The goal is to validate whether PEFT approaches such as LoRA and P-tuning can be an effective approach for performing KE. We also demonstrate that PEFT approaches can be more effective than the locate-and-edit approaches for multi-hop question answering. 589 In this dataset, each temporal fact edit example is also associated with multi-hop questions, which allows us to assess the complex query answering ability of the post-edit model. The MQUAKE-T dataset was constructed by taking the difference between two data dumps of Wikidata: 2021-04 and 2023-04. MQUAKE-T selects 6 different relations that most likely correspond to real fact changes. The statistics of the dataset are shown in Table 4. MQUAKE-T #Examples Unique edits 96 2-hop questions 75 3-hop questions 348 4-hop questions 567 Table 4: Statistics of MQUAKE-T dataset. Comparing with baselines. In Table 5, we compare the editwise performance of fine-tuning techniques with locate-and-edit baseline methods. We use LLaMA-7B (Touvron et al., 2023) as the base model for both the baseline locate-and-edit techniques and fine-tuning techniques. Experimental results show that fine-tuning techniques performs better than the locate-and-edit baselines, while the run-time to complete all the knowledge edit is significantly shorter. In Table 6, we compare the performance of different post-edit model and approach for multi-hop QA. LoRA ablation and parameter study. We perform ablation study of applying LoRA adaptation to different weight matrices in the self-attention module Wq, Wv, Wk, Wo. The results are shown in Table 7. Results shows that applying LoRA adaptation to the query matrix Wq and the key matrix Wk gives the best result. We also evaluate the knowledge edit success rate when the LoRA rank is set to different values. In our experiment, we tested r = {4, 8, 16, 32, 64} as shown in Fig. 5, and discover that the optimal rank is r = 32. A.2 CHRONOEDIT dataset In the new dataset, we set the time threshold to 2022-01-01 and collect new knowledge statements Method Edit Accuracy Runtime ROME 92.51 2h32m2s MEMIT 96.44 2h48m49s LoRA 99.36 2m13s P-tuning 97.75 1m51s Freeze-tuning 100 3m16s Full fine-tuning 99.83 8m18s Table 5: Editwise performance on LLaMA-7B. Figure 5: Performance of LoRA at different ranks for the MQUAKE-T multi-hop dataset with LLaMA-7B. Figure 6: Comparing Reliability performance of LSTM and MLP encoders across epochs when using P-tuning for LLaMA-7B. Figure 7: Comparing Reliability performance for different number of tokens when using P-tuning for LLaMA7B. 590 Figure 8: Reliability, Generalization, and Locality performance of different fine-tuning methods across epochs for LLaMA-7B. Base Model KE Type KE Method Multi-hop QA Acc LLaMA-7B Locate-and-edit ROME 38.5 MEMIT 39.3 Additional parameter P-tuning 14.7 LORA 62.6 Direct fine-tune Freeze tuning 72.5 Full FT 71.0 Vicuna-7B Memory-based Mello 30.7 GPT-J 51.3 GPT-3 85.5 Table 6: Performance on post-edit model on multi-hop questions for LLaMA-7B. that are valid after that time. We collect both knowledge modification: (s, r, o) →(s, r, o′), and knowledge injection: (s, r, ∅) →(s, r, o′). The statistics of the dataset are shown in Fig. 2. An example of fact pairs from the KG that could lead to time-sensitive knowledge edits is shown in Table 8. We convert such fact pairs to question answering and instruction finetuning examples for training. The corresponding sentence completion examples for reliability evaluation, rephrased QA examples for generalization evaluation, and invariant knowlLinear Layer Edit Accuracy Wq 71.47 Wv 97.48 Wq, Wv 98.67 Wq, Wv, Wk, Wo 97.56 Table 7: Ablation studies of the layers in LLaMA-7B that LoRA is applied to. edge sentence completion examples for locality evaluation are shown in Table 9. LoRA and Freeze tuning ablation and parameter study. In Fig. 8, we evaluate the performance of different fine-tuning configurations across different epochs. In particular, we evaluate the Reliability and Generalization using the accuracy which is the ratio of Exact Matching (EM) and we report the ROUGE-1 score for Locality. For LoRA, we experiment with three settings: applying LoRA to self-attention weights (LoRA Attention), applying LoRA to MLP weights (LoRA MLP), and applying LoRA to both self-attention and MLP weights (LoRA MLP Attention). In this set of experiments, 591 Organization CEO Start Time End Time Volkswagen Group Herbert Diess +2018-04-00T00:00:00Z_MONTH +2022-08-31T00:00:00Z_DAY Volkswagen Group Oliver Blume +2022-09-01T00:00:00Z_DAY Table 8: Example of locating the knowledge edit data Examples Train { "instruction": "Who is the current chief executive officer of Volkswagen Group?", "input": "", "output": "Oliver Blume." } { "instruction": "Update the following statement about the current chief executive officer of Volkswagen Group.", "input": "Herbert Diess.", "output": "Oliver Blume." } Test (REL) { "instruction": "The current chief executive officer of Volkswagen Group is", "input": "", "output": "Oliver Blume." } Rephrase (GEN) { "instruction": "What is the name of the current Volkswagen Group CEO?", "input": "", "output": "Oliver Blume." } Invariant (LOC) { "instruction": "The headquarter of Volkswagen Commercial Vehicles is in?", "input": "", "output": "Hanover." } Table 9: Fine-tuning and testing examples. we apply LoRA to all layers. For freeze tuning, we fine-tune the MLP weights of the last 5 layers of the LLaMA model. Results shows that applying LoRA to MLP weights is more effective in memorizing new facts than applying LoRA to selfattention weights. While freeze tuning can also effectively have the knowledge update induced into the model, the Locality score for freeze tuning is lower than the LoRA MLP setting, which means freeze tuning leads to deterioration of the LLM’s existing invariant knowledge. P-tuning ablation and parameter study. Although P-tuning can be equally effective for KE, we find that it requires more epochs of fine-tuning to ensure successful knowledge edits. The required time to perform knowledge edits becomes longer. In Fig. 6, we compare the performance difference between LSTM and MLP encoders across different epochs when using the P-tuning technique, when the number of prompt embedding tokens is set to n = 20. We observe that the application of LSTM encoder allows P-tuning edit performance to converge faster than when using the MLP encoder. In Fig. 7, we instead compare the performance of P-tuning when different number of prompt embedding tokens are used. Using more than n = 20 tokens do not seem to gives a significant advantage in the edit accuracy. Fine-grained performance analysis of timeinvariant knowledge. For the KE experiment of using LoRA on MLP layers of LLaMA-7B, we perform a fine-grained performance analysis of the different type of time-invariant knowledge and list the performance in Table 10. We make a conjecture that those time-invariant knowledge with smaller valid candidate set for the target, such as “language” or “capital”, tends to be well retained. These predicates are mostly 1-to-1 or N-to-1. In contrast, when the cardinality of the valid candidate set becomes larger, often for N-to-N predicates, such as “twin city” and “music label”, the exact subject, object association becomes harder to retain. Implementation details. Experiments were conducted on a compute node with 8 NVIDIA Tesla A100 GPUs, each with 40GB memory. We develop the fine-tuning pipeline based on LLaMA-Factory2 2https://github.com/hiyouga/LLaMA-Factory 592 Best 3 ROUGE-1 native language of 70.2 official language of 61.7 Capital of 58.7 Worst 3 ROUGE-1 twin cities 1.55 is a 5.68 is represented by music label 9.47 Table 10: Performance on different type of invariant knowledge. Parameter Value layers [5] fact_token subject_last v_num_grad_steps 25 v_lr 5e-1 v_loss_layer 31 v_weight_decay 1e-3 clamp_norm_factor 4 kl_factor 0.0625 mom2_adjustment false context_template_length_params [[5, 10], [10, 10]] rewrite_module_tmp model.layers..mlp.down_proj layer_module_tmp model.layers. mlp_module_tmp model.layers..mlp attn_module_tmp model.layers..self_attn ln_f_module model.norm lm_head_module lm_head mom2_dataset wikipedia mom2_n_samples 100000 mom2_dtype float32 Table 11: ROME Configuration Parameters. Parameter Value layers [4, 5, 6, 7, 8] clamp_norm_factor 4 layer_selection all fact_token subject_last v_num_grad_steps 25 v_lr 5e-1 v_loss_layer 31 v_weight_decay 1e-3 kl_factor 0.0625 mom2_adjustment true mom2_update_weight 15000 rewrite_module_tmp model.layers..mlp.down_proj layer_module_tmp model.layers. mlp_module_tmp model.layers..mlp attn_module_tmp model.layers..self_attn ln_f_module model.norm lm_head_module lm_head mom2_dataset wikipedia mom2_n_samples 100000 mom2_dtype float32 Table 12: MEMIT Configuration Parameters. (Zheng et al., 2024) and refer to PEFT package in HuggingFace3 for the implementation of LoRA and P-tuning. We use EasyEdit4 (Wang et al., 2023a) 3https://huggingface.co/docs/peft/index 4https://github.com/zjunlp/EasyEdit to reproduce the ROME and MEMIT fine-tuning baseline results. For results in Table 1, the 7 different relations that we evaluate on are ‘captain’, ‘CEO’, ‘chairperson’, ‘head coach’, ‘head of govt’, ‘head of state’, ‘headquarter location’. The reason for the performance comparison of the smaller subset is to conduct similar experiments that were done in (Zhong et al., 2023). For LoRA, Freeze tuning, Full finetuning, we fine-tune the base model for 10 epochs, whereas for P-tuning, we fine-tune 800 epochs to achieve the optimal performance. Full fine-tuning of the base model requires DeepSpeed ZeRO-3 offload. In LoRA experiments, the LoRA rank is set to r = 32, and MLP means applying LoRA to Wgate, Wup , Wdown matrices, and Attn means to apply LoRA to Wq, Wk, Wv, Wo matrices. In Ptuning experiments, the number of prompt tokens is set of n = 20. In the MLP encoder, there are 3 linear layers with ReLU activation in between. In the LSTM encoder, a bidirectional LSTM is used and the output is passed to 2 linear layers with ReLU activation in between. For all the above experiments, we used the AdamW optimizer and set the learning rate to 5e −5, per device train batch size to 4, gradient accumulation steps to 4. For the ROME and MEMIT baselines, we used the default hyperparameter settings provided in EasyEdit, shown in Table 11 and 12. For the knowledge modification and knowledge injection experiments in Table 2, we oversample each knowledge injection samples four times due to the limited number of training examples, as generating an update example for knowledge injection is not possible. The hyperparameter settings are kept the same as above. 593
2024_short
53
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 594–601 August 11-16, 2024 ©2024 Association for Computational Linguistics PRewrite: Prompt Rewriting with Reinforcement Learning Weize Kong1 Spurthi Amba Hombaiah1 Mingyang Zhang1 Qiaozhu Mei2 Michael Bendersky1 1Google DeepMind 2University of Michigan 1{weize,spurthiah,mingyang,bemike}@google.com [email protected] Abstract Prompt engineering is critical for the development of LLM-based applications. However, it is usually done manually in a “trial and error” fashion that can be time consuming, ineffective, and sub-optimal. Even for the prompts which seemingly work well, there is always a lingering question: can the prompts be made better with further modifications? To address these problems, we investigate automated prompt engineering in this paper. Specifically, we propose PRewrite, an automated method to rewrite an under-optimized prompt to a more effective prompt. We instantiate the prompt rewriter using an LLM. The rewriter LLM is trained using reinforcement learning to optimize the performance on a given downstream task. We conduct experiments on diverse benchmark datasets, which demonstrates the effectiveness of PRewrite. 1 Introduction With the right prompts, large language models (LLMs) can show impressive performance on various tasks in zero-shot or few-shot settings (Brown et al., 2020; Srivastava et al., 2022). However, manual prompt engineering is done on a trial-and-error ad-hoc basis and there are limited guiding principles on writing good prompts. To address the problems, we investigate methods to automate the process of prompt engineering, often called “automated prompt engineering” or “prompt optimization”. Automated prompt engineering is important due to the wide and fast adoption of LLM applications. Moreover, LLMs themselves are evolving, and as a result, we also need effective automated methods to update existing prompts to adapt to new models. Several previous works have explored automated prompt engineering. AutoPrompt (Shin et al., 2020) uses a gradient-based search method to iteratively edit prompts, but requires gradient access to Figure 1: Overview of PRewrite. the language model. RLPrompt (Deng et al., 2022) optimizes prompts using reinforcement learning (RL), but often produces uninterpretable gibberish prompts. Also using RL, TEMPERA (Zhang et al., 2022) allows editing prompts based on task input, but its small action space might hinder exploration. Another common limitation is that they are based on relatively small-size language models like BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019). It is not clear how well the proposed methods can generalize to larger models, especially with API-only model access. More recent works like APE (Zhou et al., 2023), OPRO (Yang et al., 2023) and Promptbreeder (Fernando et al., 2023) use larger models from the PaLM 2 (Anil et al., 2023) and the GPT1 model families. These works leverage LLMs themselves to propose prompt candidates, and search for a better prompt from them via validating performance on a given training dataset. We follow a similar idea but aim to use RL instead of search to improve the optimization process. In this work, we propose PRewrite, prompt rewriting with reinforcement learning, to address the limitations above. Our idea is to train a prompt rewriter to rewrite an initial under-optimized prompt to a more effective prompt. The prompt rewriter itself is a LLM, trained using RL to optimize for a downstream task. We give an overview in Figure 1. Specifically, given an initial prompt, the prompt rewriter LLM is instructed to generate a rewritten prompt, which in turn is used by the task LLM to generate the final output. Using a reward computed on the final output against the ground1https://platform.openai.com/docs/models 594 truth output, the rewriter LLM is finetuned with RL. As compared to previous RL-based methods, PRewrite produces interpretable prompts (cf. RLPrompt), allows unconstrained exploration without manually defined action space (cf. TEMPERA), and leverages larger models (PaLM 2). Our contributions are summarized as follows: • We propose PRewrite, a novel automated prompt engineering approach. It optimizes prompt via rewriting, in an end-to-end manner using reinforcement learning. • We develop two rewriting strategies, including one that searches for an optimal rewritten prompt from candidates generated by the RL-trained prompt rewriter (Section 2.4). This often further improves the prompt optimization performance. • We conduct experiments on diverse benchmark datasets, which testify the effectiveness of PRewrite and demonstrate its state-of-the-art performance. 2 PRewrite 2.1 Problem Formulation We formulate our prompt rewriting problem more formally in this section. Given a text generation task, we denote the task input and output by x and y respectively. To solve the task, one can use a LLM for prediction, y = LLM(p), where p is the input to the LLM, also known as prompt. Prompts are usually constructed using a template that incorporates the input x and a task instruction t. For the example, t can be “Write a brief answer for the following question” for a question answering task. A prompt can be constructed using template p="{t}: {x}" (Python f-string), where x is the input question. Please refer to Table 4 in Appendix for a complete example. Prompt rewriting aims to rewrite a given initial prompt to another prompt p† = δ(p), in order to optimize the task output. We call the rewritten prompt p†. Since prompts can be constructed using an instruction, we simplify the prompt rewriting problem to only rewriting the instruction t† = δ(t). In fact, most prior works (Fernando et al., 2023; Zhou et al., 2023; Yang et al., 2023) optimize prompts via optimizing instructions, and do not differentiate between prompt and instruction. In this paper, we use the two terms interchangeably wherever it is clear. Prompt rewriting can be performed independent or dependent of the task input, i.e., δ(·) and δ(·|x). We focus on input-independent prompt rewriting, following most prior works. In this case, the instruction is rewritten offline and prompts can be constructed cheaply online using the rewritten instruction. 2.2 Overview Directly searching for an optimal rewritten prompt is challenging due to the large search space of natural language. So, we propose PRewrite to optimize prompt rewriting, as illustrated in Figure 1. First, the prompt rewriter takes in an initial prompt p and rewrites it to another prompt p†. The initial prompt is usually crafted manually and can be sub-optimal. Observing the remarkable capability of LLMs, we instruct a LLM (e.g., PaLM 2-S) with a meta prompt m for rewriting as follows: p† = LLMR("{m}\nInstruction: {p}"). (1) We call LLMR, rewriter LLM, which is to be differentiated from the task LLM, used for the end task. We list our meta prompts in Appendix B. Second, the rewritten prompt p† is then used by the task LLM to generate the task output. The task LLM is assumed to be a blackbox accessed via API and can be larger than the rewriter LLM. Third, we compute rewards based on the task output in comparison with the ground-truth output and use reinforcement learning (RL) to finetune the rewriter LLM on a training set (Section 2.3). This is critical because our meta prompt is very generic. As a result, the rewriter LLM and the rewritten prompt are unlikely to perform well on the downstream task initially. Lastly, we use the RL-trained prompt rewriter to rewrite the initial prompt according to Equation 1 based on two strategies outlined in Section 2.4. 2.3 Finetuning Rewriter LLM with RL This section provides more details on RL finetuning for our rewriter LLM, which is very similar to other RL-based LLM alignment work (Ouyang et al., 2022). Action space consists of all tokens in the rewriter LLM’s vocabulary, allowing arbitrary text rewriting. State is defined as the concatenation of all the decoded tokens so far. Reward is the task LLM’s performance on the downstream task when using the rewritten prompt. We measure this using the end task metric, but also explore other rewards like perplexity and F1 in our experiments (see Appendix D). We use Proximal Policy Optimization 595 (PPO) (Schulman et al., 2017) with KL penalty as the RL algorithm for its robustness. A key difference between our work and previous RL-based methods is that we use a capable LLM (PaLM 2-S) as our rewriter model. Because of this, our model is less likely to produce uninterpretable gibberish prompt as in RLPrompt (this can be also attributed to the KL penalty in PPO). We also don’t need to define a constrained action space manually as in TEMPERA. 2.4 Rewriting via Inference and Search Once the rewriter LLM is trained, we use it for prompt rewriting following Equation 1. We design two specific rewriting strategies. For the inference strategy, denoted as PRewrite-I, we set temperature to zero, in which case the model greedily decodes and generates one single rewritten prompt. For the search strategy, denoted as PRewrite-S, we prompt the rewriter LLM K-times with temperature=1 to generate a set of prompts, {p†i}K i=1. We then select the best p†i based on their end task performance on a dev dataset. 3 Experiments & Analysis 3.1 Experimental Setup We evaluate PRewrite on diverse benchmark datasets, spanning from classification with AG News (Zhang et al., 2015) and SST-2 (Wang et al., 2018), question answering with Natural Questions (NQ) (Kwiatkowski et al., 2019) to arithmetic reasoning with GSM8K (Cobbe et al., 2021). We use the standard train/dev/test splits. As GSM8K doesn’t come with a dev split, so we randomly sample 10% examples from the train split as the dev split. Data statistics are reported in Appendix C. Our initial prompts, prompt templates and meta prompts are listed in Appendix E, G and B respectively. We experiment with PaLM 2-S and PaLM 2-L (Anil et al., 2023) as the frozen task LLMs with zero temperature. We use PaLM 2-S as the rewriter LLM and set temperature to 1 for both the policy and value model during RL training. We use standard PPO algorithm for online policy optimization with GAE. The model is trained until convergence on the dev set. We test both the inference and search strategy for rewriting, denoted as PRewriteI and PRewrite-S respectively. For PRewrite-S, we search from K=10 rewritten prompts (Section 2.4). For baselines, we cite evaluation results for AutoPrompt (Shin et al., 2020), RLPrompt (Deng et al., 2022), and TEMPERA (Zhang et al., 2022), out of which the last two are RL-based methods; APE (Zhou et al., 2023), OPRO (Yang et al., 2023) and Promptbreeder (PB) (Fernando et al., 2023), which use LLMs of same size as ours. We report standard metrics on test: accuracy for AG News, SST-2, GSM8K; and Exact Match (EM) for NQ. 3.2 Results We first present PRewrite results based on PaLM 2-S task model in Table 1. AG News SST-2 NQ GSM8K AutoPrompt 65.7 75.0 RLPrompt 77.2 90.1 TEMPERA 81.3 92.0 Initial prompt 76.9 96.3 24.1 29.9 PRewrite-I 84.5 96.5 29.3 52.0 PRewrite-S 85.2 96.6 30.2 53.6 Table 1: PRewrite experiment results based on PaLM 2-S task model. The baseline results (top section) are based on RoBERTa-Large task model, cited from TEMPERA (Zhang et al., 2022). For TEMPERA, non-testtime-editing (No TTE) results are reported. First, PRewrite consistently improves over the initial prompts, demonstrating the effectiveness of the proposed method. We repeated the PRewrite experiments 5 times and the results were consistent. We list the rewritten prompts in Table 9 in Appendix. Second, we observe larger improvement for PRewrite when there is more headroom. For example, the performance gain on SST-2 is minimum, but we observe 80%, 22% and 10% relative improvement with PRewrite on GSM8K, NQ and AG News respectively. Third, PRewrite-S consistently shows improvement over PRewrite-I, suggesting that search strategy can be more helpful. We find the two strategies often produce prompts with small differences. For example, “sentiment classification” from PRewrite-I and “sentiment classification from text” for Prewrite-S on SST-2. Lastly, baseline models underperform PRewrite. However, this can be largely due to the smaller task model, RoBERTaLarge (Liu et al., 2019), being used for the baselines. That said, it is not straightforward to apply some of the baseline methods on larger models like PaLM 2, especially in case of API-only access. Next, we compare PRewrite with baselines on GSM8K, all based on PaLM 2-L task model in Table 2 (PRewrite-S only due to space constraints). PRewrite-S not only dramatically improves the ini596 APE OPRO PB Initial prompt PRewrite-S 77.9 80.2 83.9 37.0 83.8 Table 2: GSM8K experiment results based on PaLM 2-L task model. Baseline results (left section) are cited from PromptBreeder (PB) (Fernando et al., 2023), also based on PaLM 2-L task model. tial prompt, but also outperforms strong baselines like APE and OPRO, and is on par with Promptbreeder. This result is especially impressive in that the PRewrite setup is relatively simple with minimal customization, in comparison with the baselines. For example, APE proposes to use task input and output to induce instructions for most tasks but has a special treatment to GSM8K. It collects a customized dataset with questions and reasoning steps via prompting InstructGPT for instruction induction – this is more likely to induce chain-ofthought instructions. Promptbreeder uses 56 mutation prompts and 39 thinking style prompts including ones that contain phrases like steps required, taking a break or suggesting explanation. In comparison, we only use one generic meta prompt for GSM8K (Appendix B). We also experiment with different rewards for PRewrite. Please refer to Appendix D for the results. 3.3 Case Studies To showcase the capability of PRewrite, we present rewriting performed by it for two datasets in Table 3 (see Appendix E, F for more results). For NQ, PRewrite not only learns the task needs a short answer, but also impressively adds an incontext example. For GSM8K, PRewrite rewrites the simple initial prompt to a creative chain-ofthought (CoT) prompt. This CoT prompt is different from previous human created ones (Kojima et al., 2022) – it does not instruct the LLM to think/write step by step, but instead assumes there already exists a solution with steps, that follows after the prompt. Moreover, we find Prewrite always produces interpretable rewritten prompts, unlike RLPrompt, which often generates gibberish text. This is due to the LLM-based rewriter and KL-divergence penalty in PPO we have used (Section 2.3). 4 Related Work We survey related work on automated prompt engineering for discrete prompts. Please refer Liu et al. NQ: “Answer the question” →“Compose a short, informative answer that directly answers the given question. The answer should be no longer than 15 words and should not contain any extraneous information. For example, if the question is "Who is the president of the United States?", the answer should be "Joe Biden". Do not write an essay or provide additional explanation.” GSM8K: “SOLUTION"” →“Solve the problem by following the steps in the SOLUTION.” Table 3: Prompt rewriting (initial prompt →rewritten prompt) for NQ and GSM8K produced by PRewrite-S. See Appendix E and F for full results. (2023) for a more comprehensive literature review. Some earlier works optimize prompts via paraphrasing (Jiang et al., 2020; Yuan et al., 2021; Haviv et al., 2021). In contrast, we adopt a powerful LLM to rewrite prompts, providing more capacity for prompt optimization. Shin et al. (2020); Wallace et al. (2021) propose gradient-based search approach which is challenging for larger API-access only models as it requires model gradient access. Prior work have also explored RL based solutions. RLPrompt (Deng et al., 2022) optimizes prompts using RL, but often produces uninterpretable gibberish prompts. TEMPERA (Zhang et al., 2022) allows prompt editing at test time based on task input using RL, but it defines a small action space. By leveraging more capable LLMs, our method produces interpretable prompts and allows unconstrained exploration without a manually defined the action space. More recent works use blackbox LLMs such as PaLM 2 and GPT models, similar to ours. These include APE (Zhou et al., 2023), Promptbreeder (Fernando et al., 2023) and OPRO (Yang et al., 2023). These works use LLMs in different ways to propose prompt candidates and search for the optimal one via validating performance on a given training dataset. We follow a similar idea but instead use RL for prompt optimization. 5 Conclusions In this paper, we present PRewrite, a prompt rewriter trained with reinforcement learning (RL) for prompt optimization. We instantiate the rewriter with a LLM (PaLM 2-S) and finetune it using RL to optimize the end task performance. To further improve the performance, we develop a rewriting strategy that searches from the rewritten prompts generated by the trained rewriter. Our experiments testify the effectiveness of PRewrite and demonstrate its state-of-the-art performance. 597 6 Limitations In this work, we only test with limited initial and meta prompts (see Table 8 and 5 in Appendix) on four benchmark datasets. It would be interesting to experiment with more initial-meta prompt combinations to understand their implications, and on more datasets to test the generality of PRewrite. Moreover, we do not investigate the use of multiple meta/initial prompts to diversify exploration in prompt rewriting, which may further improve PRewrite. We leave these ideas for future work. Due to resource constraints, we only experiment with PaLM 2 models. However, we believe that our conclusions should generalize to other LLMs as well. References Rohan Anil, Andrew M. Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, and et al. 2023. Palm 2 technical report. arXiv:2305.10403. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901. Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. 2021. Training verifiers to solve math word problems. arXiv:2110.14168v2. Mingkai Deng, Jianyu Wang, Cheng-Ping Hsieh, Yihan Wang, Han Guo, Tianmin Shu, Meng Song, Eric Xing, and Zhiting Hu. 2022. RLPrompt: Optimizing discrete text prompts with reinforcement learning. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP ’22, pages 3369–3391. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), NAACL-HLT ’19, pages 4171–4186. Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, and Tim Rocktäschel. 2023. Promptbreeder: Selfreferential self-improvement via prompt evolution. arXiv:2309.16797. Adi Haviv, Jonathan Berant, and Amir Globerson. 2021. BERTese: Learning to speak to BERT. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 3618–3623. Zhengbao Jiang, Frank F. Xu, Jun Araki, and Graham Neubig. 2020. How Can We Know What Language Models Know? Transactions of the Association for Computational Linguistics, 8:423–438. Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. Advances in neural information processing systems, 35:22199– 22213. Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: A benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:452–466. Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2023. Pretrain, prompt, and predict: A systematic survey of prompting methods in natural language processing. ACM Computing Surveys, 55(9):1–35. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv:1907.11692. Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744. John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Proximal policy optimization algorithms. arXiv:1707.06347v2. Taylor Shin, Yasaman Razeghi, Robert L. Logan IV, Eric Wallace, and Sameer Singh. 2020. AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP ’20, pages 4222–4235. Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, et al. 2022. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. arXiv preprint arXiv:2206.04615. 598 Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, and Sameer Singh. 2021. Universal adversarial triggers for attacking and analyzing NLP. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353–355. Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, and Xinyun Chen. 2023. Large language models as optimizers. arXiv:2309.03409. Weizhe Yuan, Graham Neubig, and Pengfei Liu. 2021. Bartscore: Evaluating generated text as text generation. Advances in Neural Information Processing Systems, 34:27263–27277. Tianjun Zhang, Xuezhi Wang, Denny Zhou, Dale Schuurmans, and Joseph E. Gonzalez. 2022. Tempera: Test-time prompting via reinforcement learning. arXiv:2211.11890. Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Advances in Neural Information Processing Systems, volume 28. Yongchao Zhou, Andrei Ioan Muresanu, Ziwen Han, Keiran Paster, Silviu Pitis, Harris Chan, and Jimmy Ba. 2023. Large language models are human-level prompt engineers. In The Eleventh International Conference on Learning Representations, ICLR ’23. 599 A Problem Formulation Examples In Table 4, we show an example of the task input (x), output (y), prompt (p), and instruction (t) for a question answering task. Input Who is Harry Potter’s father? Output James Potter Prompt Write a brief answer for the following question: Who is Harry Potter’s father? Instruction Write a brief answer for the following question Table 4: Example for a question answering task. B Meta Prompts In Table 5, we show the meta prompts used for prompting the rewriter LLM. Rewrite the following instruction via rephrasing and/or adding specific requirements. Use illustrative description if needed. Output the new instruction only. Rewrite the following instruction via rephrasing and/or adding specific requirements. Add instructions which would be helpful to solve the problem correctly. Output the new instruction only. Table 5: Meta prompts used for prompt rewriting, for experiments based on PaLM 2-S (upper) and PaLM 2-L (bottom) task model. C Dataset Statistics Data statistics for train/dev/test splits are given in Table 6. GSM8K doesn’t come with a dev (or validation) split, so we randomly reserve 10% examples from the train split as the dev split. Dataset Train Dev Test AG News 108,000 12,000 7,600 SST-2 60,614 67,35 871 NQ 79,168 8,757 3,610 GSM8K 6,725 748 1,319 Table 6: Train/Dev/Test splits for eval datasets. D Results based on Different Rewards We test different rewards for all datasets and report the results based on PRewrite-I and PaLM 2-S task model in Table 7. Perplexity uses perplexity of the ground truth labels as the reward. F1 use wordlevel F1 measure as the reward. Perplexity+F1 sums perplexity and F1 as the reward. First, we find that using the final task metric, accuracy or EM, performs well in general. In other Reward AG News SST-2 NQ EM/Accuracy 84.5 96.5 29.3 F1 84.5 96.6 30.6 Perplexity 60.1 95.8 12.7 Perplexity+F1 84.2 96.5 32.3 Table 7: PRewrite-I experiment results based on different rewards and PaLM 2-S task model. words, RL is able to directly optimize for these task metrics. Second, we find that F1 is in general be more stable than accuracy/EM. This is because F1 can provide more fine-grained feedback for RL – accuracy/EM is either 0 or 1 for a datapoint while F1 provides a fractional score. Third, we find perplexity can sometimes be harmful (see NQ and AG News results in the table), as it is not directly linked to the final task metrics. However, combining perplexity with F1 gives the best performance on NQ. E Initial Prompts Table 8 lists all initial prompts we used for prompt rewriting in our experiments. F Rewritten Prompts Table 9 lists the produced rewritten prompts for experiments using PaLM 2-S as the task LLM. This includes rewritten prompts produced using both rewriting strategies (Section 2.4) for all datasets. Table 10 lists the rewritten prompts produced by PRewrite-S for GSM8K experiments using PaLM 2-L as the task LLM. G Prompt Templates Table 11 lists the prompt template we used for each dataset. 600 Dataset Source Initial prompt AG News Given a news article, categorize it into one of the following categories: 1. World 2. Sports 3. Business 4. Sci/Tech. SST-2 Zhang et al. (2022) In this task, you are given sentences from movie reviews. The task is to classify a sentence as “positive” if the sentiment of the sentence is positive or as “negative" if the sentiment of the sentence is negative. NQ Answer the question GSMK Fernando et al. (2023) SOLUTION" Table 8: Initial prompts used for prompt rewriting in our experiments. When the source is absent, the prompt is manually crafted by us. Dataset Strategy Rewritten prompt AG News PRewrite-I Classify a news article into one of the following categories: World, Sports, Business, Sci/Tech. AG News PRewrite-S Classify a given news article into one of the following categories: World, Sports, Business, or Sci/Tech. SST-2 PRewrite-I sentiment classification SST-2 PRewrite-S sentiment classification from text NQ PRewrite-I Compose a short, informative answer to the given question. The answer should be no longer than 15 words and should be written in a clear, concise manner. For example, if the question is "Who is the president of the United States?", the answer should be "Joe Biden". Do not write an essay or provide additional explanation. NQ PRewrite-S Compose a short, informative answer that directly answers the given question. The answer should be no longer than 15 words and should not contain any extraneous information. For example, if the question is "Who is the president of the United States?", the answer should be "Joe Biden". Do not write an essay or provide additional explanation. GSM8K PRewrite-I Provide a detailed solution to the problem. GSM8K PRewrite-S Provide a solution to the problem in a clear and concise manner. Table 9: Rewritten prompts produced by PRewrite based on PaLM 2-S task model. Dataset Strategy Rewritten prompt GSM8K PRewrite-S Solve the problem by following the steps in the SOLUTION. Table 10: Rewritten prompts produced by PRewrite based on PaLM 2-L task model. Dataset Prompt template AG News "{t}\nArticle: {title} {description}" SST-2 "{t}\nText: {text}" NQ "{t}\nQuestion: {question}" GSM8K "{t}\nQuestion: {question}" Table 11: Prompt templates used for each datasets. t is the initial/rewritten task instruction. 601
2024_short
54
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 602–615 August 11-16, 2024 ©2024 Association for Computational Linguistics Paraphrasing in Affirmative Terms Improves Negation Understanding MohammadHossein Rezaei and Eduardo Blanco Department of Computer Science, University of Arizona {mhrezaei,eduardoblanco}@arizona.edu Abstract Negation is a common linguistic phenomenon. Yet language models face challenges with negation in many natural language understanding tasks such as question answering and natural language inference. In this paper, we experiment with seamless strategies that incorporate affirmative interpretations (i.e., paraphrases without negation) to make models more robust against negation. Crucially, our affirmative interpretations are obtained automatically. We show improvements with CondaQA, a large corpus requiring reasoning with negation, and five natural language understanding tasks. 1 Introduction Negation is a fundamental linguistic phenomenon present in all human languages (Horn, 1989). Language models underperform in various natural language understanding (NLU) tasks when the input includes negation. For example, Ettinger (2020) and Kassner and Schütze (2020) show that BERT (Devlin et al., 2019) fails to distinguish between negated and non-negated cloze questions. Researchers have also shown that large language models such as GPT-3 (Brown et al., 2020) and InstructGPT (Ouyang et al., 2022) are insensitive to negation and fail to reason under negation (Truong et al., 2023). Jang et al. (2022) point out that language models violate the logical negation property (p is true iff ¬p is false). Hossain et al. (2022a) analyze negation in eight popular corpora for six NLU tasks. They conclude that (a) NLU corpora have few negations compared to general-purpose texts and (b) the few negations in them are often unimportant. To our knowledge, CondaQA (Ravichander et al., 2022) is the largest benchmark (14,182 question-answer pairs from Wikipedia) requiring reasoning over the implications of negations. In this paper, we paraphrase sentences with negation without using negation to make models for natural language understanding more robust when negation is present in the input. We will use the term affirmative interpretation to refer to paraphrases without negation (e.g., I am not sad: I am just ok, I am happy, etc.). Appendix A provides examples of how affirmative interpretations differ from simple paraphrases. The main contributions of this paper are (a) strategies to generate and incorporate affirmative interpretations and (b) experimental results demonstrating that doing so yields better results.1 In addition to CondaQA, we experiment with five of the eight corpora analyzed by Hossain et al. (2022a): CommonsenseQA (Talmor et al., 2019), STS-B (Cer et al., 2017), QNLI (Rajpurkar et al., 2016), WiC (Pilehvar and Camacho-Collados, 2019), and WSC (Levesque et al., 2012).2 We do not experiment with the other three corpora because they do not contain any negation (Roemmele et al., 2011, COPA), there is no difference in results when negation is present (Cer et al., 2017, QQP; 0.01 in macro F1), or has already been shown (Hossain and Blanco, 2022) to benefit from affirmative interpretations (Socher et al., 2013, SST-2). The corpora we experiment with are in English. Related Work Early research on negation targeted detecting negating cues and generating semantic representations, usually by identifying the scope and focus (Morante et al., 2011; Morante and Daelemans, 2012; van Son et al., 2016; Khandelwal and Sawant, 2020; Truong et al., 2022). More recent works bypass formal representations. Instead, they make neural models robust when the input contains negation. Hosseini et al. (2021) combine unlikelihood training and syntactic data augmentation to enhance the ability of BERT to understand negation with negated LAMA (Kass1Code available at https://github.com/mhrezaei1/ paraphrase-affirmative under Apache 2.0 license. 2See examples from these corpora in Appendix B. 602 ner and Schütze, 2020). Singh et al. (2023) present a pretraining strategy designed for negation. Unlike these works, we couple original inputs containing negation with affirmative interpretations. The first work on affirmative interpretations was by Sarabi et al. (2019). Hossain et al. (2022b) present AFIN, a corpus of ≈3,000 sentences with negations and their affirmative interpretations. These two previous works are limited to generating affirmative interpretations from negations; they do not provide extrinsic evaluations. More recently, Hossain and Blanco (2022) present Large-AFIN, over 153,000 pairs of sentences with negation and their affirmative interpretations obtained from parallel corpora via backtranslation. In this paper, we present strategies to generate affirmative interpretations that do not require parallel corpora or a machine translation system. Moreover, we demonstrate that incorporating affirmative interpretations yields better results with CondaQA and five other natural language understanding tasks. 2 Generating Affirmative Interpretations An affirmative interpretation generator is a system that takes a sentence with negation as its input and outputs an affirmative interpretation. The task is similar to paraphrase generation with an additional constraint: the output must not contain negation. We use two approaches to generate affirmative interpretations. The first one is an off-the-shelf T5 (Raffel et al., 2020) fine-tuned by Hossain and Blanco (2022) with Large-AFIN (Section 1) to generate affirmative interpretations. We refer to this model as T5-HB, and to the affirmative interpretations generated by T5-HB as AHB. The second approach bypasses the need for a large collection of pairs of sentences with negation and their affirmative interpretations. It is based on the work by Vorobev and Kuznetsov (2023), who fine-tuned T5 on a paraphrase dataset obtained with ChatGPT (419,197 sentences and five paraphrases per sentence). We refer to this model as T5-CG. Note that it is trained to generate paraphrases—not affirmative interpretations. We obtain affirmative interpretations with T5-CG by generating five paraphrases and selecting the first one that does not contain negation. We refer to these affirmative interpretations as ACG.3 For examples of AHB and ACG, see Appendix D. 3At the time of writing, ChatGPT cannot reliably paraphrase without negation. See an example in Appendix C We use all negation cues in CondaQA to identify negation cues in our experiments. CondaQA contains over 200 unique cues, including single words (e.g., inaction, unassisted, unknown), affixal negations (e.g., dislike, unmyelinated, unconnected, inadequate, impartial), and multiword expressions (e.g., a lack of, in the absence of, no longer, not at all, rather than). They also include multiple part-of-speech tags such as nouns (e.g., absence, nobody, inability), adverbs (e.g., indirectly, involuntarily, unexpectedly), determiners (e.g., neither, no, none), and verbs (e.g., cannot, refuse, exclude). 3 Experimental Results We use RoBERTa-Large (Liu et al., 2019) as the base model. In addition to experimenting with the original inputs for a task (e.g., passage and question from CondaQA), we couple the original input with one affirmative interpretation of the sentence with negation (if any; no change otherwise). Affirmative interpretations are concatenated to the original input after the <sep> special token. Our approach is the same regardless of the type of negation. For implementation details, see Appendix E and F. 3.1 CondaQA CondaQA (Ravichander et al., 2022) is a questionanswering dataset that requires reasoning over negation. It was created by asking crowdworkers to write questions about a negated sentence within a paragraph retrieved from Wikipedia. Crowdworkers also made three edits to the original paragraph: 1. Paraphrase Edit: Paraphrase the negation. 2. Scope Edit: Change the scope of the negation. 3. Affirmative Edit: Remove the negation. Additionally, they answered the question based on the original passage and all three edited passages. (see examples in Appendix G). Note that paraphrase edits preserve meaning thus answers remain unchanged. On the other hand, scope edits change meaning but the answer may or may not remain the same. Finally, affirmative edits reverse meaning thus answers are also reversed. Paraphrase edits are not the same as our affirmative interpretations—crowdworkers were not asked to paraphrase without using negation. We discovered, however, that 40.5% of these edits satisfy our definition of affirmative interpretation. We believe crowdworkers simply found it intuitive to paraphrase the negation without using negation. We refer to these affirmative interpretations as AG (Gold) 603 Input Representation Acc. Group Consistency # Pars. Training Testing All Par. Sco. Aff. From Ravichander et al. (2022) RoBERTa-Large 355M P+Q P+Q 54.1 13.6 51.6 26.5 27.2 UnifiedQA-v2-Base 220M P+Q P+Q 58.0 17.5 54.6 30.4 33.0 UnifiedQA-v2-Large 770M P+Q P+Q 66.7 30.2 64.0 43.7 46.5 UnifiedQA-v2-3B 3B P+Q P+Q 73.3 42.2 72.8 55.7 57.2 Our Implementation RoBERTa-Large 355M P+Q P+Q 64.9 29.6 61.3 42.3 48.3 w/ sentence with neg. from P (S) P+Q+S P+Q+S 65.2 31.1 58.4 44.1 49.2 w/ 1st par. of S by T5-CG (SCG) P+Q+SCG P+Q+SCG 65.7 28.4 60.8 42.4 48.6 w/ Affirmative Interpretations P+Q+AHB P+Q 62.8 26.3 60.5 39.2 43.3 P+Q+AHB P+Q+AHB 67.1∗ 31.4 61.9 43.8 50.7 P+Q+ACG P+Q 61.3 23.4 59.6 37.8 37.8 P+Q+ACG P+Q+ACG 66.4∗ 31.7 62.6 44.6 49.4 P+Q+AHB+ACG P+Q+AHB+ACG 65.6 30.1 60.9 43.7 49.9 P+Q+AG P+Q 63.6 26.7 61.4 38.8 43.9 P+Q+AG P+Q+AHB 64.4 28.3 57.2 40.7 46.2 P+Q+AG P+Q+ACG 65.6 30.3 61.3 42.4 49.0 P+Q+AG or AHB P+Q 62.5 25.7 60.1 38.6 42.4 P+Q+AG or AHB P+Q+AHB 65.7 30.2 61.1 41.3 48.9 P+Q+AG or ACG P+Q 60.6 22.0 57.9 35.2 36.8 P+Q+AG or ACG P+Q+ACG 66.7∗ 32.2 62.2 44.9 50.9 Table 1: Results on the CondaQA test set. Q, P and S stand for question, passage and sentence with negation from P. SCG stands for the first paraphrase of S obtained with T5-CG, without avoiding negations. An asterisk (‘*’) indicates statistically significant improvements (McNemar’s test (McNemar, 1947), p < 0.05) with respect to not using affirmative interpretations (P+Q). UnifiedQA is fine-tuned with ≈1M question-answer pairs from 20 corpora yet it does not outperform our best approach to incorporate affirmative interpretations (Accuracy: 66.7 vs. 67.1) unless it uses an order of magnitude more parameters (3B vs. 355M). The negated sentence (S) or a paraphrase that is not an affirmative interpretation (SCG) bring minor improvements compared to AHB and ACG affirmative interpretations. and only use them for training purposes, as using them at prediction time would be unrealistic. Our evaluation reuses the metrics proposed by the authors of CondaQA: accuracy and group consistency. Group consistency is the percentage of questions answered correctly for all the passages in a group. The groups include the original passage and either all three or one of the edited passages. Table 1 summarizes the experimental results (see Appendix H for additional results). Our implementation of RoBERTa-Large obtains substantially better results than those by Ravichander et al. (2022, Acc.: 64.9 vs. 54.1). Reviewing the training details revealed that the difference is that they stop training after ten epochs while we use early stopping and stop after 18 epochs. The best-performing model in terms of accuracy is UnifiedQA-v2-3B (Khashabi et al., 2022), which is a 3B-parameter T5 model pre-trained on 20 question-answering corpora data (≈1M, Appendix I). Smaller versions of UnifiedQA (220M and 770M parameters) obtain substantially lower results despite being trained with the same corpora (Acc.: 58.0 and 66.7). Our implementation of RoBERTa-Large using the question and passage as input almost rivals UnifiedQA-v2-Large (64.9 vs. 66.7) despite the latter having twice the size and being fine-tuned with ≈1M question-answering pairs. Coupling the original input (passage and question) with either the sentence that contains negation (S) or the first paraphrase obtained with T5-CG with no effort to avoid negation (SCG) brings minor improvements (64.9 vs. 65.2, 65.7). More interestingly, incorporating affirmative interpretations brings statistically significantly better results (64.9 vs. 67.1 (AHB), 66.4 (ACG) and 66.7 (AG or ACG/ACG)). We conclude the following from the results: • The benefits of affirmative interpretations are not due to pinpointing the sentence within the passage that is most relevant to answer the question (P+Q+S vs. P+Q+SCG vs. P+Q+AHB). • Training with affirmative interpretations is always beneficial as long as they are also used at prediction time. Note that we only use automatically obtained affirmative interpretations (all but AG) at testing time. However, 604 Negated sentence Affirmative interpretation Adjective (48%) The island became completely uninhabited by 1980 with the automation of the lighthouse. The island became vacant by the 1980s because of the automation of the lighthouse. They are also made to work the company unpaid as a form of "training". They are made to work the company free as a form of "training". Verb (28%) Early Negro leagues were able to attract top talent but were unable to retain them due to financial, logistical and contractual difficulties. Early Negro Leagues were able to attract top talent but failed to retain them due to financial, logistical and contractual difficulties. Although the original date is not used in modern times, it has become an official holiday. Although the original date was used in the ancient times, it has become an official holiday. Quantity (24%) But nobody outside of the Muslim world made daily use of them before Stevin. Muslim groups were the only ones to made daily use of them before Stevin. However, he enjoyed it but not at that age. He enjoyed it at another age. Drop negation without further modifications (10%) The unpopular central government found itself in the difficult position of trying to gain support for spending cuts from the recalcitrant regional governments. The central government found itself in a difficult position trying to get support for spending cuts from recalcitrant regional governments. Approximately 30% of the acellular component of bone consists of organic matter, while roughly 70% by mass is attributed to the inorganic phase. Around 30% of the acellular component of bone is made up by organic matter. Table 2: Qualitative analysis of AHB affirmative interpretations that result in fixing errors made by the system not using affirmative interpretations with CondaQA (P+Q vs. P+Q+AHB, Table 1). The affirmative interpretations rephrase in affirmative terms an adjective (48%), a verb (28%), or a quantity (24%). We also observe that 10% are erroneous as they simply drop the negated content. % w/ negation % meaning-preserving AHB 23 64 ACG 46 83 SCG 60 90 Table 3: Qualitative analysis (100 samples from CondaQA) of affirmative interpretations (AHB and ACG) and the first paraphrase by T5-CG without avoiding negation (SCG). Affirmative interpretations are less meaningpreserving, but the experimental results demonstrate that they are more beneficial (Table 1). using both of them together does not yield better results (Acc.: 65.6 vs. 66.4 and 67.1). • At training time, complementing AG (available for ≈40% of paraphrase edits) with AHB or ACG is beneficial (last and second-to-last block). Qualitative and Error Analysis Manual analysis of 100 samples from CondaQA reveals that ACG contains less negations than AHB (46% vs. 23%). ACG, however, contains less meaning-preserving paraphrases (36% vs. 17%). On the other hand, paraphrases in SCG rarely do not preserve meaning (10%) but often include negation (60%). (Table 3). Sometimes it is not natural to rewrite a sentence without negation (e.g., The inner membrane is rich in an unusual phospholipid, cardiolipin.) Out of the 23 samples where AHB contains negation, a human was able to rewrite 15 of them without negation. Combined with the results from Table 1, this analysis leads to the conclusion that affirmative interpretations are beneficial despite being noisy. We also analyzed 50 samples of the errors made representing the input with P+Q that are fixed using affirmative interpretations from AHB. A negated adjective is replaced by its affirmative counterpart (e.g., not happy →sad) in 48% of cases. Table 2 shows the analysis and examples of negated sentences and their AHB affirmative interpretations. 3.2 Other NLU Tasks We experiment with five additional NLU tasks to evaluate the benefits of affirmative interpretations. We access these corpora through the GLUE (Wang et al., 2018) and SuperGLUE (Wang et al., 2019) benchmarks. We report results on the development set of each corpus, given that the test sets are not publicly available. In addition, we report the results for important and non-important instances as identified by Hossain et al. (2022a). They consider a negation unimportant if one can disregard it and still make the correct prediction. For example, John didn’t eat the steak with gusto (most likely) entails John ate meat even if one disregards the negation. Table 4 presents the results. Incorporating affirmative interpretations (AHB or ACG) improves perfor605 CmnsnsQA STS-B QNLI WiC WSC F1 Prsn Sprmn F1 F1 F1 RoBERTa 0.70 0.92 0.92 0.93 0.71 0.69 instances without negation 0.69 0.92 0.92 0.93 0.71 0.67 instances with negation 0.73 0.88 0.88 0.92 0.66 0.71 Important 0.67 0.82 0.85 0.78 n/a n/a Unimportant 0.80 0.88 0.88 0.92 0.66 0.71 RoBERTa w/ Affirmative Interpret. obtained using T5-HB (AHB) 0.72 (+2.9%) 0.92 0.91 0.94 (+1.1%) 0.70 (-1.4%) 0.68 (-1.4%) instances without negation 0.72 (+4.3%) 0.92 0.92 0.94 (+1.1%) 0.71 (+0.0%) 0.62 (-7.5%) instances with negation 0.74 (+1.4%) 0.88 0.88 0.92 (+0.0%) 0.70 (+6.1%) 0.74 (+4.2%) Important 0.70 (+4.5%) 0.83 0.84 0.89 (+14.1%) n/a n/a Unimportant 0.80 (+0.0%) 0.87 0.88 0.92 (+0.0%) 0.70 (+6.1%) 0.74 (+4.2%) obtained using T5-CG (ACG) 0.71 (+1.4%) 0.92 0.92 0.94 (+1.1%) 0.73 (+2.8%) 0.71 (+2.9%) instances without negation 0.71 (+2.9%) 0.93 0.92 0.94 (+1.1%) 0.73 (+2.8%) 0.68 (+1.5%) instances with negation 0.74 (+1.4%) 0.88 0.88 0.92 (+0.0%) 0.70 (+6.1%) 0.75 (+5.6%) Important 0.69 (+3.0%) 0.82 0.87 0.89 (+14.1%) n/a n/a Unimportant 0.80 (+0.0%) 0.88 0.88 0.92 (+0.0%) 0.70 (+6.1%) 0.75 (+5.6%) Table 4: Results on additional NLU tasks (macro F1 except with STS-B (Pearson and Spearman correlations)). Percentages between parentheses indicate improvements compared to models not using affirmative interpretations. Affirmative interpretations yield better results, and ACG outperforms AHB. The largest gains are with important negations, although we observe gains with instances without negation (up to 4.3%) except with WSC (-7.5%). mance across all corpora with instances containing important negations; the only exception is STSB with AHB (Spearman: -1.2%) and ACG (Pearson: no difference). It is worth noting that WiC and WSC have no important negations, yet either AHB or ACG yield substantial improvements with unimportant negations (4.2–6.1%). Surprisingly, we found that incorporating affirmative interpretations is beneficial for instances without negation across all corpora except WSC with AHB. These experiments demonstrate that incorporating affirmative interpretations not only obtains higher or comparable results with instances containing important negations, but also often improves results with instances not containing negation. 4 Conclusion We have presented two strategies to generate and incorporate affirmative interpretations into models for natural language understanding. The idea is simple yet effective: complement inputs that contain negation with a paraphrase that does not contain negation. Crucially, we have demonstrated that automatically obtained (noisy) affirmative interpretations yield improvements with (a) CondaQA compared with a model with twice as many parameters pre-trained with ≈1M question-answer pairs from 20 existing corpora and (b) five NLU tasks. Our methodology is architecture- and task-agnostic. In fact, the model to generate affirmative interpretations was tuned with out-of-domain corpora. Future Work. The methods we have presented are simple and effective, but they are not the only way to incorporate or generate affirmative interpretations. For example, one might be able to use LLMs such as GPT-4 or Llama to generate affirmative interpretations. Another interesting direction is to investigate the effect of affirmative interpretations on other NLU tasks, such as sentiment analysis or text classification. Finally, it would be interesting to investigate the effect of affirmative interpretations on other languages, especially those with different word order or negation structures. Limitations The scope of this paper is limited to question answering (CondaQA) and natural language understanding (five tasks and corpora) in English with an emphasis on negation. We leave for future work the task of exploring whether affirmative interpretations are beneficial in other languages. We acknowledge that this strategy might not generalize to other languages. We also acknowledge that we did not conduct experiments with the latest GPT models or spend substantial amounts of time engineering prompts. We note, however, that good faith efforts using prompts showed that ChatGPT may not be well suited for generating affirmative interpretations at this time (Appendix C). It is worth pointing out that writing affirmative interpretations for negated sentences might not be 606 straightforward or even possible in some cases. In this paper, we did not focus on the task of determining whether a sentence can be paraphrased without negation. We leave this for future work. None of the corpora that we work with include information about the scope and focus of negation. Therefore, we do not have any insight into the relation between affirmative interpretations and the scope and focus of a negation. Ethics Statement The work in this paper does not involve human subjects. We only use publicly available datasets and models. We do not collect any personal information. Therefore, this work does not raise any ethical concerns. Acknowledgements This material is based upon work supported by the National Science Foundation under Grant No. 2310334. Any opinions, findings, conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF. We used computational resources available at the Chameleon testbed to run our experiments (Keahey et al., 2020). We are also grateful to the anonymous reviewers for their valuable comments. References Yonatan Bisk, Rowan Zellers, Ronan Le Bras, Jianfeng Gao, and Yejin Choi. 2019. Piqa: Reasoning about physical commonsense in natural language. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877–1901. Curran Associates, Inc. Daniel Cer, Mona Diab, Eneko Agirre, Iñigo LopezGazpio, and Lucia Specia. 2017. SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 1–14, Vancouver, Canada. Association for Computational Linguistics. Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. 2019. BoolQ: Exploring the surprising difficulty of natural yes/no questions. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2924–2936, Minneapolis, Minnesota. Association for Computational Linguistics. Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. 2018. Think you have solved question answering? try arc, the ai2 reasoning challenge. Pradeep Dasigi, Nelson F. Liu, Ana Marasovi´c, Noah A. Smith, and Matt Gardner. 2019. Quoref: A reading comprehension dataset with questions requiring coreferential reasoning. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5925–5932, Hong Kong, China. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. 2019. DROP: A reading comprehension benchmark requiring discrete reasoning over paragraphs. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2368–2378, Minneapolis, Minnesota. Association for Computational Linguistics. Allyson Ettinger. 2020. What BERT is not: Lessons from a new suite of psycholinguistic diagnostics for language models. Transactions of the Association for Computational Linguistics, 8:34–48. Laurence R. Horn. 1989. A Natural History of Negation. University of Chicago Press. Md Mosharaf Hossain and Eduardo Blanco. 2022. Leveraging affirmative interpretations from negation improves natural language understanding. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 5833– 5847, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Md Mosharaf Hossain, Dhivya Chinnappa, and Eduardo Blanco. 2022a. An analysis of negation in natural language understanding corpora. In Proceedings of the 607 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 716–723, Dublin, Ireland. Association for Computational Linguistics. Md Mosharaf Hossain, Luke Holman, Anusha Kakileti, Tiffany Kao, Nathan Brito, Aaron Mathews, and Eduardo Blanco. 2022b. A question-answer driven approach to reveal affirmative interpretations from verbal negations. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 490– 503, Seattle, United States. Association for Computational Linguistics. Arian Hosseini, Siva Reddy, Dzmitry Bahdanau, R Devon Hjelm, Alessandro Sordoni, and Aaron Courville. 2021. Understanding by understanding not: Modeling negation in language models. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1301–1312, Online. Association for Computational Linguistics. Myeongjun Jang, Frank Mtumbuka, and Thomas Lukasiewicz. 2022. Beyond distributional hypothesis: Let language models learn meaning-text correspondence. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 2030– 2042, Seattle, United States. Association for Computational Linguistics. Nora Kassner and Hinrich Schütze. 2020. Negated and misprimed probes for pretrained language models: Birds can talk, but cannot fly. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7811–7818, Online. Association for Computational Linguistics. Kate Keahey, Jason Anderson, Zhuo Zhen, Pierre Riteau, Paul Ruth, Dan Stanzione, Mert Cevik, Jacob Colleran, Haryadi S. Gunawi, Cody Hammock, Joe Mambretti, Alexander Barnes, François Halbach, Alex Rocha, and Joe Stubbs. 2020. Lessons learned from the chameleon testbed. In Proceedings of the 2020 USENIX Annual Technical Conference (USENIX ATC ’20). USENIX Association. Aditya Khandelwal and Suraj Sawant. 2020. NegBERT: A transfer learning approach for negation detection and scope resolution. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 5739–5748, Marseille, France. European Language Resources Association. Daniel Khashabi, Snigdha Chaturvedi, Michael Roth, Shyam Upadhyay, and Dan Roth. 2018. Looking beyond the surface: A challenge set for reading comprehension over multiple sentences. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 252–262, New Orleans, Louisiana. Association for Computational Linguistics. Daniel Khashabi, Tushar Khot, and Ashish Sabharwal. 2020. More bang for your buck: Natural perturbation for robust question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 163–170, Online. Association for Computational Linguistics. Daniel Khashabi, Yeganeh Kordi, and Hannaneh Hajishirzi. 2022. Unifiedqa-v2: Stronger generalization via broader cross-format training. Tushar Khot, Peter Clark, Michal Guerquin, Peter Jansen, and Ashish Sabharwal. 2020. Qasc: A dataset for question answering via sentence composition. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05):8082–8090. Tomáš Koˇciský, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, Gábor Melis, and Edward Grefenstette. 2018. The NarrativeQA reading comprehension challenge. Transactions of the Association for Computational Linguistics, 6:317–328. Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: A benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:452–466. Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. RACE: Large-scale ReAding comprehension dataset from examinations. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 785– 794, Copenhagen, Denmark. Association for Computational Linguistics. Hector J. Levesque, Ernest Davis, and Leora Morgenstern. 2012. The Winograd Schema Challenge. In Proceedings of the Thirteenth International Conference on Principles of Knowledge Representation and Reasoning, KR’12, pages 552–561. AAAI Press, Rome, Italy. Kevin Lin, Oyvind Tafjord, Peter Clark, and Matt Gardner. 2019. Reasoning over paragraph effects in situations. In Proceedings of the 2nd Workshop on Machine Reading for Question Answering, pages 58–62, Hong Kong, China. Association for Computational Linguistics. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. Quinn McNemar. 1947. Note on the sampling error of the difference between correlated proportions or percentages. Psychometrika, 12(2):153–157. 608 Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. 2018. Can a suit of armor conduct electricity? a new dataset for open book question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2381–2391, Brussels, Belgium. Association for Computational Linguistics. Roser Morante and Walter Daelemans. 2012. ConanDoyle-neg: Annotation of negation cues and their scope in conan doyle stories. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC’12), pages 1563–1568, Istanbul, Turkey. European Language Resources Association (ELRA). Roser Morante, Sarah Schrauwen, and Walter Daelemans. 2011. Annotation of negation cues and their scope: Guidelines v1. Technical Report CTRS003, Computational Linguistics and Psycholinguistics Technical Report Series. Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. 2022. Training language models to follow instructions with human feedback. Jason Phang, Phil Yeres, Jesse Swanson, Haokun Liu, Ian F. Tenney, Phu Mon Htut, Clara Vania, Alex Wang, and Samuel R. Bowman. 2020. jiant 2.0: A software toolkit for research on general-purpose text understanding models. http://jiant.info/. Mohammad Taher Pilehvar and Jose Camacho-Collados. 2019. WiC: the word-in-context dataset for evaluating context-sensitive meaning representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1267–1273, Minneapolis, Minnesota. Association for Computational Linguistics. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1–67. Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don’t know: Unanswerable questions for SQuAD. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 784–789, Melbourne, Australia. Association for Computational Linguistics. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, Austin, Texas. Association for Computational Linguistics. Abhilasha Ravichander, Matt Gardner, and Ana Marasovic. 2022. CONDAQA: A contrastive reading comprehension dataset for reasoning about negation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 8729–8755, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Matthew Richardson, Christopher J.C. Burges, and Erin Renshaw. 2013. MCTest: A challenge dataset for the open-domain machine comprehension of text. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 193–203, Seattle, Washington, USA. Association for Computational Linguistics. Melissa Roemmele, Cosmin Bejan, and Andrew Gordon. 2011. Choice of plausible alternatives: An evaluation of commonsense causal reasoning. Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2020. Winogrande: An adversarial winograd schema challenge at scale. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05):8732–8740. Maarten Sap, Hannah Rashkin, Derek Chen, Ronan Le Bras, and Yejin Choi. 2019. Social IQa: Commonsense reasoning about social interactions. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4463– 4473, Hong Kong, China. Association for Computational Linguistics. Zahra Sarabi, Erin Killian, Eduardo Blanco, and Alexis Palmer. 2019. A corpus of negations and their underlying positive interpretations. In Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics (*SEM 2019), pages 158–167, Minneapolis, Minnesota. Association for Computational Linguistics. Rituraj Singh, Rahul Kumar, and Vivek Sridhar. 2023. NLMs: Augmenting negation in language models. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 13104–13116, Singapore. Association for Computational Linguistics. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631–1642, Seattle, Washington, USA. Association for Computational Linguistics. Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2019. CommonsenseQA: A question answering challenge targeting commonsense 609 knowledge. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4149–4158, Minneapolis, Minnesota. Association for Computational Linguistics. Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. 2017. NewsQA: A machine comprehension dataset. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pages 191–200, Vancouver, Canada. Association for Computational Linguistics. Thinh Truong, Timothy Baldwin, Trevor Cohn, and Karin Verspoor. 2022. Improving negation detection with negation-focused pre-training. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4188–4193, Seattle, United States. Association for Computational Linguistics. Thinh Hung Truong, Timothy Baldwin, Karin Verspoor, and Trevor Cohn. 2023. Language models are not naysayers: an analysis of language models on negation benchmarks. In Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023), pages 101–114, Toronto, Canada. Association for Computational Linguistics. Chantal van Son, Emiel van Miltenburg, and Roser Morante. 2016. Building a dictionary of affixal negations. In Proceedings of the Workshop on ExtraPropositional Aspects of Meaning in Computational Linguistics (ExProM), pages 49–56, Osaka, Japan. The COLING 2016 Organizing Committee. Vladimir Vorobev and Maxim Kuznetsov. 2023. A paraphrasing model based on chatgpt paraphrases. Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. Superglue: A stickier benchmark for general-purpose language understanding systems. CoRR, abs/1905.00537. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353–355, Brussels, Belgium. Association for Computational Linguistics. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. A Paraphrases vs. Affirmative Interpretations Affirmative interpretations are paraphrases without negation. Table 5 shows examples of automatically generated paraphrases from a negated sentence. Not all of them are correct affirmative interpretations: some (a) contain negation or (b) do not preserve the meaning of the original sentence with negation (and thus they are not actual paraphrases to begin with). The definition of affirmative interpretation is a paraphrase (i.e., rewording that preserves meaning) not containing negation. Note that an automatically obtained paraphrase that does not preserve the full meaning (and thus does not satisfy the definition of affirmative interpretation) does not necessarily contradict the meaning of the original sentence with negation. For example, I stayed home today is not a true paraphrase of I didn’t go shopping today but is not a contradiction either. In this example, obtaining I stayed home today, despite being only plausible and not a paraphrase of I didn’t go shopping today, could be useful to answer questions such as “Did I go shopping today?” as staying home contradicts going shopping. B NLU Corpora Table 6 shows examples from the five NLU corpora that we experiment with. The examples are from the development set of each corpus. In our experiments, we append the affirmative interpretation of the negated sentence in the input to the end of the input after a special token. C Attempting to Generate Affirmative Interpretations with ChatGPT At the time of writing, ChatGPT cannot reliably generate affirmative interpretations (i.e., paraphrase without using negation). In the example in Figure 1, it appears convinced to be able to do so, yet it clearly fails: unhappy and lack are negations. Perhaps surprisingly, ChatGPT appears to know that the generated output does contain negation. 610 Negation? Same Meaning? Original Sentence with Negation: The lightning strikes caused no serious permanent damage. Yes n/a Automatically Generated Paraphrases (unfiltered): The lightning did not cause any damage. Yes No The lightning did not cause any significant and permanent damage. Yes Yes The lightning strikes caused serious permanent damage. No No Lightning strikes caused short-term damage. No Yes Table 5: Examples of automatically generated paraphrases from a negated sentence. The first two paraphrases contain negation, and only the second one preserves meaning. The next two paraphrases do not contain negation, and only the fourth one preserves meaning. Only the fourth automatically obtained paraphrase is an affirmative interpretation: it does not contain negation and it is a true paraphrase of the original sentence with negation—not causing serious permanent damage carries roughly the same meaning than causing short-term damage. Input Output Question Answering CommonsenseQA What are you waiting alongside with when you’re in a reception area? D A) Motel, B) Chair, C) Hospital, D) People, E) Hotel Similarity and Paraphrasing STS-B Three men are playing guitars. 3.75 (out of 5) Three men are on stage playing guitars. Inference QNLI What happened to Dane? Entailment (i.e., question is answered) Dane was killed in a horse-riding accident when Nikola was five. Word Sense Disambiguation WiC Room and board. Not same meaning He nailed boards across the windows. Coreference Resolution WSC Mark told Pete many lies about himself, which Pete included in his book. He should have been more truthful. Not coreferent Table 6: Examples of instances from the NLU tasks used in our experiments. The first column indicates the task and the corpus. The second column shows the input to the system. The third column shows the expected output. Negated Sentence and Affirmative Interpretations Correct? Negated Sentence The National Palace is one of Managua’s oldest buildings, undamaged by the 1972 earthquake. n/a AHB The National Palace, one of Managua’s oldest buildings, survived the 1972 earthquake. Yes ACG The National Palace, which was one of the oldest structures in Managua, remained intact following the 1972 earthquake. Yes Negated Sentence It is not rare to find pearls that measure as much as 14mm across. n/a AHB It is not uncommon to find pearls that measure as much as 14mm across. No ACG The size of 14mm pearls is not uncommon. No Human It is common to find pearls that measure as much as 14mm across. Yes Table 7: Examples of negated sentences and affirmative interpretations generated by T5-HB (AHB) and T5-CG (ACG). The last column indicates whether affirmative interpretation are correct (i.e., meaning preserving and without negation). Human is a human-generated affirmative interpretation. 611 - Negated Sentence: An increasing minority of young people cannot understand Japanese and instead use the Ryukyuan languages only. - (wrong) Affirmative Interpretation: An increasing minority of young people understand only Ryukyuan languages instead. - Observation: The affirmative interpretations drops an important part of the meaning of the negated sentence: not understanding Japanese. We note, however, that the affirmative interpretation is factual given the negated sentence and it is likely to be useful for downstream tasks. - Negated Sentence: By war’s end, no other nation formally recognized the Confederacy. - (wrong) Affirmative Interpretation: Only one nation recognized the Confederacy at the end of war. - Observation: This error seems to be due to lack of context of the negated sentence. The affirmative interpretation does not have negation and is plausible, but it is impossible to know how many nations recognized the Confederacy without additional context. Indeed, no other nation did X could mean that any number of nations did X. Table 8: A couple examples of negated sentence from CondaQA and automatically generated affirmative interpretations that are wrong. We also provide our observations. Figure 1: Attempting to generate affirmative interpretations with ChatGPT results in a nonsensical conversation. ChatGPT appears to be able to identify negations yet uses them when instructed to not do so Instance Representation Training Testing Learning Rate P+Q P+Q 1e-5 P+Q+S P+Q+S 5e-6 P+Q+PCG P+Q+PCG 1e-5 P+Q+AHB P+Q 1e-5 P+Q+AHB P+Q+AHB 1e-4 P+Q+ACG P+Q 1e-5 P+Q+ACG P+Q+ACG 1e-4 P+Q+AHB+ACG P+Q+AHB+ACG 1e-5 P+Q+AG P+Q 1e-5 P+Q+AG P+Q+AHB 1e-5 P+Q+AG P+Q+ACG 1e-5 P+Q+AG or AHB P+Q 1e-5 P+Q+AG or AHB P+Q+AHB 5e-5 P+Q+AG or ACG P+Q 1e-5 P+Q+AG or ACG P+Q+ACG 5e-5 Table 9: Learning rates used in our experiments with CondaQA. Note that AG affirmative interpretations are not available at testing time. D Affirmative Interpretations Examples Table 7 shows two negated sentences and their automatically obtained affirmative interpretations. The bottom half of the table includes errors, as the automatically generated affirmative interpretations contain negations. Table 8 contains a couple examples from CondaQA in which the process to generate affirmative interpretations made mistakes along with our observations. E Training Details with CondaQA We use the RoBERTa-Large model (Liu et al., 2019) for our experiments with CondaQA. We use the implementation of RoBERTa-Large in the HuggingFace Transformers library (Wolf et al., 2020). The model is trained using early stopping with a patience of 3 epochs and batch size 16. Table 9 shows the learning rates that we used for our experiments with CondaQA. We use the default values 612 for the other hyperparameters. F Training Details with Additional NLU Tasks We use the implementation by Phang et al. (2020) with RoBERTa-Large as the base model. We use the default values for the hyperparameters, with the exception of the learning rate, batch size and maximum number of epochs for early stopping. Table 10 shows the learning rates and batch sizes that we used for our experiments on each corpus. G CondaQA Dataset Figure 2 shows an example from CondaQA. Note that CondaQA highlights the original negated sentences from the original passages but not the edited sentences. However, we use the available information in the dataset such as the original sentence, the original passage and the edited passage to extract the edited sentences. Specifically, we identify sentence boundaries in the original passage and pinpoint the index of the sentence that contains negation. Then, we identify sentence boundaries in the edited passage and use the same index to extract the edited sentence. We use the extracted edited sentence to generate affirmative interpretations. The authors manually analyzed 100 samples of the extracted edited sentences and confirmed that in 96% of the cases, the extracted edited sentences are the same as the edited sentences in the passage. Additionally, Table 11 shows the basic properties of the edits made by crowdworkers. H Additional Results with CondaQA Table 12 shows additional results with RoBERTaLarge and CondaQA for each edit type. The results show that incorporating affirmative interpretations with RoBERTa-Large improves results not only with the entire test set, but also with each edit type individually. However, not all of the improvements are statistically significant. The only statistically significant improvements are with (1) the scope edit type when trained with P+Q+ACG or ACG and tested with P+Q+ACG, and (2) the affirmative edit type when trained with P+Q+AHB and tested with P+Q+AHB. I UnifiedQA-v2 Training Corpora Table 13 shows the QA corpora that Khashabi et al. (2022) used to train UnifiedQA-v2. These corpora span the following QA formats: extractive, abstractive, multiple-choice, and yes-no questions. 613 CmmnsnsQA STS-B QNLI WiC WSC RoBERTa 1e-5 (16) 1e-5 (16) 1e-5 (8) 1e-5 (16) 1e-6 (16) RoBERTa w/ Affirmative Interpret. obtained using T5-HB (AHB) 5e-6 (16) 5e-6 (8) 5e-6 (16) 1e-5 (16) 5e-6 (16) obtained using T5-CG (ACG) 5e-6 (16) 5e-6 (16) 1e-5 (16) 5e-6 (16) 5e-6 (16) Table 10: The learning rates (and batch sizes) used in our experiments with each corpus. Original Passage: A semiconductor diode is a device typically made from a single p-n junction. At the junction of a p-type and an n-type semiconductor, there forms a depletion region where current conduction is inhibited by the lack of mobile charge carriers. When the device is "forward biased" (connected with the p-side at higher electric potential than the n-side), this depletion region is diminished, allowing for significant conduction, while only very small current can be achieved when the diode is "reverse biased" and thus the depletion region expanded. Original Sentence (with Negation): At the junction of a p-type and an n-type semiconductor, there forms a depletion region where current conduction is inhibited by the lack of mobile charge carriers. Negation Cue: lack Edited Passage: A semiconductor diode is a device typically made from a single p-n junction. At the junction of a p-type and an n-type semiconductor there forms a depletion region where current conduction is inhibited by the absence of mobile charge carriers. When the device is "forward biased" (connected with the p-side at higher electric potential than the n-side), this depletion region is diminished, allowing for significant conduction, while only very small current can be achieved when the diode is "reverse biased" and thus the depletion region expanded. Edit Type: Paraphrase Question: Is the current conduction negatively affected by the amount of mobile charge carriers? Answer: Yes Extracted Edited Sentence: At the junction of a p-type and an n-type semiconductor there forms a depletion region where current conduction is inhibited by the absence of mobile charge carriers. Figure 2: An example from CondaQA. The negation in the original sentence is lack. The crowdworkers wrote a paraphrase of the original sentence, which is included in the edited passage ([. . . ] by the absence of mobile charge carriers). The question is written based on the original paragraph and answered based on the original and all three edited passages (only paraphrase edit shown). The answer to the question (for the edited passage) is Yes. The dataset does not explicitly indicate the edited sentence. However, we extract it as explained in Appendix G. Edit % Negated Meaning Answer Paraphrase 59.5 Same Unchanged Scope 97.7 Changed Unchanged or changed Affirmative 43.6 Reversed Reversed Table 11: Basic properties of the edits made by crowdworkers in the process of creating CondaQA. The Negated column shows the percentage of edits that have negation. The Meaning and Answer columns indicate the differences in meaning (if any) between (1) the original and edited passage and (2) answers to the same question according to the original and edited passage. Changed does not necessarily mean reversed. 614 Input Representation Accuracy # Params. Training Testing All Ori. Par. Sco. Aff. RoBERTa-Large 355M Q Q 47.4 52.1 52.3 47.4 39.0 P P 45.4 46.5 46.1 45.2 43.9 P+Q P+Q 64.9 67.2 66.0 59.5 66.0 w/ sentence with neg. from P (S) P+Q+S P+Q+S 65.2 66.0 64.6 61.8 68.3 w/ 1st par. of S by T5-CG (SCG) P+Q+SCG P+Q+SCG 65.7 68.3 67.1 60.2 67.0 w/ Affirmative Interpretations P+Q+AHB P+Q 62.8 64.6 62.9 58.6 64.9 P+Q+AHB P+Q+AHB 67.1∗ 68.5 68.0 61.8 69.7∗ P+Q+ACG P+Q 61.3 64.7 62.3 58.2 59.8 P+Q+ACG P+Q+ACG 66.4∗ 68.6 67.2 61.7 67.8 P+Q+AHB+ACG P+Q+AHB+ACG 65.6 68.4 66.6 59.4 67.6 P+Q+AG P+Q 63.6 65.2 64.8 58.6 65.5 P+Q+AG P+Q+AHB 64.4 65.5 65.3 60.3 66.2 P+Q+AG P+Q+ACG 65.6 67.2 66.8 59.7 68.2 P+Q+AG or AHB P+Q 62.5 64.2 63.4 58.5 63.6 P+Q+AG or AHB P+Q+AHB 65.7 67.2 67.2 59.6 68.2 P+Q+AG or ACG P+Q 60.6 62.6 61.7 57.6 60.3 P+Q+AG or ACG P+Q+ACG 66.7∗ 69.0 67.2 62.4∗67.8 Table 12: The accuracy of RoBERTa-Large on the CondaQA test set for each edit type. We indicate statistically significant improvements (McNemar’s test (McNemar, 1947), p < 0.05) with respect to the model trained without affirmative interpretations (P+Q during training and testing) on each edit type with an asterisk (∗). Corpus # Train Inst. Reference Squad 1.1 87,599 Rajpurkar et al. (2016) Squad 2 130,319 Rajpurkar et al. (2018) Newsqa 92,549 Trischler et al. (2017) Quoref 19,399 Dasigi et al. (2019) Ropes 10,924 Lin et al. (2019) NarrativeQA 32,747 Koˇciský et al. (2018) DROP 77,409 Dua et al. (2019) NaturalQuestions 307,373 Kwiatkowski et al. (2019) MCTest 1,480 Richardson et al. (2013) RACE 87,866 Lai et al. (2017) OpenBookQA 4,957 Mihaylov et al. (2018) ARC 2,590 Clark et al. (2018) CommonsenseQA 9,741 Talmor et al. (2019) QASC 8,134 Khot et al. (2020) PhysicalIQA 16,000 Bisk et al. (2019) SocialIQA 33,410 Sap et al. (2019) Winogrande 40,398 Sakaguchi et al. (2020) BoolQ 9,427 Clark et al. (2019) MultiRC (yes/no) 6,000 Khashabi et al. (2018) BoolQ-NP 9,727 Khashabi et al. (2020) Table 13: The corpora that Khashabi et al. (2022) used to train UnifiedQA-v2, and the number of training instances in each corpus. 615
2024_short
55
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 616–629 August 11-16, 2024 ©2024 Association for Computational Linguistics Exploring Conditional Variational Mechanism to Pinyin Input Method for Addressing One-to-Many Mappings in Low-Resource Scenarios Bin Sun1∗, Jianfeng Li2, Hao Zhou2, Fandong Meng2, Kan Li1†, Jie Zhou2 1School of Computer Science & Technology, Beijing Institute of Technology 2WeChat AI, Tencent Inc., China {binsun,likan}@bit.edu.cn {lijfli,tuxzhou,fandongmeng,withtomzhou}@tencent.com Abstract Pinyin input method engine (IME) refers to the transformation tool from pinyin sequence to Chinese characters, which is widely used on mobile phone applications. Due to the homophones, Pinyin IME suffers from the oneto-many mapping problem in the process of pinyin sequences to Chinese characters. To solve the above issue, this paper makes the first exploration to leverage an effective conditional variational mechanism (CVM) for pinyin IME. However, to ensure the stable and smooth operation of Pinyin IME under low-resource conditions (e.g., on offline mobile devices), we should balance diversity, accuracy, and efficiency with CVM, which is still challenging. To this end, we employ a novel strategy that simplifies the complexity of semantic encoding by facilitating the interaction between pinyin and the Chinese character information during the construction of continuous latent variables. Concurrently, the accuracy of the outcomes is enhanced by capitalizing on the discrete latent variables. Experimental results demonstrate the superior performance of our method. 1 Introduction Input method engines (IMEs)1 are important tools to connect users with mobile applications, drawing dramatic attentions (Chen and Lee, 2000; Li et al., 2004; Zheng et al., 2011; Han and Chang, 2013; Chen et al., 2013; Jia and Zhao, 2014; Huang et al., 2015, 2018; Zhang et al., 2019; Liu et al., 2021; Tan et al., 2022; Ding et al., 2023). In China, there are two common Pinyin IMEs2 for cellphones: the 9-key IMEs and the 26-key IMEs, which are used by more than 97% of Chinese people (Hu et al., 2022). As shown in Figure 1, the 26-key keyboard *Work done at WeChat AI, Tencent Inc. †Corresponding Author 1https://en.wikipedia.org/wiki/Input_method 2https://en.wikipedia.org/wiki/Pinyin_input_method Figure 1: The 9-key and 26-key IME. uses the 26 English letters as Chinese pinyin syllables, while the 9-key keyboard maps the 26 pinyin syllables onto 8 keys. Due to the Chinese homophones, the process of converting pinyin sequences to Chinese character sequences inevitably presents a one-to-many mapping challenge for Pinyin IME. In the perfect pinyin mode of a 26-key IME, 500 pinyin combinations need to correspond to nearly 10,000 Chinese characters (Jia and Zhao, 2014; Zhang et al., 2019). For instance, inputting the pinyin sequence "bei zi" can map to various Chinese characters with completely different meanings, such as "被子" (blanket), "杯 子" (cup), and "辈子" (lifetime). In the case of the abbreviated pinyin mode, entering the initial letters "b z" for "bei zi" can result in not only the aforementioned characters but also others like "不 止" (more than), "不在" (not present), and "步骤" (steps). As for 9-key IMEs, each key can represent 3 to 4 pinyin syllables, which means that inputting "23494" offers 323 possible pinyin combinations except "beizi". While some pinyin combinations that do not adhere to standard rules can be pruned, this undoubtedly expands the solution space. One effective method to alleviate the one-tomany problem is to generate more candidates for users to autonomously choose the one they need. Existing methods typically employ beam search to generate additional candidates. However, Holtzman et al. (2020) found that unlike beam search, which selects the token with the highest probability, humans tend to choose more surprising and 616 diverse tokens. Furthermore, beam search requires sorting multiple candidates during the generation process, and in some low-resource scenarios (such as on offline mobile devices), it is challenging to ensure stable and rapid generation due to the lack of sufficient memory and computational resources. To alleviate the aforementioned problems, we take inspirations from conditional variational mechanism, which models the one-to-many cases through the latent variable space and generate various results by sampling different latent variables (Shen et al., 2017; Zhao et al., 2017; Bao et al., 2020; Lin et al., 2020; Fang et al., 2021; Sun et al., 2021). Therefore, instead of prioritizing the arrangement of the highest-scoring candidate result, our primary study of interest is to recall more eligible candidates within the same inference time. To this end, we propose a conditional variational IME model (CV-IME) with a novel hybrid latent variables strategy. Please refer to § 2.2 for details. Our contributions are as follow: To the best of our knowledge, this is the first exploration and investigation of the impact of CVM on the performance of Pinyin IME in low-resource scenarios, specifically on offline mobile platforms. Furthermore, we propose a novel hybrid latent variable that designed to balance the performance and efficiency of our CV-IME model. 2 Methodology 2.1 Base Model With the advancement of technology, the latest Pinyin IMEs, e.g., PinyinGPT (Tan et al., 2022) and GeneInput (Ding et al., 2023), primarily adopt models based on the transformer architecture (Vaswani et al., 2017). Therefore, we have adopted the transformer structure and conducted a series of experiments to identify the most suitable configuration. 2.2 Conditional Variational IME Following the previous work of CVM, CV-IME primarily consists of four components: a encoderdecoder model, a prior network pθ(z|c), a recognition network qϕ(z|r, c) and a discrete latent variable matrix M. c, r and z represent the user input (i.e., context and pinyin sequence), the character result and the continues latent variable. Hybrid Latent Variable. Previous researches indicate that continuous latent variables can enhance diversity but may reduce relevance, whereas discrete latent variables strengthen relevance but lack diversity (Gao et al., 2019; Bao et al., 2020; Sun et al., 2021, 2023). Therefore, a promising direction is to hybrid the continuous and discrete latent variables, leveraging their respective strengths to complement and offset their weaknesses. To build the hybrid latent variables H, we follow Sun et al. (2023), adding sentence-level continuous latent variable z′ s to the discrete latent variables M: H = (z′ s + M[1], · · · , z′ s + M[k]), where K represents the number of discrete latent variables. Continuous Latent Variables. We initially employ the model encoder to transform c and c+r into prior memory h and posterior memory h′. Given that there is a degree of alignment between the pinyin and character sequences in the task of pinyinto-character conversion, relying solely on the encoder’s self-attention mechanism for interaction may not yield effective information. Therefore, we have introduced an interaction between the prior memory and the posterior memory: h′ = SoftMax(h · h′T ) · h′ (1) To enhance the recognition process, we use h and h′ together to estimate the isotropic Gaussian distribution qϕ(z|c, r) ∼N(µ′, σ′2I) :  µ′ 1, ..., µ′ n log(σ′2 1 ), ..., log(σ′2 n )  =   [h1; h′ 1] · · · [hn; h′ n]  W ′ u , where W ′ u is trainable parameters of qϕ(z|r, c). After that, we follow additive Gaussian mixing (Wang et al., 2017) to obtain the sentence-level continuous latent variables. (see more details in Appendix B) 3 Experimental Settings Benchmarks. We used two public benchmarks, namely the People’s Daily (PD) corpus (Yang et al., 2012) and WD dataset (Tan et al., 2022), in the experiments. PD is extracted from the People’s Daily from 1992 to 1998, while WD is extracted from the WuDaoCorpora (Yuan et al., 2021). Different from PD, WD contains test cases from 16 different domains of test cases. Evaluation Metrics. We use the precision of topN, indicating whether the desired result is included in the generated top-N results. We also use the inference time for one instance as a metric. 617 Training Dataset. To train our CV-IME and base model, we built a training dataset for the pinyin-tocharacter task based on the news2016 corpus3. We randomly extract sentence from news2016 corpus and incorporate “pypinyin” tool to convert Chinese Characters into Pinyin syllables. Table 1 shows the statistics of this dataset. (Please refer to appendix C for more details) Number of Samples Average Sequence Length # Perfect 9692887 Context 18.28 # Abbreviated 9637283 Pinyin 14.68 # Total 19330170 Character 7.31 Table 1: Key statistics of our training dataset. Baseline Models. We introduced some IMEs, i.e., Google IME4, On-OMWA (Zhang et al., 2017) and On-P2C (Zhang et al., 2019) as baselines. • GoogleIME is a commercial Chinese IME that offers an API with debugging capabilities. • On-OMWA system, introduced by Zhang et al. (2017), is an adaptive online model designed for the acquisition of new words, specifically tailored for Chinese IMEs. • On-P2C model, as described in Zhang et al. (2019) on open vocabulary learning, is a neural network-based Pinyin-to-Chinese conversion system that improves its performance by dynamically updating its word database to facilitate learning of an open vocabulary. Training Detail. The hidden size of all models is set to 512. Our CV-IME employs a Transformer model with 2 encoder layers and 1 decoder layer, and additionally incorporates two fully-connected layers as a prior network. We set the batch sizes to 1024 and 256 for base model and CV-IME, respectively. Adam is used for optimization. The initial learning rate is set to 0.0001. We also introduce KL annealing trick to leverage the KL divergence during the training. The KL weight increases linearly from 0 to 1 in the first 3000000 batches. We train all models in 100 epochs on four A100 GPU cards with Pytorch, and save the model parameters when the validation loss reaching minimum. 3https://github.com/brightmart/nlp_chinese_corpus 4https://www.google.com/inputtools/services/features/inputmethod.html Figure 2: Results of different encoder-decoder layer configurations over PD using 9-key IME. Model # Enc # Dec # Parameters Base Model 1 1 21.89M 2 1 24.89M 3 1 27.90M 4 1 30.91M 5 1 33.91M CV-IME 2 1 28.90M Table 2: The number of parameters contained in different configurations of base model and CV-IME. 4 Result and Analysis 4.1 Model Structure Selection. Figure 2 and Table 2 show the generation latency, accuracy and parameters of base models with different configurations, which illustrates that: (1) maintaining a fixed number of encoder while solely increasing decoder layers significantly raises latency (≈10ms) without notably improving accuracy. (2) while the increase in the number of encoder layers leads to a gradual rise in latency (≈2ms), accompanied by an upward trend in accuracy. (3) the number of parameters in an encoder is to some extent positively correlated with accuracy. In selecting the final encoder-decoder configuration, we primarily considered constraints on memory storage and latency. In this work, we posit that under low-resource constraints, with a storage ceiling of no more than 32MB and a generation latency not exceeding 30ms for a single candidate, the system can operate reliably. Regarding storage, we aimed to emulate a realistic mobile environment, mindful of the fact that an IME system houses multiple models, such as PinyinIME, speech recognition, handwriting recognition, etc. Given the overall storage consumption of the system must remain low, we endeavored to limit the size of the PinyinIME model to within 618 Figure 3: Comparative results of different methods to generate delay. 32MB. To ensure stable performance, the model was restricted to using at most INT8 quantization, which means the parameters had to be kept under 32M. As for latency, we set a strict benchmark: the CPU latency for generating a single case under lengthy text conditions must not exceed 30ms. Since the model needs to regenerate results immediately with each pinyin character input by the user, to prevent perceptible delays, we aimed to set even more stringent latency requirements. Therefore, a configuration comprising a 4-layer encoder and a 1-layer decoder represents the most cost-effective choice. 4.2 Generation Latency. Figure 3 shows a comparative analysis of the latency incurred by CV-IME (ours), Base+Beam search (beam) and Base+TopK sample (topk) (Fan et al., 2018) in generating a varying number of candidates. As can be observed from the figure, the latency of CV-IME when generating four candidates is nearly identical to that of the base model when producing two candidates, which demonstrates the superiority of the CV-IME approach in recalling more candidate results under low-resource conditions. We also observed that the latency of the topk significantly increases when generating on CPU devices, which may be attributed to the higher computational complexity of the multinomial function in PyTorch on CPU. 4.3 PD Benchmark. Table 3, 4 show the results of PD. CV-IME-i means the results of CV-IME using i-th hybrid latent variable. Beam represents the beam search. From these results, we can observe that: (1) Our models outperform the baselines on the PD benchmark; (2) our models show more significant results in the task of converting from a abbreviated pinyin to characters; (3) Under the condition of equivalent Model Top-N 26-key IME Time Perfect Abbreviated Google IME P@1 70.90% – – On-OMWA P@1 64.40% – – On-P2C P@1 71.30% – – Base-Beam1 P@1 71.53% 21.65% 20 CV-IME-1 P@1 71.43% 23.04% 20 CV-IME-2 P@1 67.14% 21.20% 20 CV-IME-3 P@1 68.82% 22.09% 20 CV-IME-4 P@1 68.64% 20.18% 20 Google IME P@10 82.30% – – On-OMWA P@10 77.90% – – On-P2C P@10 81.30% – – Base-Beam2 P@2 81.08% 27.32% 27 CV-IME P@4 82.97% 29.90% 27 Table 3: Results of different methods over PD. Each score is averaged over all context-target configurations. Model Top-N 9-key IME Time Perfect Abbreviated Base-Beam1 P@1 54.49% 10.14% 20 CV-IME-1 P@1 45.54% 12.04% 20 CV-IME-2 P@1 52.05% 9.90% 20 CV-IME-3 P@1 57.85% 10.80% 20 CV-IME-4 P@1 49.22% 10.22% 19 Base-Beam2 P@2 65.62% 13.84% 27 CV-IME P@4 66.06% 15.94% 27 Table 4: Results of different 9-key IMEs over PD. time expenditure, our model is capable of generating more candidates and achieve better accuracy compared to the baselines; (4) The four hybrid latent variables exhibited a clustering effect in the 9-key IME, where CV-IME-1 excelled in abbreviated pinyin and CV-IME-3 in perfect pinyin. However, this phenomenon was not replicated in the 26key IMEs. These results suggest that the current training methodology for hybrid latent variables has certain limitations, as it struggles to encourage different latent variables to focus on distinct data categories during training. This will be a direction for our future research. 4.4 WD Benchmark. Table 5 reports the results of different domains over WD. We have selected the results from four domains where the differences between CV-IME and the Base model are the smallest and the largest under various pinyin input patterns. This result demonstrates that the CV-IME achieves a superior performance than base model in terms of all domains in WD. We also conduct experiments with different configurations on WD, which are detailed 619 26-key Perfect Entertainment (%) Education (%) Journey (%) Agriculture (%) Base+Beam2 77.67±0.00 80.99±0.00 74.71±0.00 73.68%±0.00 CV-IME 79.48±0.29 (∆1.80) 83.26±0.10 (∆2.27) 78.73±0.02 (∆4.02) 78.65±0.19 (∆4.97) 9-key Perfect Entertainment (%) Sports (%) Real Estate (%) Agriculture (%) Base+Beam2 60.55±0.00 59.93±0.00 60.94±0.00 57.02±0.00 CV-IME 62.48±0.20 (∆1.94) 62.12±0.06 (∆2.19) 65.76±0.07 (∆4.82) 61.94±0.14 (∆4.92) 26-key Abbreviated Journey (%) Sports (%) Real Estate (%) Economy (%) Base+Beam2 19.75±0.00 20.65±0.00 20.40±0.00 20.85±0.00 CV-IME 20.87±0.09 (∆1.12) 22.18±0.20 (∆1.53) 24.38±0.02 (∆3.98) 25.32±0.22 (∆4.47) 9-key Abbreviated Agriculture (%) Automobile (%) Real Estate (%) International (%) Base+Beam2 8.60±0.00 9.25±0.00 8.60±0.00 7.85±0.00 CV-IME 9.35±0.07 (∆0.75) 10.33±0.15 (∆1.08) 12.15±0.15 (∆3.55) 11.40±0.08 (∆3.55) Table 5: Results of different domains over WD. 9-key IMEs Perfect Abbreviated PD (%) WD (%) PD (%) WD (%) CLS 65.690 62.901 15.878 10.190 CHVT 65.781 62.992 15.924 10.165 CV-IME 66.056 63.269 15.935 10.310 w/o. CLV 66.343 62.656 16.899 10.259 w/o. DLV 57.649 53.446 7.885 4.420 Table 6: The results of ablation study. in the appendix D.3. 4.5 Ablation Study. Table 6 presents the results of ablation experiments, where “CLS” and “CHVT” are two alternative strategies for constructing hybrid latent variables that differ from our approach: • “CLS” means using the [CLS] token to determine the prior distribution of continuous latent variables. • “CHVT” stands for Conditional Hybrid Variational Transformer (Sun et al., 2023), which also utilizes hybrid latent variables, but it is primarily used in dialogue tasks. Compared to CLS and CHVT, CV-IME achieves better performance on PD and WD benchmarks, indicating the effectiveness of the proposed strategy in the pinyin-to-character task. Moreover, “w/o. CLV” and “w/o. DLV” denote the CV-IME model variants with the continuous latent variables (CLV) and discrete latent variables (DLV) removed, respectively. The findings indicate that DLV may excel in accuracy but fall short in generalizing across diverse scenarios. Therefore, the “w/o. CLV” performs well on news data (PD) similar to the training set but not as well on OOD data (WD). Similarly, CLV may excel in diversity but compromise on precision, which exhibit a marked decline in performance when the DLV is removed. Furthermore, from perfect to abbreviated pinyin, the degradation in performance becomes more pronounced. 4.6 Discussion on Top-1 Results. For the Pinyin-to-character task, there is a clear correlation between the top-1 accuracy and the distribution differences between training and testing data. This is due to the presence of one-to-many samples in the data, where identical Pinyin corresponds to completely different outcomes. If the most proportionate samples in the test data also happen to be the highest probability samples in the training data, the model’s top-1 results are likely to be high. The CV-IME is proposed to internalize one-to-many data through latent variables, mitigating the excessive influence of training data distribution on the test data distribution. Experimental results reveal differentiated outcomes presented by various hybrid latent variables, indicating that latent variables can indeed diversify data distributions. However, the current training is unsupervised, and the overall differentiation effect is not pronounced, necessitating further research. 5 Conclusion This paper introduces the conditional variational mechanism into the IME model, presenting the CVIME model. By incorporating hybrid latent variables, CV-IME enhances diversity while maintaining the quality of generated results. In comparison to existing IME models, the experimental results demonstrate that CV-IME can recall more diverse and accurate results within similar time constraints, exhibiting significant advantages in low-resource scenarios, such as offline mobile devices. 620 Limitations Application Scenarios. The CV-IME model is proposed to effectively mitigate the severe one-tomany problem in the task of pinyin-to-character conversion under low-resource conditions (e.g., on the mobile phone devices). Therefore, under conditions of abundant computational and storage resources, the introduction of larger pre-trained language models with more parameters may yield better results. After all, the practical application of latent variable-based pre-training techniques remains to be tested, which also constitutes one of our future research directions. Data Distribution. The training data and evaluation benchmarks are extracted from different Chinese corpora, which are not not consistent with the data generated by real users of the Pinyin IME, and there are certain differences in their distributions. Consequently, in constructing our training dataset, we selected news data closely aligned with the PD benchmark to approximate independent and identically distributed scenarios (comparison with PD), and out of domain scenarios (comparison with WD). Through the aforementioned configurations, we have rudimentarily simulated real-world scenarios of general distribution and user-specific personalization, which to some extent, demonstrates the efficacy of our approach in practical applications. Flexibility and Differentiation. The hyperparameters (e.g., the number of discrete latent variables, the annealing steps of KL and so on) need to be determined through multiple experiments, which cannot be set adaptively. Additionally, the experimental results indicate that the different mixed latent variables are not sufficiently independent, as the corresponding generated texts are not entirely distinct. This may be attributed to the current training methodology being guided by unsupervised gradient backpropagation. It might be necessary to introduce regularization terms or to devise a novel training approach to enhance the discriminability between the mixed latent variables. These initial promising results for distinguishing different hybrid latent variables for recalling diverse candidate results will hopefully lead to future work in this interesting direction. Ethics Statement We acknowledge and ensure that our study is compatible with the provided Code of Ethics. Pinyin input method engine (IME) is crucial for building connection between Chinese people and mobile applications, which is an import topic in Chinese natural language process field. All our experiments are conducted on public available datasets to avoid ethical concerns. All terms for using these datasets are strictly followed in our study. There are no direct ethical concerns in our research. Acknowledgements We would like to thank the anonymous reviewers for their constructive comments. This work is supported by the Beijing Natural Science Foundation, China (Nos. 4222037, L181010). References David H. Ackley, Geoffrey E. Hinton, and Terrence J. Sejnowski. 1985. A learning algorithm for boltzmann machines. Cogn. Sci., 9(1):147–169. Siqi Bao, Huang He, Fan Wang, Hua Wu, and Haifeng Wang. 2020. PLATO: pre-trained dialogue generation model with discrete latent variable. In ACL, pages 85–96. ACL. Christopher M Bishop and Nasser M Nasrabadi. 2006. Pattern recognition and machine learning, volume 4. Springer. Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew M. Dai, Rafal Józefowicz, and Samy Bengio. 2016. Generating sentences from a continuous space. In CoNLL, pages 10–21. Hongshen Chen, Zhaochun Ren, Jiliang Tang, Yihong Eric Zhao, and Dawei Yin. 2018. Hierarchical variational memory network for dialogue generation. In WWW, pages 1653–1662. ACM. Kuan-Yu Chen, Hung-Shin Lee, Chung-Han Lee, HsinMin Wang, and Hsin-Hsi Chen. 2013. A study of language modeling for chinese spelling check. In SIGHAN, pages 79–83. Asian Federation of NLP. Wei Chen, Yeyun Gong, Song Wang, Bolun Yao, Weizhen Qi, Zhongyu Wei, Xiaowu Hu, Bartuer Zhou, Yi Mao, Weizhu Chen, Biao Cheng, and Nan Duan. 2022. Dialogved: A pre-trained latent variable encoder-decoder model for dialog response generation. In ACL, pages 4852–4864. ACL. Zheng Chen and Kai-Fu Lee. 2000. A new statistical approach to chinese pinyin input. In ACL, pages 241–247. ACL. Hsun-wen Chiu, Jian-Cheng Wu, and Jason S. Chang. 2013. Chinese spelling checker based on statistical machine translation. In SIGHAN, pages 49–53. Asian Federation of NLP. 621 Keyu Ding, Yongcan Wang, Zihang Xu, Zhenzhen Jia, Shijin Wang, Cong Liu, and Enhong Chen. 2023. Generative input: Towards next-generation input methods paradigm. CoRR, abs/2311.01166. Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hierarchical neural story generation. In ACL, pages 889–898, Melbourne, Australia. ACL. Le Fang, Tao Zeng, Chaochun Liu, Liefeng Bo, Wen Dong, and Changyou Chen. 2021. Transformerbased conditional variational autoencoder for controllable story generation. CoRR, abs/2101.00828. Xiang Gao, Sungjin Lee, Yizhe Zhang, Chris Brockett, Michel Galley, Jianfeng Gao, and Bill Dolan. 2019. Jointly optimizing diversity and relevance in neural response generation. In NAACL-HLT (1), pages 1229–1238. Dongxu Han and Baobao Chang. 2013. A maximum entropy approach to chinese spelling check. In SIGHAN, pages 74–78. Asian Federation of NLP. Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text degeneration. In ICLR. OpenReview.net. Yong Hu, Fandong Meng, and Jie Zhou. 2022. CSCDIME: correcting spelling errors generated by pinyin IME. CoRR, abs/2211.08788. Guoping Huang, Jiajun Zhang, Yu Zhou, and Chengqing Zong. 2015. A new input method for human translators: Integrating machine translation effectively and imperceptibly. In IJCAI, pages 1163–1169. AAAI. Yafang Huang, Zuchao Li, Zhuosheng Zhang, and Hai Zhao. 2018. Moon IME: neural-based chinese pinyin aided input method with customizable association. In ACL, pages 140–145. ACL. Yafang Huang and Hai Zhao. 2018. Chinese pinyin aided ime, input what you have not keystroked yet. In EMNLP, pages 2923–2929. ACL. Zhongye Jia and Hai Zhao. 2014. A joint graph model for pinyin-to-chinese conversion with typo correction. In ACL, pages 1512–1523. ACL. Diederik P. Kingma and Max Welling. 2014. Autoencoding variational bayes. In ICLR. Haizhou Li, Min Zhang, and Jian Su. 2004. A joint source-channel model for machine transliteration. In ACL, pages 159–166. ACL. Zhaojiang Lin, Genta Indra Winata, Peng Xu, Zihan Liu, and Pascale Fung. 2020. Variational transformers for diverse response generation. CoRR, abs/2003.12738. Shulin Liu, Tao Yang, Tianchi Yue, Feng Zhang, and Di Wang. 2021. PLOME: pre-training with misspelled knowledge for chinese spelling correction. In ACL/IJCNLP, pages 2991–3000. ACL. Clara Meister, Tiago Pimentel, Gian Wiher, and Ryan Cotterell. 2022. Locally typical sampling. Transactions of the ACL. Xiaoyu Shen, Hui Su, Yanran Li, Wenjie Li, Shuzi Niu, Yang Zhao, Akiko Aizawa, and Guoping Long. 2017. A conditional variational framework for dialog generation. In ACL (2), pages 504–509. Kihyuk Sohn, Honglak Lee, and Xinchen Yan. 2015. Learning structured output representation using deep conditional generative models. In NeurIPS, pages 3483–3491. Bin Sun, Shaoxiong Feng, Yiwei Li, Jiamou Liu, and Kan Li. 2021. Generating relevant and coherent dialogue responses using self-separated conditional variational autoencoders. In ACL/IJCNLP, pages 5624– 5637. ACL. Bin Sun, Yitong Li, Fei Mi, Weichao Wang, Yiwei Li, and Kan Li. 2023. Towards diverse, relevant and coherent open-domain dialogue generation via hybrid latent variables. In AAAI, pages 13600–13608. AAAI Press. Minghuan Tan, Yong Dai, Duyu Tang, Zhangyin Feng, Guoping Huang, Jing Jiang, Jiwei Li, and Shuming Shi. 2022. Exploring and adapting chinese GPT to pinyin input method. In ACL, pages 1899–1909. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS, pages 5998–6008. Liwei Wang, Alexander G. Schwing, and Svetlana Lazebnik. 2017. Diverse and accurate image description using a variational auto-encoder with an additive gaussian encoding space. In NIPS, pages 5756–5766. Xinchen Yan, Jimei Yang, Kihyuk Sohn, and Honglak Lee. 2016. Attribute2image: Conditional image generation from visual attributes. In ECCV, volume 9908, pages 776–791. Shaohua Yang, Hai Zhao, and Bao-liang Lu. 2012. A machine translation approach for chinese wholesentence pinyin-to-character conversion. In PACLIC, pages 333–342. Sha Yuan, Hanyu Zhao, Zhengxiao Du, Ming Ding, Xiao Liu, Yukuo Cen, Xu Zou, Zhilin Yang, and Jie Tang. 2021. Wudaocorpora: A super large-scale chinese corpora for pre-training language models. AI Open, 2:65–68. Xihu Zhang, Chu Wei, and Hai Zhao. 2017. Tracing a loose wordhood for chinese input method engine. CoRR, abs/1712.04158. Zhuosheng Zhang, Yafang Huang, and Hai Zhao. 2019. Open vocabulary learning for neural chinese pinyin IME. In ACL, pages 1584–1594. ACL. 622 Tiancheng Zhao, Ran Zhao, and Maxine Eskénazi. 2017. Learning discourse-level diversity for neural dialog models using conditional variational autoencoders. In ACL (1), pages 654–664. Yabin Zheng, Chen Li, and Maosong Sun. 2011. CHIME: an efficient error-tolerant chinese pinyin input method. In IJCAI, pages 2551–2556. IJCAI/AAAI. A Related Work A.1 Input Method Engine Input method engines (IMEs) are important tools to connect users with mobile applications, By providing an efficient and user-friendly interface, it enables users to input text with ease, thereby enhancing their overall experience with the apps. Different from alphabetic languages, the input of some Asian language (i.e. Chinese) characters must rely on the IMEs. In China, there are two common Pinyin IMEs for cellphones: the 9-key IMEs and the 26-key IMEs (Hu et al., 2022). Previous research on Chinese IMEs primarily focused on three tasks associated with the 26-key keyboard: (1) The perfect (abbreviated) pinyin to Chinese characters (PTC) task (Chen and Lee, 2000; Li et al., 2004; Zhang et al., 2017; Huang et al., 2018; Zhang et al., 2019; Tan et al., 2022). This task represents the most fundamental aspect of the Pinyin IME, revealing the core performance capabilities of the IME model. (2) The input noise correction tasks, such as input typo correction (Zheng et al., 2011; Jia and Zhao, 2014; Liu et al., 2021) and Chinese spelling check (Chiu et al., 2013; Han and Chang, 2013; Chen et al., 2013). Due to the limited screen size of mobile phones, users may accidentally press the wrong keys on a 26-key keyboard, leading to incorrect pinyin syllables being entered. For instance, while typing ‘songgei’ (送给, give), the ‘i’ might be mistakenly hit as ‘u’, resulting in ‘songgeu’. Identifying the noise caused by these accidental touches and correcting them to output what the user intended is a significant challenge. (3) The intelligent association task (Huang et al., 2015; Huang and Zhao, 2018; Ding et al., 2023). Usually Pinyin IMEs simply predict a list of character sequences for user choice only according to the pinyin input. However, Chinese inputting is a multi-turn procedure, which can be supposed to be exploited for further user experience promoting. This task is a commonly used input assistance function, which predicts possible next sentences based on the content already entered by the user for selection, to improve input efficiency. A.2 Conditional Variational Mechanism Conditional variational mechanisms (Kingma and Welling, 2014; Sohn et al., 2015; Yan et al., 2016; Bowman et al., 2016) are powerful tools in text generation task, and they are usually used in dialogue generation models. By using continuous latent variables, previous conditional variational mechanisms are introduced into dialogue generation models to tackle short, dull and general responses problem (Shen et al., 2017; Zhao et al., 2017; Chen et al., 2018; Lin et al., 2020; Fang et al., 2021; Sun et al., 2021; Chen et al., 2022; Sun et al., 2023). The conditional variational mechanism estimates the posterior probability distributions p(z|c, r) and the prior probability distribution p(z|c) of latent variable z based on the dialogue corpora, where c denotes the context, r denotes the response, and a context and a response together constitute a singleturn dialogue pair. During training, these models sample the continuous latent variable z from p(z|c, r) and maximize the conditional probability p(r|c, z) to encode context and response into latent space. Meanwhile, they also minimize the KL-divergence DKL(p(z|c, r)||p(z|c)) to bring the two distributions closer together, thus constraining the continuous latent variables z sampled from the prior distribution p(z|c) for inference. In practically, the continuous latent variables effectively help dialogue models to generate diverse responses. Nevertheless, owing to the one-to-many and many-to-one phenomena, the continuous latent variables frequently struggle to encapsulate the precise contextual semantics, leading to responses that are irrelevant and lack coherence (Sun et al., 2021). Different from the continuous latent variables, discrete latent variables are better at producing relevant and coherent responses. For example, Bao et al. (2020) uses Latent Act Recognition to model the relationship between discrete latent variables and multiple responses, and proposes Response Selection to chose the generated responses of most coherent with the context. However, owing to their limited scale, discrete latent variables might encapsulate a narrower range of features compared to their continuous counterparts. Therefore, combining continuous and discrete latent variables presents a promising direction. By doing so, the strengths of each can be harnessed and their weaknesses mitigated, allowing for a more 623 balanced approach that capitalizes on the diversity provided by continuous variables and the specificity afforded by discrete variables. This hybrid approach could potentially lead to more robust and nuanced models that better capture the complexities of the data they are designed to represent. Based on this, (Sun et al., 2023) propose a hybrid latent variable strategy and a Conditional Hybrid Variational Transformer (CHVT) for dialogue generation task. Different from the CHVT, our CV-IME focus on the pinyin-to-characters task. Owing to the pronounced alignment between the input pinyin sequences and the target character sequences within IME data, the conventional approach to information interchange employed during the construction of continuous latent variables in the CHVT framework can inadvertently overlook salient character sequence details. This oversight has the potential to compromise model performance. In response to this challenge, we introduce an innovative strategy for the formulation of continuous latent variables. This strategy is designed to intensify the interaction of information between pinyin and character sequences, consequently bolstering the efficacy of the training phase. A.3 Generation Methods Beam Search (BS), a popular breadth-first decoding method, is widely used in text generation task. Unfortunately, they inherently exhibit a deficiency in diversity, which frequently results in performance degradation within human-like contexts (Holtzman et al., 2020). Additionally, BS necessitates the computation of cumulative scores for each candidate during the decoding process, and concurrently requires the sorting and recombination of samples, thereby augmenting the computational burden. Under conditions of constrained computational resources, this may impede the realization of its advantages. To enrich the diversity of BS, stochastic decoding strategies are introduced in the generation phrase. Ancestral sampling (AS) (Bishop and Nasrabadi, 2006) is the most straightforward but less effective sampling method. Temperature sampling (Ackley et al., 1985) is an improvement of AS, which introduces temperature to shape the probability distribution. However, due to the randomness, both of them will damage the quality of generated results. To mitigate this problem, top-k (Fan et al., 2018), nucleus sampling (Holtzman et al., 2020), and locally typical (Meister et al., 2022) sampling are proposed to truncate the distributions, which aim at improving quality while preserving diversity. However, the truncation and re-scaling of probabilities also demand additional computational effort, similarly presenting challenges with respect to latency. Diverging from the aforementioned approaches, we introduce a conditional variational mechanism into the IME model, optimizing the sampling process through adjustments to the model structure, while exclusively employing greedy search to circumvent additional computational overhead. Leveraging the latent variables in sampling, CV-IME is capable of enhancing the diversity of generated results under conditions of limited latency. B Method Construction of Continuous Latent Variables. The prior and recognition network are responsible for estimating the prior and the posterior distribution of continues latent variables. We first use the Transformer encoder to encode the input sequence (x = x1, x2, · · · , xn) to obtain its final hidden state (h = h1, h2, · · · , hn) as prior memory, where n denotes the length of c. Then, we use the same encoder to encode the input and target sequence (x′ = x1, · · · , xn, · · · , xn+m) to obtain the posterior memory (h′ = h′ 1, · · · , h′ n, · · · , h′ n+m), where m means the length of r. Next, we use prior memory to recompute the posterior memory: h′ = SoftMax(h · h′T ) · h′ (2) Finally, similar with previous works (Bowman et al., 2016; Zhao et al., 2017; Shen et al., 2017) that assume z follows isotropic Gaussian distribution, we use fully-connected networks as pθ(z|c) ∼ N(µ, σ2I) and qϕ(z|c, r) ∼N(µ′, σ′2I) :  µ1, ..., µn log(σ2 1), ..., log(σ2 n)  = tanh(   h1 · · · hn  Wd)Wu  µ′ 1, ..., µ′ n log(σ′2 1 ), ..., log(σ′2 n )  =   [h1; h′ 1] · · · [hn; h′ n]  W ′ u , where W{d,u}, W ′ u are trainable parameters of prior network and recognition network. At this point we have n token-level probability distributions for n tokens in c. To take full use of these distributions, we follow the additive Gaussian mixing (Wang et al., 624 2017) to compute the sentence-level distribution: pθ(zs|c) ∼N( n X i=1 wiµi, n Y i=1 σ2wi i ) qϕ(z′ s|c, r) ∼N( n X i=1 wiµ′ i, n Y i=1 σ′2wi i ) , where zs represents the sentence-level latent variable, wi denotes the weight of the i-th distribution. Finally, we use the reparameterization trick (Kingma and Welling, 2014; Zhao et al., 2017) to obtain samples of zs either from p(zs|c, r) (training) or p(zs|c) (inference). The sentence-level latent variable zs will be used for constructing the hybrid latent variable afterwards. Construction of Hybrid Latent Variables. To build the hybrid latent variables H, during training, we first sample the z′ s from the p(z′ s|c, r) and then expanded K times that make it added to the discrete latent variables M: H =   z′ s + M[1] · · · z′ s + M[K]  , where K represents the number of discrete latent variables. Loss Function. During training, CV-IME introduce the self-separation training and aims to maximizing the variational lower bound of the conditional log likelihood (Kingma and Welling, 2014; Sohn et al., 2015; Yan et al., 2016): L(θ, ϕ, Ω, M; r, c) = K X i=1 αiEqϕ(z′s|r,c) [log p(r|[Hi; c])] −λ DKL(qϕ(z′ s|r, c)||pθ(zs|c)) αi =  1 if Ei = max(E1, · · · , EK) 0 otherwise Ei = Eqϕ(z′s|r,c)[log p(r|[Hi; c])], where θ, ϕ, ψ, Ω, M are parameters of CV-IME, and λ is the scale factor of KL divergence. Inference Phase. During inference, CV-IME use the prior distribution p(zs|c) to sample the sentence-level continuous latent variable zs and mix zs with discrete latent variables to construct hybrid latent variables. Based on the K discrete latent variables, CV-IME can directly generate K results for the same input. C Experimental Settings Benchmarks. We used two benchmarks: (1) PD benchmark, a commonly used benchmark dataset for the Chinese IME task, is extracted from the People’s Daily from 1992 to 1998 that has word segmentation annotations by Peking University. It contains 2,000 segments of consecutive Chinese characters for testing. For each test case, the input pinyin are all perfect pinyin and the context is null. (2) WD benchmarks is extracted from the WuDaoCorpora (Yuan et al., 2021) that contains 3TB Chinese corpus collected from 822 million Web pages. Tan et al. (2022) randomly select 16 domains from WuDaoCorpora, and segment those documents into sentences. For each sentence, they randomly selected a context ranging from 0-3, 4-9, and 10+ words, while continuously selecting a target of 1-3, 4-9, or 10+ words. Each context-target length tuple like (0-3, 1-3) serves as an evaluation configuration and contains 2,000 test instances. Training Data. To train our CV-IME and the base model, we built a new pinyin-to-character dataset based on the news2016 corpus5. The news2016 corpus comprises 2.5 million news articles, each containing keywords and descriptions. Initially, we extract paragraphs from the data and segment them into sentences using periods, exclamation points, and question marks as delimiters. Subsequently, we divide each sentence from the end into two parts: context and target, omitting segmentation points where the target part includes numbers or special characters. Following this, we retain the case with a probability of 50%. Subsequent to the initial processing, we incorporate the ’pypinyin’ package to construct Pinyin sequences for the target portion of the retained cases. For each case, we determine with a 50% probability whether the data will represent a ’perfect’ Pinyin or an ’abbreviated’ Pinyin. Ultimately, we retained a total of 19,330,170 training samples. Table 1 shows the statistics of our training set. Training Detail. The hidden size of all models is set to 512. The base model consists of 4 layers of encoder and 1 layer of decoder. The CV-IME employs a Transformer model with 2 encoder layers and 1 decoder layer, and additionally incorporates two fully-connected layers as a prior network. The maximum length of input sequence (pinyin + context) and result are set to 42 and 20, respectively. 5https://github.com/brightmart/nlp_chinese_corpus 625 Figure 4: Results of different encoder-decoder layers configurations over PD using 9-key IME Perfect Sample-1 Sample-2 Sample-3 Sample Aver. Time Base+Beam1 84.434%±0.00% 76.002%±0.00% 69.624%±0.00% 56.040%±0.00% 71.525% 19 CV-IME-1 83.050%±0.29% 76.536%±0.10% 70.075%±0.12% 56.040%±0.52% 71.425% 20 CV-IME-2 80.097%±0.21% 71.543%±0.29% 65.681%±0.21% 51.221%±0.43% 67.135% 19 CV-IME-3 81.465%±0.21% 73.480%±0.05% 66.917%±0.22% 53.421%±0.31% 68.821% 20 CV-IME-4 79.696%±0.17% 73.163%±0.14% 67.034%±0.40% 54.675%±0.11% 68.642% 19 Base+Beam2 90.841%±0.00% 85.471%±0.00% 79.950%±0.00% 68.053%±0.00% 81.079% 27 CV-IME 93.093%±0.15% 87.141%±0.09% 82.038%±0.13% 69.593%±0.33% 82.966% 27 Abbreviated Sample-1 Sample-2 Sample-3 Sample Aver. Time Base+Beam1 44.645%±0.00% 24.649%±0.00% 13.677%±0.00% 3.614%±0.00% 21.646% 20 CV-IME-1 45.362%±0.23% 26.703%±0.18% 15.715%±0.17% 4.367%±0.21% 23.037% 20 CV-IME-2 41.508%±0.17% 24.933%±0.17% 14.412%±0.09% 3.932%±0.09% 21.196% 20 CV-IME-3 42.809%±0.05% 26.486%±0.05% 15.230%±0.04% 3.849%±0.06% 22.094% 20 CV-IME-4 37.504%±0.31% 24.516%±0.17% 14.763%±0.15% 3.932%±0.06% 20.179% 19 Base+Beam2 55.355%±0.00% 30.962%±0.00% 18.337%±0.00% 4.618%±0.00% 27.318% 27 CV-IME 58.909%±0.15% 34.469%±0.22% 20.107%±0.31% 6.108%±0.13% 29.898% 27 Table 7: Results of different context-target length configurations over PD using 26-key IME. We set the batch sizes to 1024 and 256 for base model and CV-IME, respectively. Adam is used for optimization. The initial learning rate is set to 0.0001. We also introduce KL annealing trick to leverage the KL divergence during the training. The KL weight increases linearly from 0 to 1 in the first 3000000 batches. We train all models in 100 epochs on four A100 GPU cards with Pytorch, and save the model parameters when the validation loss reaching minimum. D Experimental Results D.1 Model Structure Selection Figure 4 elucidates the following points: (1) Merely augmenting the number of layers in the decoder significantly increases the generation latency without improving the accuracy of the generated results; (2) A model with a single-layer encoder and a sixlayer decoder exhibits a base latency of 69.62 on CPU devices, yet its accuracy is inferior to that of a model with a four-layer encoder and a single-layer decoder; (3) Increasing the number of encoder layers effectively enhances the accuracy of the generated results. Consequently, retaining a single-layer decoder offers the best cost-effectiveness in lowresource scenarios, and, where possible, augmenting the number of encoder layers under constrained conditions contributes to improved accuracy. D.2 PD Benchmark Table 7 and Table 8 show the results of different context-target length configurations over PD benchmakrs. Sample-i means that the target in the set 626 Perfect Sample-1 Sample-2 Sample-3 Sample Aver. Time Base+Beam1 72.322%±0.00% 60.521%±0.00% 50.877%±0.00% 34.257%±0.00% 54.494% 19 CV-IME-1 48.415%±0.25% 54.142%±0.38% 48.488%±0.20% 31.122%±0.61% 45.542% 20 CV-IME-2 66.867%±0.36% 58.333%±0.10% 50.192%±0.33% 32.827%±0.10% 52.055% 19 CV-IME-3 75.592%±0.10% 63.945%±0.40% 54.737%±0.25% 37.118%±0.21% 57.848% 19 CV-IME-4 56.390%±0.31% 56.446%±0.16% 49.925%±0.05% 34.103%±0.45% 49.216% 19 Base+Beam2 81.431%±0.00% 72.295%±0.00% 63.058%±0.00% 45.677%±0.00% 65.615% 27 CV-IME 83.600%±0.23% 72.495%±0.40% 63.442%±0.33% 44.686%±0.20% 66.056% 27 Abbreviated Sample-1 Sample-2 Sample-3 Sample Aver. Time Base+Beam1 27.327%±0.00% 9.469%±0.00% 3.507%±0.00% 0.251%±0.00% 10.139% 20 CV-IME-1 29.696%±0.43% 12.391%±0.13% 4.843%±0.23% 1.222%±0.03% 12.038% 20 CV-IME-2 24.975%±0.49% 9.786%±0.30% 4.225%±0.10% 0.602%±0.13% 9.897% 20 CV-IME-3 27.127%±0.28% 11.055%±0.32% 4.409%±0.05% 0.602%±0.13% 10.798% 20 CV-IME-4 25.325%±0.18% 10.387%±0.38% 4.242%±0.03% 0.937%±0.03% 10.223% 19 Base+Beam2 35.836%±0.00% 13.778%±0.00% 5.110%±0.00% 0.653%±0.00% 13.844% 27 CV-IME 39.256%±0.19% 16.433%±0.35% 6.480%±0.12% 1.573%±0.03% 15.935% 27 Table 8: Results of different context-target length configurations over PD using 9-key IME. Model Sports Journey Games Culture Base+Beam2 77.190%±0.000% 74.714%±0.000% 75.684%±0.000% 69.808%±0.000% CV-IME 80.439%±0.107% 78.730%±0.024% 78.833%±0.235% 72.760%±0.073% Model Military Real Estate Technology Finance Base+Beam2 73.007%±0.000% 77.650%±0.000% 79.677%±0.000% 79.300%±0.000% CV-IME 75.977%±0.195% 81.027%±0.107% 82.560%±0.149% 82.959%±0.154% Model Education Economy Entertainment International Base+Beam2 80.995%±0.000% 78.486%±0.000% 77.675%±0.000% 77.207%±0.000% CV-IME 83.264%±0.099% 80.802%±0.191% 79.475%±0.294% 79.803%±0.297% Model Medical Automobile Agriculture Society Base+Beam2 81.584%±0.000% 78.486%±0.000% 73.684%±0.000% 78.127%±0.000% CV-IME 84.260%±0.227% 81.393%±0.242% 78.655%±0.190% 80.666%±0.310% Table 9: Results of different domains over WD using 26-key IME perfect pinyin mode. Model Sports Journey Games Culture Base+Beam2 20.650%±0.000% 19.750%±0.000% 16.650%±0.000% 17.150%±0.000% CV-IME 22.183%±0.201% 20.867%±0.094% 20.117%±0.062% 18.967%±0.085% Model Military Real Estate Technology Finance Base+Beam2 16.150%±0.000% 20.400%±0.000% 19.450%±0.000% 21.750%±0.000% CV-IME 19.033%±0.094% 24.383%±0.024% 23.183%±0.103% 25.650%±0.212% Model Education Economy Entertainment International Base+Beam2 22.000%±0.000% 20.850%±0.000% 19.850%±0.000% 19.150%±0.000% CV-IME 24.967%±0.306% 25.317%±0.225% 23.450%±0.283% 22.300%±0.204% Model Medical Automobile Agriculture Society Base+Beam2 26.800%±0.000% 20.850%±0.000% 19.250%±0.000% 21.100%±0.000% CV-IME 30.183%±0.295% 24.017%±0.272% 23.200%±0.082% 23.517%±0.287% Table 10: Results of different domains over WD using 26-key IME abbreviated pinyin mode. contains i tokens, and Sample means that the number of tokens in the context in this data set is 0, and all tokens are in the target. CV-IME-i means the results of CV-IME using i-th hybrid latent variable. Base+Beam represents the results of base model with beam search. In the tables presented, we observe that our model outperforms across nearly all configurations of context-target lengths. D.3 WD Benchmark We conducted experiments on the WD dataset across different domains and with various contexttarget length configurations. Table 9, Table 10, Table 11 and Table 12 report the results of different domains over WD. From these tables, it can be observed that: (1) Our CV627 Model Sports Journey Games Culture Base+Beam2 59.927%±0.000% 56.542%±0.000% 57.976%±0.000% 52.356%±0.000% CV-IME 62.122%±0.065% 61.025%±0.065% 60.317%±0.175% 57.034%±0.088% Model Military Real Estate Technology Finance Base+Beam2 54.299%±0.000% 60.940%±0.000% 61.907%±0.000% 62.990%±0.000% CV-IME 56.609%±0.130% 65.762%±0.065% 64.947%±0.089% 65.656%±0.043% Model Education Economy Entertainment International Base+Beam2 63.874%±0.000% 63.764%±0.000% 60.545%±0.000% 60.177%±0.000% CV-IME 67.138%±0.193% 66.269%±0.129% 62.483%±0.198% 63.517%±0.107% Model Medical Automobile Agriculture Society Base+Beam2 67.471%±0.000% 59.634%±0.000% 57.018%±0.000% 59.032%±0.000% CV-IME 70.497%±0.089% 63.708%±0.259% 61.937%±0.135% 63.287%±0.064% Table 11: Results of different domains over WD using 9-key IME perfect pinyin mode. Model Sports Journey Games Culture Base+Beam2 8.600%±0.000% 7.300%±0.000% 6.600%±0.000% 6.750%±0.000% CV-IME 9.633%±0.047% 9.633%±0.103% 8.633%±0.094% 8.400%±0.147% Model Military Real Estate Technology Finance Base+Beam2 7.000%±0.000% 8.600%±0.000% 8.200%±0.000% 9.650%±0.000% CV-IME 8.183%±0.085% 12.150%±0.147% 10.533%±0.024% 11.417%±0.125% Model Education Economy Entertainment International Base+Beam2 8.650%±0.000% 9.050%±0.000% 7.550%±0.000% 7.850%±0.000% CV-IME 10.517%±0.062% 11.700%±0.122% 9.267%±0.165% 11.400%±0.082% Model Medical Automobile Agriculture Society Base+Beam2 11.700%±0.000% 9.250%±0.000% 8.600%±0.000% 8.650%±0.000% CV-IME 13.933%±0.170% 10.333%±0.155% 9.350%±0.071% 9.883%±0.094% Table 12: Results of different domains over WD using 9-key IME abbreviated pinyin mode. IME model consistently outperforms the baseline model across various domains. (2) The improvement ratio varies across different domains, ranging from a minimum of 0.75 points to a maximum of 4.97 points. We hypothesize that this variability may be attributed to the fact that the training data is extracted from news data, which differs in domain information from the various domains in WD. Table 13 presents the results of different contexttarget length configurations over WD. The data from the tables indicate the following observations: (1) CV-IME model achieves superior performance over the baseline in most configurations; (2) As the length of the target increases, the difficulty of achieving an exact match between the generated results and the ground truth progressively rises; (3) Extending the length of the context portion effectively enhances the accuracy of the generated outcomes; (4) CV-IME model exhibits improved performance in the target length phases of 0-3 and 4-9, yet its performance diminishes in scenarios where the target length exceeds 10. This may be attributed to the diversity introduced by latent variables, which leads to a discrepancy between the generated content and the ground-truth. D.4 Case Study Table 14 presents examples of perfect Pinyin mode in 9-key IME. From the table, it can be observed that our CV-IME can recall more diverse and accurate results within similar generation time constraints. However, since CV-IME utilizes unsupervised training of latent variables, the 4 generated results from CV-IME only represent the corresponding mixed latent variables and do not imply the priority of the results. Thus, identifying a timeefficient sorting method is our future research. 628 26-key Perfect model 0-3 4-9 10+ 0-3 Base+Beam2 77.082%±0.00% 61.814%±0.00% 35.983%±0.00% CV-IME 79.354%±0.19% 62.709%±0.20% 35.556%±0.61% 4-9 Base+Beam2 80.541%±0.00% 65.011%±0.00% 35.520%±0.00% CV-IME 84.136%±0.16% 67.356%±0.17% 37.841%±0.63% 10+ Base+Beam2 82.148%±0.00% 68.041%±0.00% 38.764%±0.00% CV-IME 85.800%±0.11% 69.952%±0.16% 39.700%±0.73% 26-key Abbreviated model 0-3 4-9 10+ 0-3 Base+Beam2 19.744%±0.00% 4.497%±0.00% 0.363%±0.00% CV-IME 21.653%±0.20% 4.507%±0.11% 0.308%±0.03% 4-9 Base+Beam2 26.144% ±0.00% 6.188%±0.00% 0.457%±0.00% CV-IME 30.340%±0.22% 7.105%±0.12% 0.451%±0.02% 10+ Base+Beam2 28.278%±0.00% 6.897%±0.00% 0.463%±0.00% CV-IME 32.977%±0.17% 8.064%±0.10% 0.487%±0.02% 9-key Perfect model 0-3 4-9 10+ 0-3 Base+Beam2 60.258%±0.00% 39.708%±0.00% 15.755%±0.00% CV-IME 62.053%±0.19% 39.197%±0.23% 14.759%±0.39% 4-9 Base+Beam2 64.176%±0.00% 42.432%±0.00% 15.764%±0.00% CV-IME 69.329%±0.14% 44.543%±0.13% 16.488%±0.45% 10+ Base+Beam2 67.647%±0.00% 45.197%±0.00% 16.923%±0.00% CV-IME 71.851%±0.18% 47.567%±0.16% 17.588%±0.49% 9-key Abbreviated model 0-3 4-9 10+ 0-3 Base+Beam2 7.463%±0.00% 0.606%±0.00% 0.000%±0.00% CV-IME 8.932%±0.11% 0.690%±0.04% 0.002%±0.00% 4-9 Base+Beam2 12.194% ±0.00% 1.022%±0.00% 0.009%±0.00% CV-IME 14.793%±0.16% 1.251%±0.05% 0.019%±0.00% 10+ Base+Beam2 13.253%±0.00% 1.125%±0.00% 0.006%±0.00% CV-IME 16.345%±0.12% 1.447%±0.04% 0.008%±0.00% Table 13: Results of different context-target length configuration over WD. Each score is averaged over all domains. id Case Predictions 1 Context 经常有这样的 Base+Beam CV-IME Pinyin 8432 1. 体罚 1. 同行对比(peer comparison) Abbreviated No (physical punishment) 2. 提法(statement) Target 提法 2. 提法 3. 体罚(physical punishment) Translation There is often such a (statement) 4. 体会答案 statement (experience solution) 2 Context Base+Beam CV-IME Pinyin 94 1. 恣(wantonly) 1. 中国(China) Abbreviated No 2. 一(one) 2. 优惠(discount) Target 以 3. 以(with) Translation with 4. 香菇(mushroom) 3 Context Base+Beam CV-IME Pinyin 9824364 1. 组成(composition) 1. 五成(fifty percent) Abbreviated No 2. 无成(no success) 2. 组成(composition) Target 组成 3. 禹城(Yucheng) Translation composition 4. 吴城(Wucheng) Table 14: Case study. 629
2024_short
56
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 630–639 August 11-16, 2024 ©2024 Association for Computational Linguistics Consistency Training by Synthetic Question Generation for Conversational Question Answering Hamed Hematian Hemati and Hamid Beigy AI Group, Computer Engineering Department, Sharif University of Technology [email protected], [email protected] Abstract Efficiently modeling historical information is a critical component in addressing user queries within a conversational question-answering (QA) context, as historical context plays a vital role in clarifying the user’s questions. However, irrelevant history induces noise in the reasoning process, especially for those questions with a considerable historical context. In our novel model-agnostic approach, referred to as CoTaH (Consistency-Trained augmented History), we augment the historical information with synthetic questions and subsequently employ consistency training to train a model that utilizes both real and augmented historical data to implicitly make the reasoning robust to irrelevant history. To the best of our knowledge, this is the first instance of research using synthetic question generation as a form of data augmentation to model conversational QA settings. By citing a common modeling error prevalent in previous research, we introduce a new baseline and compare our model’s performance against it, demonstrating an improvement in results, particularly in later turns of the conversation, when dealing with questions that include a large historical context. 1 Introduction Humans often seek data through an informationseeking process in which users engage in multiple interactions with machines to acquire information about a particular concept. A prominent example of this phenomenon is the introduction of ChatGPT (OpenAI, 2023). Conversational QuestionAnswering (CQA) systems address user questions within the context of information-seeking interactions. In CQA, unlike conventional question answering, questions are interconnected, relying on previous questions and their corresponding answers (history) to be fully understood without ambiguities. Qiu et al. (2021) showed that filtering irrelevant history can boost the model’s accuracy. However, it utilizes the gold answers of history instead of the predicted ones, like many previous methods. This setting deviates from the real-world scenario, where models have to rely on their own predictions for previous questions to answer the current question. Our work aligns with the framework of addressing irrelevant history. However, unlike Qiu et al. (2021), our method abstains from utilizing the gold answers of history. Moreover, unlike Qiu et al. (2021), which requires an iterative process to select relevant history, we utilize only one transformer (Vaswani et al., 2017) during prediction, resulting in reduced time and memory. We augment the history of questions in the training set with synthetic questions. Our underlying idea is to maintain the model’s consistency in its reasoning, whether utilizing the original historical data or the augmented version. Baselines like BERT-HAE (Qu et al., 2019a), HAM (Qu et al., 2019b), and GraphFlow (Chen et al., 2020) leverage the gold answers of history in their modeling. Siblini et al. (2021) conducted a re-implementation of BERT-HAE and HAM, and Li et al. (2022) conducted a re-implementation of HAM and GraphFlow using predicted history answers, which resulted in a significant performance decrease. As a result, in this paper, we employ the base transformer of our method as the baseline, as its performance surpasses the re-implementation of the mentioned methods. Our method results in a 1.8% upgrade in overall F1 score compared to this baseline, causing a significant improvement in the scores of questions in the later turns (questions with large historical context). Furthermore, our method introduces a substantial improvement in detecting unanswerable questions compared to the introduced baseline. 2 Related Works The task of CQA has been introduced to extend question answering to a conversational setting. 630 CoQA (Reddy et al., 2019) and QuAC (Choi et al., 2018) have been proposed as two extractive datasets in the CQA task. BERT-HAE (Qu et al., 2019a) employs a manually defined embedding layer to annotate tokens from previous answers within the document, and Qu et al. (2019b) extends this approach by introducing an ordering to these annotations. GraphFlow (Chen et al., 2020) utilizes a graph made out of document tokens to tackle the problem. FlowQA (Huang et al., 2019) utilizes multiple blocks of Flow and Context Integration to facilitate the transfer of information between the context, the question, and the history. ExCorD (Kim et al., 2021) uses consistency regularization (Laine and Aila, 2017; Xie et al., 2020) to regularize the training by leveraging re-written questions. Qiu et al. (2021) introduces the idea of irrelevant history and its effect on degrading performance, proposing a policy network to select the relevant history before reasoning. However, the mentioned models employ the gold answers from history in their modeling. This approach deviates from real-world scenarios, where systems should rely on their previous predictions to answer current questions (Siblini et al., 2021). Siblini et al. (2021) re-implements BERT-HAE and HAM, and Li et al. (2022) re-implements HAM, GraphFlow, and ExCorD using the model’s predictions, reporting a sharp decrease in performance. FlowQA experiences a performance drop from 64.6% to 59.0% on the development set when gold answers in history are not used (Huang et al., 2019). 3 Problem Definition To model a CQA setting, at dialog turn k, a model receives a question (qk), a document containing the answer (D), and the history of the question (Hk), which is represented as a set of tuples, such as Hk = {(q0, apred 0 ), · · · , (qk−1, apred k−1 )}, where apred j is the model’s prediction for qj. It’s important to note that the model may utilize only some of this information. For instance, we only employ history questions while excluding history answers. The objective is to predict the answer apred k for qk. apred k = arg max ak P(ak|qk, Hk, D) (1) 4 Methodology We seek to make the reasoning robust to irrelevant history implicitly by augmenting the dataset. To this end, for question qk, we augment its history by injecting some synthetic questions. Let H⋆ k be the augmented history. The intuition is that irrespective of whether the reasoning is performed with Hk or H⋆ k, the result should be the same. In other words: P(ak|qk, Hk, D) = P(ak|qk, H⋆ k, D) (2) To achieve this goal, we establish a two-stage pipeline. Our pipeline consists of a history augmentation module, whose goal is to augment the history and a question-answering module, whose objective is to consistently train a QA network so that the reasoning is consistent. The overall architecture of our model is depicted in Figure 1. 4.1 History Augmentation Module This module includes a conversational question generator, denoted as CQGθ, where θ represents the parameter set of the generator, and a question selector, denoted as QS, which is responsible for choosing a set of S synthetic questions generated to augment the history. Training The first step involves training CQGθ. While there has been research aimed at generating conversational questions (Gu et al., 2021; Pan et al., 2019), for the sake of simplifying the implementation, we employ a straightforward generative transformer for this task. To train this network, we input D, Hk, and ak into the network, intending to generate qk. We train this network using cross-entropy loss in an auto-regressive manner. Question Generation After training CQGθ, we aim to generate synthetic conversational questions for the training set. Suppose that we want to generate synthetic conversational questions for qk. We iteratively generate synthetic questions between qj and qj+1 for 1 ≤j ≤k −1. Suppose that aj is located in the i-th sentence of the document. We extract noun phrases from sentences i−1, i, and i+1 as potential answers. We make this choice because we want these answers to be similar to the flow of conversation, and if these answers are extracted from local regions, the likelihood increases. Let one of these answers be called asyn. We feed D, Hj+1 (all the questions and answers before asyn), and asyn to CQGθ to obtain the synthetic question of qsyn. We refer to all generated synthetic questions and real questions of history as the pool of questions (Pk) for qk. 631 Figure 1: Architecture of the Model: For a given question qk, the conversational question generator CQGθ constructs a pool of questions denoted as Pk. Questions in Hk are shown in blue. The synthetic questions are depicted in red and green: those similar to Hk questions are in red, and the dissimilar ones are in green. The question selector QS selects M questions with the highest scores, discards red questions, and chooses S = 3 synthetic questions from the green questions according to uniform distribution, along with Hk questions, to create H⋆ k. The QA network QAθ′ computes its output using both Hk and H⋆ k as input. The QA network is trained by minimizing the cross-entropy loss (LCE) and consistency loss (LCons). qk and Hk are from the QuAC dataset. Question Filtering & Injection We could set Pk as H⋆ k; however, Pk contains a multitude of synthetic questions which induces too much noise. Additionally, in the consistency training setting, the noise (perturbation) should be small. Thus, we only select S of synthetic questions from Pk, where S is a hyperparameter. Not all synthetic questions are helpful, necessitating the need to filter out degenerate ones. We want our selected synthetic questions to be similar and relevant to the trend of the conversation. To this end, we compute a score for each synthetic question and only keep the top M synthetic questions with the highest score. To compute the score, each question (real or synthetic) is encoded with LaBSE (Feng et al., 2022). For each synthetic question qsyn which is located between history turns qj and qj+1, the score is computed as Sim(h(qj), h(qsyn))+Sim(h(qj+1), h(qsyn)), where Sim is the cosine similarity function and h(x) is the LaBSE’s encoding of the sentence x. Additionally, sometimes, we generate questions that are too similar to previous or future questions, which are invaluable. Thus, we compare the similarity of the generated question qsyn with questions in {qk} S Hk and if the similarity is above γ, qsyn is discarded. This situation is depicted in Figure 1, where Pk contains real history questions, depicted in blue, and synthetic questions, depicted in red and green. Those synthetic questions that have high similarity with {qk} S Hk are depicted in red. As it can be seen, the two questions “Did she have any children” and “How many children did they have” have high similarity with the question “Did they have children”, and thus, they’re discarded. In addition, we need to set a distribution to guide the selection of S number of generated questions. We conduct experiments using two distributions: uniform and linear. In the uniform setting, the generated questions are selected with the same probability. For the linear, if qsyn is located between qj and qj+1, its probability of being selected (P(qsyn)) is P(qsyn) ∝j. We opt for the linear distribution, as we believe that closer synthetic questions to the original question might contribute to greater robustness, as questions that are further away are likely less relevant. 4.2 Question Answering Module For each question qk, as illustrated in Figure 1, we feed qk, Hk, and D to the QA network (QAθ′) to compute the answer distribution. In parallel, we feed qk, H⋆ k, and D to the QA network to compute another answer distribution. As mentioned in Section 4, we need to impose the condition outlined in Equation (2). To achieve this, we employ KL-Divergence between the answer distributions. 632 Additionally, we use cross-entropy loss to train the QA network for answer prediction. The losses are calculated as per Equation (3), where LCE, LCons, and LT represent the cross-entropy loss, consistency loss, and total loss. λ is a hyperparameter used to determine the ratio of the two losses. LCE = CE(QAθ′(qk, Hk, D), agold k ) LCons = DKL(QAθ′(qk, Hk, D), (3) QAθ′(qk, H⋆ k, D)) LT = LCE + λLCons Furthermore, we acknowledge that augmenting the history for all questions may not be optimal, as initial questions in a dialog, due to their little historical context, may not require augmentation for robust reasoning. In this case augmenting their history might add unnecessary noise, potentially degrading performance. Thus, we introduce a threshold named τ and only augment the history of qk if k ≥τ. According to Miyato et al. (2019), we only pass the gradients through one network. As shown in the Figure 1, the symbol × is used to denote gradient cut. It should be noted that our method is model-agnostic, and any architecture could be used as the QA network. 5 Setup We utilize the QuAC dataset (Choi et al., 2018), to conduct our experiments on, and data splitting is described in A. We utilize BERT (Devlin et al., 2019) as our base model to conduct experiments following the previous research. For question generation, we adopt Bart-Large (Lewis et al., 2020). Following Choi et al. (2018), we use F1, HEQ-Q, and HEQ-D as our evaluation metrics. F1 measures the overlap between agold k and apred k . HEQ-Q and HEQ-D are the ratio of questions and dialogs, for which the model performs better than human (Choi et al., 2018). We run multiple experiments to choose the best set of hyperparameters, resulting in setting S = 2, λ = 2.0, and τ = 6. In Appendix C, the process of choosing all hyperparameters and their analysis is described. For all of our models, we concatenate the question with history questions, feeding them to the network. More details on reproducibility are presented in Appendix E. 6 Results 6.1 Question Generation Results The results of question generation are evaluated in Table 1. These scores are obtained from the dev data. Bleu-1,4 (Papineni et al., 2002), Rouge-L (Lin, 2004), and BERTScore (Zhang et al., 2020) are used for criteria. We use the evaluate library1 to implement these metrics. Find more details in Appendix B. Table 1: Question generation results on the dev set. Bleu-1 Bleu-4 Rouge-L BERTScore 33.6 9.5 29.0 90.5 6.2 Baselines Performance Table 2 shows the results of our experiments in comparison to other baselines. As stated before, BERT-HAE, HAM, and GraphFlow leverage the gold answers of history. BERT-HAE reimplementation by Siblini et al. (2021), and those of HAM and GraphFlow by Li et al. (2022) are shown in the table as BERT-HAE-Real, HAM-Real, and GraphFlow-Real, respectively, indicating a significant drop in performance. 2 In this scenario, where common baselines experience a substantial decrease, we use a basic BERT model with history concatenation as the baseline, as its performance is superior. We include the results of the reinforced history backtracking model (Qiu et al., 2021) in the table. Since this model’s code is not publicly available, we have been unable to re-implement it with the correct settings and perform a meaningful comparison. However, it’s worth noting that this model utilizes unrealistic settings in two stages: once for history selection and once for question answering, potentially exacerbating the modeling issues even further. We have used “Unrealistic Settings” as a term to indicate that a method uses gold answers from history in its modeling. 6.3 CoTaH Results Analysis In Table 2, CoTaH-BERT outperforms BERT (Baseline) by 1.8% in the F1 score3. According to Figure 2 in Appendix D, this improvement is mostly due to an improvement in the performance of questions with a large amount of history. This 1https://github.com/huggingface/evaluate 2For a fair comparison, the ExCorD (Kim et al., 2021) model result is not included in this table, as its best-performing model by Kim et al. (2021) and the re-implementation by Li et al. (2022) use RoBERTa (Liu et al., 2019). 3It should be noted that our test set for BERT (Baseline) and CoTaH-BERT is different from previous methods, but it has been drawn from the same distribution. 633 Table 2: Comparison of our methods with other benchmarks on the test set. Hist.: History. Model Name F1 HEQ-Q HEQ-D Unrealistic Settings GraphFlow-Real (Li et al., 2022) 49.6 BERT-HAE-Real (Siblini et al., 2021) 53.5 HAM-Real (Li et al., 2022) 57.2 BERT (Baseline) 58.9 52.9 5.3 CoTaH-BERT 60.7 55.3 5.9 BERT-HAE (Qu et al., 2019a) 62.4 57.8 5.1 ✓ HAM (Qu et al., 2019b) 64.4 60.2 6.1 ✓ GraphFlow (Chen et al., 2020) 64.9 60.3 5.1 ✓ Reinforced Hist. Backtracking (Qiu et al., 2021) 66.1 62.2 7.3 ✓ confirms that our intuition is valid that our method enhances the base model’s ability to answer questions with a large historical context. Moreover, while BERT-HAE outperforms CoTaH-BERT in terms of F1 score, CoTaH-BERT exhibits superior performance in HEQ-D. This highlights the better consistency of our model to maintain its performance throughout the entire dialog, which is achieved through superiority in answering the questions in the later turns. Table 3: Unanswerable accuracy on the test set. Unanswerable Accuracy BERT (Baseline) 61.9 CoTaH-BERT 68.6 Avoiding answering unanswerable questions is an indication of language understanding (Zhu et al., 2019). Table 3 shows that CoTaH-BERT brings a considerable improvement in terms of detecting unanswerable questions. 6.4 Ablation Study Table 4 demonstrates the effectiveness of using the threshold (τ) in enhancing the model capability, with more details provided in Appendix C. Moreover, the table indicates that question filtering has a tangible effect on improving performance by filtering out degenerate questions with high similarity. Lastly, we observe that using a uniform distribution is more advantageous than a linear one for question selection. We observe a relatively 1% drop in both F1 and HEQ-Q scores with the linear distribution, concluding that our hypothesis has not been true regarding the greater robustness that the linear distribution might pose. We suspect that since the linear distribution picks more synthetic questions near the original question, it undermines the importance of immediate history, which is potentially more important than distant history, causing the consistency loss to act as a misleader instead of a regularizer in some cases. Table 4: The effect of threshold, question filtering, and question selection distribution type on the dev set. QS Dist.: Question Selection Distribution. CoTaH-BERT F1 HEQ-Q HEQ-D w/o Threshold 59.4 54.8 5.1 w/ Threshold 59.9 55.2 5.5 w/o Question Filtering 59.9 55.2 5.5 w/ Question Filtering 60.9 56.3 5.3 w/ Linear QS Dist. 59.9 55.2 5.9 w/ Uniform QS Dist. 60.9 56.3 5.3 7 Conclusions In this paper, we introduced a novel model-agnostic method to make the reasoning of conversational question-answering models robust to irrelevant history. We coped with this issue by augmenting the history and training the model with consistency training. In our experiments, we didn’t follow the wrong modeling of past research in using the gold answers of history. We examined our method with BERT which exhibited a 1.8% performance boost compared to the baseline model. It was demonstrated that this improvement is primarily attributed to the enhancement of the model’s performance on questions with a substantial historical context, suggesting that our method has been successful in making the reasoning robust for these questions. 634 8 Limitations Our model requires a phase of question generation. For synthetic question generation, the history augmentation module could be slow and the speed is directly correlated to the number of questions that one opts to generate. However, question generation is trained only once and all questions are generated in a single run, and all other experiments are conducted by only training the QA module. Moreover, although our model doesn’t need any further computation during evaluation than merely running the QA network, we need two forward passes during the training phase, which makes the training of the QA network a bit more time-consuming than training the baseline model. We have used only the QuAC dataset to report our experiments. This choice was made so that we are able to compare our results with other research, such as Qu et al. (2019a), Qu et al. (2019b), Siblini et al. (2021), and Li et al. (2022), which only use QuAC for their experiments. Thus, other datasets, such as CoQA (Reddy et al., 2019), are not tested in our research. Lastly, our research does not cover experiments on high-performing large language models, like ChatGPT. Brown et al. (2020) reports the results on the QuAC, using GPT-3 (Brown et al., 2020) in zero-shot, one-shot, and few-shot manners. However, these results are substantially inferior compared to other fine-tuning-based models that are mentioned in Table 2. Therefore, further experiments on ChatGPT and other state-of-the-art large language models are needed to better determine the placement of CoTaH and previous baselines in terms of performance. References Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. CoRR, abs/2005.14165. Yu Chen, Lingfei Wu, and Mohammed J. Zaki. 2020. Graphflow: Exploiting conversation flow with graph neural networks for conversational machine comprehension. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI 2020, pages 1230–1236. ijcai.org. Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wentau Yih, Yejin Choi, Percy Liang, and Luke Zettlemoyer. 2018. Quac: Question answering in context. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 2174–2184. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171–4186. Association for Computational Linguistics. Fangxiaoyu Feng, Yinfei Yang, Daniel Cer, Naveen Arivazhagan, and Wei Wang. 2022. Languageagnostic BERT sentence embedding. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 878–891. Association for Computational Linguistics. Jing Gu, Mostafa Mirshekari, Zhou Yu, and Aaron Sisto. 2021. Chaincqg: Flow-aware conversational question generation. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, EACL 2021, Online, April 19 - 23, 2021, pages 2061–2070. Association for Computational Linguistics. Hsin-Yuan Huang, Eunsol Choi, and Wen-tau Yih. 2019. Flowqa: Grasping flow in history for conversational machine comprehension. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net. Gangwoo Kim, Hyunjae Kim, Jungsoo Park, and Jaewoo Kang. 2021. Learn to resolve conversational dependency: A consistency training framework for conversational question answering. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 6130– 6141. Association for Computational Linguistics. Samuli Laine and Timo Aila. 2017. Temporal ensembling for semi-supervised learning. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer 635 Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 7871–7880. Association for Computational Linguistics. Huihan Li, Tianyu Gao, Manan Goenka, and Danqi Chen. 2022. Ditch the gold standard: Re-evaluating conversational question answering. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 8074–8085. Association for Computational Linguistics. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74–81. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692. Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, and Shin Ishii. 2019. Virtual adversarial training: A regularization method for supervised and semisupervised learning. IEEE Trans. Pattern Anal. Mach. Intell., 41(8):1979–1993. OpenAI. 2023. GPT-4 technical report. CoRR, abs/2303.08774. Boyuan Pan, Hao Li, Ziyu Yao, Deng Cai, and Huan Sun. 2019. Reinforced dynamic reasoning for conversational question generation. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 2114–2124. Association for Computational Linguistics. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, July 6-12, 2002, Philadelphia, PA, USA, pages 311–318. ACL. Minghui Qiu, Xinjing Huang, Cen Chen, Feng Ji, Chen Qu, Wei Wei, Jun Huang, and Yin Zhang. 2021. Reinforced history backtracking for conversational question answering. In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, ThirtyThird Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021, pages 13718–13726. AAAI Press. Chen Qu, Liu Yang, Minghui Qiu, W. Bruce Croft, Yongfeng Zhang, and Mohit Iyyer. 2019a. BERT with history answer embedding for conversational question answering. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2019, Paris, France, July 21-25, 2019, pages 1133– 1136. ACM. Chen Qu, Liu Yang, Minghui Qiu, Yongfeng Zhang, Cen Chen, W. Bruce Croft, and Mohit Iyyer. 2019b. Attentive history selection for conversational question answering. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, CIKM 2019, Beijing, China, November 3-7, 2019, pages 1391–1400. ACM. Siva Reddy, Danqi Chen, and Christopher D. Manning. 2019. Coqa: A conversational question answering challenge. Trans. Assoc. Comput. Linguistics, 7:249–266. Wissam Siblini, Baris Sayil, and Yacine Kessaci. 2021. Towards a more robust evaluation for conversational question answering. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP, Virtual Event, pages 1028–1034. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30. Qizhe Xie, Zihang Dai, Eduard H. Hovy, Thang Luong, and Quoc Le. 2020. Unsupervised data augmentation for consistency training. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. Bertscore: Evaluating text generation with BERT. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net. Haichao Zhu, Li Dong, Furu Wei, Wenhui Wang, Bing Qin, and Ting Liu. 2019. Learning to ask unanswerable questions for machine reading comprehension. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 4238–4248. Association for Computational Linguistics. A Data Splitting Since the test set of QuAC is not publicly available, we divide the development (dev) set into dev/test sets randomly, such that the number of questions in dev and test sets is almost equal. The total number 636 of dev and test questions is 3678 and 3676, respectively, after splitting. In our splitting, each dialog, with all of its questions, is either attributed to the dev set or the test set, in order to prevent test data leakage. Further, according to Choi et al. (2018), the original dev set of QuAC contains unique documents, meaning that a single document will not be shared among the final dev and test sets, potentially preventing test data leakage. B Question Generation Considerations Gu et al. (2021) reports better results for the question generation, yet we didn’t aim to optimize BartLarge meticulously as the generated questions have a good quality for our task. The point is that in this research, we only utilize questions alone without considering answers. Thus, if the generated questions have less correlation with answers, it’s tolerable as they are still relevant questions considering the overall flow of the conversation. It should be noted that if a future research wants to incorporate predicted answers into its modeling, it should be more cautious about the quality of the question generation to ensure that the right synthetic questions are generated concerning their answers. Moreover, it should be noted that while it is true we use gold answers from history in the training of CQGθ, this does not threaten the realism of our model. The point is that only the training set of QuAC is used to train CQGθ, and later, the history of the training set is augmented for the use of the QA network. On the other hand, we never augment the history of the dev and test sets for the use of the QA network. C Hyperparameter Selection & Sensitivity Analysis Initially, we determine M and γ by assessing some examples of the training data, setting M = 10 and γ = 0.8 based on our appraisal. Next, we determine the values of S, λ, and τ by conducting experiments on the dev set. In Table 5, we evaluate the effects of the model’s two main hyperparameters, S and λ, through a grid search with the following values: S ∈{1, 2, 3} and λ ∈{1.0, 1.5, 2.0}. Firstly, it is evident that the model performs better when S ∈{1, 2} compared to when S = 3 overall. This suggests that S = 3 introduces too much noise, which could be detrimental to performance. Furthermore, when λ ∈{1.5, 2.0}, the performance is better compared to λ = 1.0, indicating that the introduction of λ is helpful, as simply adding LCE and LKL (or equally setting λ = 1.0) produces inferior performance. For the remaining experiments, we set S = 2 and λ = 2.0 as these settings yield the best F1 and HEQ-Q scores. Table 5: The effect of S and λ on the dev set. F1 HEQ-Q HEQ-D λ = 1.0 58.6 53.5 4.8 S = 1 λ = 1.5 59.1 54.8 5.5 λ = 2.0 59.0 54.2 4.4 λ = 1.0 57.9 52.7 4.0 S = 2 λ = 1.5 58.2 53.5 4.2 λ = 2.0 59.4 54.8 5.1 λ = 1.0 58.3 53.5 5.1 S = 3 λ = 1.5 58.6 53.5 5.0 λ = 2.0 58.8 54.1 4.2 After setting the right amount for S and λ, we opt to examine whether the introduction of the threshold (τ) is effective. Thus, we conduct experiments on three different amounts of this hyperparameter. In Table 6, it’s evident that the right amount of τ has a considerable effect on the performance, confirming our intuition about the functionality of τ. For all tested values of τ within the set {5, 6, 7}, performance has increased compared to the base settings with τ = 0 (or equivalently, using no threshold). Notably, the maximum performance improvement is observed when τ = 6. Table 6: The effect of τ on the dev set F1 HEQ-Q HEQ-D τ = 0 59.4 54.8 5.1 τ = 5 59.6 55.2 5.5 τ = 6 59.9 55.2 5.5 τ = 7 59.5 54.9 5.1 D Additional Results In Figure 2, a comparison between the F1 scores of questions for each turn in BERT and CoTaH-BERT on the test set is presented. The score for the k-th turn represents the average F1 score for all questions in the k-th turn across all dialogs in the test set. Questions with a considerable amount of historical context are answered more effectively with our method. For 0 ≤k ≤1, the performances of both 637 BERT and CoTaH-BERT are nearly equal, which is sensible as these questions contain little historical context and thus have little irrelevant history. However, for most of k > 1 dialog turns, CoTaH-BERT outperforms BERT or it has on par performance with BERT. The performance upgrade is especially evident towards the end of dialogs, where questions contain significant historical context. This finding indicates the superiority of CoTaH-BERT over BERT in establishing greater robustness in answering these questions, by identifying and ignoring the irrelevant history turns. Figure 2: The F1 score of the test set dialog turns A case study regarding the performance comparison of CoTaH-BERT and BERT for a question from QuAC dataset (Choi et al., 2018) with a large history is provided in Appendix F. E Reproducibility The seed for all experiments, except the training of CQGθ, is 1000. All of the experiments to train the QAθ′ are conducted on a single RTX 3070 Ti with 8GB memory, on which each experiment takes approximately 6 hours. CQGθ is trained on a single Tesla T4 from Google Colab. For each model, BERT or CoTaH-BERT, the hyperparameters are optimized on the dev set, and a final model will be trained on the train set with the optimized hyperparameters. Subsequently, a single result on the test set will be reported as depicted in Table 2. The source code can be found on our GitHub page.4 F Case Study In Figure 3, a document sample with its corresponding dialog in the dev set is depicted. In the figure, 4https://github.com/HamedHematian/SynCQG the ninth turn question, q9, with its history, H9, are shown. The answers of BERT and CoTaH-BERT to q9 are compared, showing that CoTaH-BERT has been successful in answering this question with a full F1 score, while BERT has been unsuccessful. q9 asks about the release date of the album stated in q2. This is a suitable sample for our context, as there are significant irrelevant history turns between q9 and q2. We observe that CoTaH-BERT has been successful in identifying the relevant history by answering the question correctly. However, the BERT model has mistakenly reported another date, which is wrong. As BERT has returned a span containing the word “mixing”, it’s possible that BERT has incorrectly identified the previous turn question, q8, as relevant and has returned a span by text matching encompassing the word “mixing”, and containing merely some random dates. 638 Figure 3: A comparison between BERT and CoTaH-BERT extracted answers to a question, showing that CoTaHBERT has been able to successfully ignore the irrelevant history by extracting the correct answer. However, the BERT model has been confused and returned a wrong answer. The dialog and the document are presented from the QuAC dataset (Choi et al., 2018). 639
2024_short
57
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 640–649 August 11-16, 2024 ©2024 Association for Computational Linguistics How Good is Zero-Shot MT Evaluation for Low Resource Indian Languages? Anushka Singh1,2 Ananya B. Sai1,2 Raj Dabre1,2,3,6 Ratish Puduppully4 Anoop Kunchukuttan1,2,5 Mitesh M. Khapra1,2 1Nilekani Centre at AI4Bharat 2Indian Institute of Technology Madras, India 3National Institute of Information and Communications Technology, Kyoto, Japan 4 Institute for Infocomm Research (I2R), A*STAR, Singapore 5Microsoft, India 6Indian Institute of Technology Bombay, India Abstract While machine translation evaluation has been studied primarily for high-resource languages, there has been a recent interest in evaluation for low-resource languages due to the increasing availability of data and models. In this paper, we focus on a zero-shot evaluation setting focusing on low-resource Indian languages, namely Assamese, Kannada, Maithili, and Punjabi. We collect sufficient Multi-Dimensional Quality Metrics (MQM) and Direct Assessment (DA) annotations to create test sets and metaevaluate a plethora of automatic evaluation metrics. We observe that even for learned metrics, which are known to exhibit zero-shot performance, the Kendall Tau and Pearson correlations with human annotations are only as high as 0.32 and 0.45. Synthetic data approaches show mixed results and overall do not help close the gap by much for these languages. This indicates that there is still a long way to go for low-resource evaluation. The dataset and evaluation metrics are publicly accessible online.1 1 Introduction While there has been a meteoric rise in the amount of data and improvements in architectures for machine translation (MT) models (Gala et al., 2023; Costa-jussà et al., 2024), in order to scientifically establish whether the translation quality has improved, it is important to have reliable evaluation metrics. However, most of the evaluation metrics were developed with English and a few select other languages in mind. It has been shown that such metrics do not necessarily generalize to other languages and have to be separately meta-evaluated (Sai B et al., 2023; Rivera-Trigueros and OlveraLobo, 2021). The reasons behind this include linguistic aspects that vary across languages, along with factors like the diversity of outputs produced by the models for each language. Such qualitative 1https://github.com/AI4Bharat/IndicMT-Eval differences will be exacerbated in low-resource languages due to the prominent reliance on extensive data resources by today’s models. In this work, we delve deeper into the evaluation of low-resource Indian languages, namely Assamese, Maithili, Punjabi, and Kannada, belonging to 2 different language families. Our goal is to establish the reliability of MT evaluation metrics for low-resource languages. To facilitate this, we collect human scores on the candidate translations using the MQM approach (Lommel et al., 2014). We make use of 5 large multilingual models and APIs that can output text in these languages to generate candidate translations for evaluation. We then collect 250 annotations per language, amounting to a total of 1000 MQM annotations for low-resource languages. Using the data we created, we evaluate multiple existing evaluation metrics of different types, both automatic and learned. In the case of learned metrics, since we do not have training data, we leverage data for related Indic languages from Sai B et al. (2023) for fine-tuning and performing zeroshot meta-evaluations. We observe that for these learned metrics, despite studies finding decent to good performance in other languages, there is a huge margin for improvement in evaluating lowresource languages. We also explore the influence of the base model and synthetic data generation for low-resource languages. In summary, our contributions are as follows: (i) MQM dataset for 4 low resource languages for evaluation (ii) Meta-evaluation of existing metrics on low-resource languages (iii) Analysis of potential techniques to improve the metrics, including (a) exposure of metrics to related languages, (b) different base models, and (c) usage of synthetic data. We show that evaluation for low-resource languages is still far behind other languages. 640 2 Related Work The effectiveness of evaluation metrics has been studied for various languages. Most of the existing MT evaluation metrics are typically analyzed for language pairs where English serves as either the source or target language. That has led to several criticism works (Ananthakrishnan et al., 2006; Callison-Burch et al., 2006; Post, 2018) followed by improvements. However, models are getting increasingly multilingual and slowly evaluation metrics are being studied for other languages (Sai B et al., 2023; Freitag et al., 2021; RiveraTrigueros and Olvera-Lobo, 2021; Cahyawijaya et al., 2021). Metrics like chrF (Popovi´c, 2015) and chrF++ (Popovic, 2017) were proposed for character-based, morphologically-rich languages. While some of these criteria hold for the languages we consider, there is no publicly available open study of such metrics for the specific case of low resource languages. On the other hand, different evaluation metrics are being used to evaluate models in these languages. WMT23 (Pal et al., 2023) had a special task track for low resource Indic languages for which BLEU, ChrF, RIBES, TER, and COMET metrics were used apart from human evaluation. However, to the best of our knowledge, there are no studies analyzing whether these metrics correlate with human judgments or not for these languages. Additionally, there is no publicly available data with human scores to study this. Mohtashami et al. (2023) used synthetic data augmentation to build a BLEURT-like metric for low resource languages. The only Indian language in their set is Punjabi (2k size, not publicly released), which initially had a poor Pearson correlation of 0.184. This was slightly improved to a value of 0.194 when adding synthetic data to their baseline data, although it is still a poor correlation value. 3 Methodology We collect MQM annotations as well as direct assessment (DA) scores and also create synthetic data for 4 languages, viz., Assamese, Punjabi, Kannada, and Maithili. We use the human-curated data as test data to benchmark the performance of various metrics on these low resource languages. The synthetic data is used to investigate the use of such strategies for augmenting resources in these languages for potential improvements in performance. We design experiments to understand the role of other related languages and the base model on the performance. The following subsections provide the details of the data we create and the strategies explored in our experiments. 3.1 MQM Data Annotation Following Sai B et al. (2023), for each of the 4 languages, we hired 2 language experts who are native speakers of that language with bilingual proficiency in English. We provided them the English source segment, the translation to be evaluated, and the MQM annotation guidelines (Lommel et al., 2014; Sai B et al., 2023) for identifying error types and their severities in the translations. These annotations were later used to calculate MQM scores. In addition to identifying errors, the annotators were also asked to assign a score to the translation in the range of 0-25, which we refer to as DA score since these are directly assigned by the annotator. For quality assurance, we initially gave 50 common segments to both annotators to mark the errors and indicate their scores. For any disagreements in annotations, the reasons were independently discussed with the annotators. Most of these disagreements were slight differences in marking severity, which we found to be subjective and difficult to standardize. Later, we computed the interannotator agreements (IAA) using the Pearson correlation of their scores. We employed a different annotator and repeated the validation process whenever this was below a threshold of 0.5 (which was the case for one language in our set - Punjabi). The final IAA is as follows for the 4 languages considered - Maithili - 0.7, Punjabi - 0.7, Assamese - 0.65, and Kannada - 0.68. We obtained the translations from 5 state-of-theart multilingual models and APIs including IndicTrans (Ramesh et al., 2022), NLLB2 (Costa-jussà et al., 2024), NLLB-MoE, Microsoft Azure Cognitive Services API 3 and Google translation API4. The source segments fed to these models are sampled from the FLORES-101 dataset (Goyal et al., 2022), and each segment is translated by each of the 5 models. These sources and translated segments are presented to the language expert in a random order without details regarding the model / API that generated the translation. The language expert is asked to highlight the text containing the error and indicate the type and severity of the error. We obtain such detailed annotations on 250 2We use the 1.3 B parameter version of the NLLB models. 3Bing API 4Google API 641 Assamese Maithili Kannada Punjabi Average Metric τ ρ τ ρ τ ρ τ ρ τ ρ BLEU 1 0.063 0.072 -0.131 -0.047 -0.017 -0.046 -0.002 -0.162 -0.022 -0.046 BLEU 2 0.058 0.081 0.078 -0.028 0.016 0.035 -0.016 0.065 0.034 0.038 BLEU 3 0.020 0.036 -0.028 -0.072 0.111 0.061 -0.055 0.023 0.012 0.012 BLEU 4 0.001 0.026 -0.032 -0.036 -0.088 -0.110 -0.023 0.065 -0.036 -0.014 SacreBLEU 0.075 0.104 0.199 0.265 0.103 0.155 0.098 0.154 0.119 0.170 ROUGE-L 0.088 0.128 0.052 0.055 0.005 0.003 -0.074 0.065 0.018 0.063 chrF++ 0.160 0.254 0.252 0.366 0.145 0.228 0.164 0.255 0.180 0.276 TER 0.123 0.158 0.257 0.403 0.131 0.199 0.170 0.240 0.170 0.250 LASER embs 0.097 0.191 0.119 0.306 0.139 0.275 0.036 0.042 0.098 0.204 LabSE embs 0.128 0.194 0.125 0.169 0.219 0.366 0.19 0.303 0.166 0.258 mBERT 0.131 0.247 0.212 0.388 0.165 0.248 0.234 0.281 0.186 0.291 distilmBERT 0.139 0.267 0.250 0.416 0.169 0.263 0.245 0.306 0.201 0.313 IndicBERT 0.199 0.290 0.235 0.389 0.191 0.276 0.237 0.311 0.216 0.317 MuRIL 0.206 0.324 0.309 0.476 0.162 0.239 0.204 0.269 0.220 0.327 BLEURT-20 0.119 0.185 0.320 0.440 0.279 0.488 0.280 0.352 0.250 0.366 COMET-DA 0.228 0.298 0.172 0.264 0.281 0. 390 0.300 0.358 0.245 0.328 COMET-MQM 0.260 0.381 0.199 0.291 0.290 0.410 0.266 0.334 0.254 0.354 COMET-QE-DA 0.290 0.340 0.080 0.070 0.300 0.450 0.270 0.330 0.235 0.298 COMET-QE-MQM 0.230 0.350 0.130 0.200 0.300 0.440 0.220 0.290 0.220 0.320 COMET-Kiwi 0.344 0.475 0.115 0.129 0.371 0.514 0.322 0.392 0.288 0.378 COMET-Kiwi-xl 0.334 0.48 0.300 0.338 0.337 0.486 0.266 0.352 0.309 0.414 GEMBA-MQM 0.235 0.266 0.085 0.118 0.108 0.079 0.282 0.235 0.178 0.174 GEMBA-MQM(IL lang) 0.228 0.276 0.081 0.077 0.050 0.069 0.171 0.261 0.132 0.171 Indic-COMET-DA 0.263 0.348 0.221 0.300 0.353 0.511 0.293 0.361 0.283 0.380 Indic-COMET-MQM 0.201 0.270 0.201 0.288 0.251 0.388 0.282 0.340 0.234 0.322 Base-IndicBERT(DA) 0.273 0.396 0.380 0.552 0.384 0.528 0.259 0.353 0.324 0.457 Base-IndicBERT(MQM) 0.293 0.426 0.311 0.483 0.302 0.440 0.224 0.313 0.283 0.416 Single Stage 0.232 0.348 0.337 0.473 0.279 0.437 0.305 0.378 0.288 0.409 2-Stage S/R 0.234 0.345 0.264 0.360 0.325 0.497 0.297 0.377 0.280 0.395 2-Stage R/S 0.194 0.292 0.211 0.322 0.325 0.463 0.279 0.342 0.252 0.355 Table 1: Kendall tau (τ) and Pearson (ρ) correlations of various evaluation metrics with human judgements at the segment-level. The best metric correlation among each category of metrics in bold in the respective block. The blocks delineate the following categories (i) word or character overlap-based metrics, (ii) embedding-based metrics, (iii) BERTscore-based formulations with embeddings from different multilingual models, (iv) trained metrics, and (v) GPT-4 based evaluation methods. The blocks after this show the results of our experiments with (a) Finetuning on related languages. (These experiments were done by varying seed values across 5 different runs and the standard deviation to be of the order of 10^-3) (b) adding synthetic data to the training. segments per language. 3.2 Synthetic Data Creation As human annotation data is expensive and timeconsuming to collect, we follow Geng et al. (2023) and Geng et al. (2022) and generate synthetic data for the aforementioned languages to reflect the variety of error types and severities in translations. Since we only have test sets, we obtain error type and severity distributions from datasets of related Indic languages in Sai B et al. (2023). We generate similar proportions of the error types and severities that can be both synthetically recreated and have a significant occurrence count in the distribution. To generate synthetic examples, we utilized BPCC-seed dataset containing data in all these languages without any overlap with the FLORES test set. More details about the synthetic data creation are presented in the Appendix B. Specifically, we created synthetic data with around 44k sentences for Assamese, 32k for Kannada, 24k for Maithili, and 6k for Punjabi based on the size of the available data in these languages. 3.3 Evaluation Metrics Considered We investigate the performance of multiple metrics of different categories. We consider (i) Wordoverlap based metrics of BLEU (Papineni et al., 2002) variants, SacreBLEU (Post, 2018), ROUGE (Lin, 2004), (ii) Character-based metric of chrF++ (Popovic, 2017), (iii) Edit-distance based metric of TER (Snover et al., 2006), (iv) Embedding-based 642 metrics of LabSE (Feng et al., 2022), LASER (Artetxe and Schwenk, 2019) (v) BERTScore computed using mBERT (Zhang et al., 2020), IndicBERT (Kakwani et al., 2020) and MuRIL (Khanuja et al., 2021), (vi) Trained metrics of BLEURT (Sellam et al., 2020) and COMET variants (Rei et al., 2020). Additionally, we also assess GEMBA-MQM (Kocmi and Federmann, 2023), a GPT-based reference-free evaluation metric. We experiment with replacing the English-focused examples in the prompt with examples from various Indian languages. We do this by selecting samples in en-hi, en-ta, and en-gu directions from the Indic MT Eval dataset. Further details on our adaptation are provided in Appendix A. 3.4 Zero Shot-Evaluation Approach Since the focus of this paper is zero-shot evaluation of our languages of interest, for learned metrics like COMET-DA and COMET-MQM, we leverage training data containing MQM and DA annotations, for all 5 related Indic languages, henceforth called related data, from Sai B et al. (2023). The related languages include Hindi, Gujarati, Marathi belonging to Indo-aryan family and Tamil and Malayalam belonging to Dravidian language family. There are 1,476 annotated examples in total per language which we split into train, validation and test set containing 1000, 200 and 276 examples respectively for each language. We found that some of the references were mismatched with the source sentences, which we corrected for our fine-tuning experiments. The validation data is used for early stopping, and the models performing best on the 5 related languages are used for zero-shot evaluation on our 4 languages of interest. We consider fine-tuning existing COMET-DA and COMET-MQM models which are language agnostic and compare them against fine-tuned variants using IndicBERTv2 (Doddapaneni et al., 2022) which is Indic focused. Note that XLM-Roberta and hence COMET models has 24 layers while IndicBERT v2 has 12 layers making the latter efficient. Using synthetic data: Regarding the use of synthetic data, created as described in section 3.2, henceforth called synthetic, we consider the following configurations on COMET-DA: 1. Single Stage: jointly-trained model on a randomly shuffled mix of related data and synthetic data. 2. 2-Stage S/R: training on synthetic data followed by related data with a reduced learning rate. 3. 2-Stage R/S: training on related data followed by synthetic data with a reduced learning rate. 4 Results We present the results for the following research questions to find ways to potentially improve performance on these models: (RQ0) How do existing metrics fare on lowresource languages? (RQ1) Does fine-tuning on related languages help? (RQ2) Does replacing the underlying model of a trained evaluation metric with an alternate backbone model trained on related languages help? (RQ3) Does synthetic training data help? We report Kendall-tau and Pearson correlations with human annotations. 4.1 Meta-Evaluation of Existing Metrics Table 1 shows that, among the word or character overlap-based metrics, chrF++ performs the best on most of the languages. In the embeddingbased approach, we find LabSE performs better than LASER embeddings. However, overall the word-based, character-based, and embeddingbased metrics are outperformed by the trained metrics. Among the trained metrics, the COMET model variants perform the best. Specifically, the recently proposed referenceless COMET-Kiwi and COMET-Kiwi-xl models have the best correlations with human judgments. However, most of the COMET-variants, except for COMET-Kiwi-xl, perform poorly in the Maithili language. This is despite the COMET*-DA variants having seen Hindi language data during training, which is closely related to Maithili and shares the same script. We observe that the GPT-4 based evaluation exhibited significantly lower performance on these languages. This could be attributed to the limited exposure of the underlying model to Indian languages, potentially hindering its ability to effectively identify translation errors in this context. All the analysis above presents observations of the relatively better performing metrics. Overall, we find that none of the evaluation metrics have good correlations with human judgments on these low resource languages. 4.2 Impact of Related languages In the 6th block of Table 1, specifically in the first two rows, we observe that fine-tuning on the 5 643 related languages improves correlations with human judgments(detailed results in Table 4 of Appendix C). We find that it also enhances performance of COMET-DA ("Indic-COMET-DA" row) on the low-resource languages belonging to the same or a close language family. However, we did not observe the same trend for COMET-MQM ("Indic-COMET-MQM" row). Our findings suggest that fine-tuning on related languages using supervised data can be a promising technique for improving performance on low resource languages. However, its effectiveness may vary depending on the underlying model and training configuration. 4.2.1 Does the Backbone Model Matter? To assess the role of the backbone model on the zero-shot performance, we perform experiments by replacing the XLM-Roberta base model of COMET with the IndicBERT v2 model5. The IndicBERT v2 model is a pretrained multilingual masked language model that was trained on 23 Indian languages including the low-resource languages in our evaluation set. However, note that it used a different dataset namely IndicCorp v2 for training. The rows of ‘Base-IndicBert(DA)’ and ‘BaseIndicBert(MQM)" in 6th block of Table 1, show what happens when we switch from the COMET backbone to IndicBERT v2. Comparing with nonfine-tuned as well as fine-tuned COMET variants, latter being Indic-COMET, we find that fine-tuning with an Indic-languages-specific base model like IndicBERT v2, which has prior exposure to these languages, leads to an improvement in performance. 4.3 Training with Synthetic Data Following the synthetic data incorporation methods outlined in 3.4, we experiment with using different proportions of the synthetic data with the real data. In particular, we start by adding equal proportions of real and synthetic data (i.e., 5k samples each) and thereafter double the amount of synthetic data added until we hit the maximum amount of data available for synthetic data creation. The results are presented in Table 1(detailed results in Table 2).Note that the synthetic data portion added in the experiment for each low resource language only contains data in that particular language. However, the real data consists of the same 5 related Indian languages. 5Note that XLM-Roberta model has 24 layers while IndicBERT v2 has 12 layers None of these approaches conclusively outperform the baseline models (COMET-DA and IndicCOMET-DA). The Single-Stage approach shows modest improvement when equal proportions of real and synthetic data are used. However, the performance declines on adding more amount of synthetic data. Overall, the mixed results in these experiments question the effectiveness of using larger quantities of synthetic data for low resource language translation evaluation tasks. This highlights the need for further investigation in this area, presenting an avenue for future research. 5 Conclusions Our work introduced an MQM dataset for four low resource languages consisting of 250 examples per language. Using this dataset, we analyzed the zero-shot performance of different types of existing metrics and observed that none of these existing methods showed good results in the case of low resource language. We explored different techniques to improve the performance, which includes finetuning on related language using Indic MT eval dataset 4.2, changing the base model to an Indicmodel 4.2.1 and using synthetic dataset of these low resource language 4.3. While some of these techniques provide small improvements, we find that there is still a long way to go for low-resource language evaluation. 6 Limitations The size of our dataset being small makes it just about sufficient for testing purposes. The lack of a dev split for the data limits the possibilities of exploring certain other recipes for training. We hope this serves as a starting point though. 7 Ethical Consideration For human annotations, language experts were provided with monthly salary based on their skill set and experience, under the norms of the government of our country. The annotations are collected on a publicly available dataset and will be released publicly for future use. All the datasets created as part of this work will be released under a CC-0 license6 and all the code and models will be released under an MIT license7. 6https://creativecommons.org/publicdomain/ zero/1.0 7https://opensource.org/licenses/MIT 644 Acknowledgements We would like to express our gratitude to the Ministry of Electronics and Information Technology (MeitY), Government of India, for setting up the ambitious Digital India Bhashini Mission with the goal of advancing Indian language technology. The annotators and language experts who worked on this project were supported by the generous grant given by Digital India Bhashini Mission to IIT Madras to serve as the Data Management Unit for the mission. We thank Shri Nandan Nilekani and Shrimati Rohini Nilekani for supporting our work through generous grants from EkStep Foundation and Nilekani Philanthropies. These grants were used for (i) supporting many of the students, research associates, and developers who worked on this project, (ii) fulfilling many of our computing needs, and (iii) recruiting project managers to oversee the massive pan-India data collection activity undertaken as a part of this work. We thank Pranjal Agadh Chitale for his helpful feedback and research discussions on this work. References Ananthakrishnan, Pushpak Bhattacharyya, Murugesan Sasikumar, and Ritesh M. Shah. 2006. Some issues in automatic evaluation of English-Hindi MT : More blues for BLEU. In ICON. Mikel Artetxe and Holger Schwenk. 2019. Massively multilingual sentence embeddings for zeroshot cross-lingual transfer and beyond. Transactions of the Association for Computational Linguistics, 7:597–610. Samuel Cahyawijaya, Genta Indra Winata, Bryan Wilie, Karissa Vincentio, Xiaohong Li, Adhiguna Kuncoro, Sebastian Ruder, Zhi Yuan Lim, Syafri Bahar, Masayu Leylia Khodra, Ayu Purwarianti, and Pascale Fung. 2021. Indonlg: Benchmark and resources for evaluating indonesian natural language generation. In Conference on Empirical Methods in Natural Language Processing. Chris Callison-Burch, Miles Osborne, and Philipp Koehn. 2006. Re-evaluating the role of Bleu in machine translation research. In 11th Conference of the European Chapter of the Association for Computational Linguistics, Trento, Italy. Association for Computational Linguistics. Marta R. Costa-jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, Anna Sun, Skyler Wang, Guillaume Wenzek, Al Youngblood, Bapi Akula, Loic Barrault, Gabriel Mejia Gonzalez, Prangthip Hansanti, John Hoffman, Semarley Jarrett, Kaushik Ram Sadagopan, Dirk Rowe, Shannon Spruit, Chau Tran, Pierre Andrews, Necip Fazil Ayan, Shruti Bhosale, Sergey Edunov, Angela Fan, Cynthia Gao, Vedanuj Goswami, Francisco Guzmán, Philipp Koehn, Alexandre Mourachko, Christophe Ropers, Safiyyah Saleem, Holger Schwenk, Jeff Wang, and N. L. L. B. Team. 2024. Scaling neural machine translation to 200 languages. Nature. Sumanth Doddapaneni, Rahul Aralikatte, Gowtham Ramesh, Shreyansh Goyal, Mitesh M. Khapra, Anoop Kunchukuttan, and Pratyush Kumar. 2022. Towards leaving no indic language behind: Building monolingual corpora, benchmark and models for indic languages. In Annual Meeting of the Association for Computational Linguistics. Fangxiaoyu Feng, Yinfei Yang, Daniel Cer, Naveen Arivazhagan, and Wei Wang. 2022. Language-agnostic BERT sentence embedding. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 878–891, Dublin, Ireland. Association for Computational Linguistics. Markus Freitag, George F. Foster, David Grangier, Viresh Ratnakar, Qijun Tan, and Wolfgang Macherey. 2021. Experts, errors, and context: A large-scale study of human evaluation for machine translation. Transactions of the Association for Computational Linguistics, 9:1460–1474. Jay Gala, Pranjal A Chitale, A K Raghavan, Varun Gumma, Sumanth Doddapaneni, Aswanth Kumar M, Janki Atul Nawale, Anupama Sujatha, Ratish Puduppully, Vivek Raghavan, Pratyush Kumar, Mitesh M Khapra, Raj Dabre, and Anoop Kunchukuttan. 2023. Indictrans2: Towards high-quality and accessible machine translation models for all 22 scheduled indian languages. Transactions on Machine Learning Research. Xiang Geng, Zhe Lai, Yu Zhang, Shimin Tao, Hao Yang, Jiajun Chen, and Shujian Huang. 2023. Unify wordlevel and span-level tasks: Njunlp’s participation for the wmt2023 quality estimation shared task. In Conference on Machine Translation. Xiang Geng, Yu Zhang, Shujian Huang, Shimin Tao, Hao Yang, and Jiajun Chen. 2022. Njunlp’s participation for the wmt2022 quality estimation shared task. In Conference on Machine Translation. Naman Goyal, Cynthia Gao, Vishrav Chaudhary, PengJen Chen, Guillaume Wenzek, Da Ju, Sanjana Krishnan, Marc’Aurelio Ranzato, Francisco Guzmán, and Angela Fan. 2022. The Flores-101 evaluation benchmark for low-resource and multilingual machine translation. Transactions of the Association for Computational Linguistics, 10:522–538. Divyanshu Kakwani, Anoop Kunchukuttan, Satish Golla, Gokul N.C., Avik Bhattacharyya, Mitesh M. Khapra, and Pratyush Kumar. 2020. IndicNLPSuite: 645 Monolingual Corpora, Evaluation Benchmarks and Pre-trained Multilingual Language Models for Indian Languages. In Findings of EMNLP. Simran Khanuja, Diksha Bansal, Sarvesh Mehtani, Savya Khosla, Atreyee Dey, Balaji Gopalan, Dilip Kumar Margam, Pooja Aggarwal, Rajiv Teja Nagipogu, Shachi Dave, et al. 2021. Muril: Multilingual representations for indian languages. arXiv preprint arXiv:2103.10730. Tom Kocmi and Christian Federmann. 2023. GEMBAMQM: Detecting translation quality error spans with GPT-4. In Proceedings of the Eighth Conference on Machine Translation, pages 768–775, Singapore. Association for Computational Linguistics. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81, Barcelona, Spain. Association for Computational Linguistics. Arle Lommel, Hans Uszkoreit, and Aljoscha Burchardt. 2014. Multidimensional quality metrics (mqm): A framework for declaring and describing translation quality metrics. Revista Tradumàtica: tecnologies de la traducció, (12):455–463. Amirkeivan Mohtashami, Mauro Verzetti, and Paul K. Rubenstein. 2023. Learning translation quality evaluation on low resource languages from large language models. ArXiv, abs/2302.03491. Santanu Pal, Partha Pakray, Sahinur Rahman Laskar, Lenin Laitonjam, Vanlalmuansangi Khenglawt, Sunita Warjri, Pankaj Kundan Dadure, and Sandeep Kumar Dash. 2023. Findings of the WMT 2023 shared task on low-resource Indic language translation. In Proceedings of the Eighth Conference on Machine Translation, pages 682–694, Singapore. Association for Computational Linguistics. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Maja Popovi´c. 2015. chrF: character n-gram F-score for automatic MT evaluation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 392–395, Lisbon, Portugal. Association for Computational Linguistics. Maja Popovic. 2017. chrf++: words helping character n-grams. In Proceedings of the Second Conference on Machine Translation, WMT 2017, Copenhagen, Denmark, September 7-8, 2017, pages 612–618. Association for Computational Linguistics. Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186– 191, Belgium, Brussels. Association for Computational Linguistics. Gowtham Ramesh, Sumanth Doddapaneni, Aravinth Bheemaraj, Mayank Jobanputra, Raghavan AK, Ajitesh Sharma, Sujit Sahoo, Harshita Diddee, Mahalakshmi J, Divyanshu Kakwani, Navneet Kumar, Aswin Pradeep, Srihari Nagaraj, Kumar Deepak, Vivek Raghavan, Anoop Kunchukuttan, Pratyush Kumar, and Mitesh Shantadevi Khapra. 2022. Samanantar: The Largest Publicly Available Parallel Corpora Collection for 11 Indic Languages. Transactions of the Association for Computational Linguistics, 10:145–162. Ricardo Rei, Craig Stewart, Ana C Farinha, and Alon Lavie. 2020. COMET: A neural framework for MT evaluation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2685–2702, Online. Association for Computational Linguistics. Irene Rivera-Trigueros and María-Dolores Olvera-Lobo. 2021. Building a corpus for corporate websites machine translation evaluation. a step by step methodological approach. In Proceedings of the Translation and Interpreting Technology Online Conference, pages 93–101, Held Online. INCOMA Ltd. Ananya Sai B, Tanay Dixit, Vignesh Nagarajan, Anoop Kunchukuttan, Pratyush Kumar, Mitesh M. Khapra, and Raj Dabre. 2023. IndicMT eval: A dataset to meta-evaluate machine translation metrics for Indian languages. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 14210–14228, Toronto, Canada. Association for Computational Linguistics. Thibault Sellam, Dipanjan Das, and Ankur P. Parikh. 2020. BLEURT: learning robust metrics for text generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 7881–7892. Association for Computational Linguistics. Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In In Proceedings of Association for Machine Translation in the Americas, pages 223–231. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. Bertscore: Evaluating text generation with BERT. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net. A GPT-4 as evaluator We used GEMBA-MQM (Kocmi and Federmann, 2023), a GPT-based reference-free evaluation metric. This method employs a "three-shot prompting" technique, where GPT-4 is given three predetermined examples in en-de, en-cs, zh-en language pairs to help it understand the task of identifying 646 asm mai kan pan Average Metric τ ρ τ ρ τ ρ τ ρ τ ρ COMET-DA 0.228 0.298 0.172 0.264 0.281 0. 390 0.300 0.358 0.245 0.328 Indic-COMET-DA 0.263 0.348 0.221 0.3 0.353 0.511 0.293 0.361 0.283 0.38 Stage-1 x 0.232 0.348 0.337 0.473 0.279 0.437 0.305 0.378 0.288 0.409 Stage-1 2*x 0.242 0.367 0.333 0.472 0.233 0.368 Stage-1 4*x 0.196 0.293 0.336 0.425 0.232 0.358 Stage-2 S/R-x 0.234 0.345 0.264 0.360 0.325 0.497 0.297 0.377 0.280 0.395 Stage-2 S/R-2*x 0.248 0.355 0.278 0.384 0.32 0.504 Stage-2 S/R-4*x 0.265 0.381 0.300 0.429 0.308 0.485 Stage-2 R/S-x 0.194 0.292 0.211 0.322 0.325 0.463 0.279 0.342 0.252 0.355 Stage-2 R/S-2*x 0.160 0.251 0.225 0.345 0.316 0.442 Stage-2 R/S-4*x 0.167 0.252 0.206 0.303 0.335 0.410 Table 2: KendallTau (τ) and Pearson(ρ) correlation scores of experiments with synthetic data. First block consists of COMET-DA and Indic-COMET-DA models, followed by the results of different stages varying amount of synthetic data added. Here, x =5000 means 5K examples of synthetic data is added in fine-tuning process. For example, Stage-2 S/R-4*x shows the stage-2 result of a particular language, in which COMET-DA is first fine-tuned on synthetic data of size 4*x i.e. 20k, followed by real data. error-prone segments in the translated text. We experiment with replacing the English-focused examples in the prompt with examples from various Indian languages. We selected samples in en-hi, en-ta, en-gu directions by sampling from the Indic MT Eval dataset (Sai B et al., 2023). Table 1 shows results both on the vanilla GEMBA-MQM and our modified version named GEMBA-MQM(IL) which explicitly includes Indian language examples within the prompt. B Synthetic Data Creation We first studied the MQM annotations in the related languages of Hindi, Marathi, Gujarati, Tamil, and Malayalam released by Sai B et al. (2023). We extracted the counts of various error types with their corresponding severity counts. We choose the error types and severities that can both be synthetically recreated and have a significant occurrence count in the distribution. To recreate the errors in the low resource languages considered in our work, we use the BPCC-seed dataset containing data in all these languages without any overlap with the FLORES test set. For each of the error types, we modify correct sentences in the following ways. • Omission errors: We first determine whether the words are stop words or not depending on their frequency of occurrence in the BPCC corpus. We heuristically determine the top 100 words as the common words or stop words. We randomly drop an uncommon word in each segment sampled for an omission error introduction. • Addition errors: We randomly sample an uncommon word to be introduced at a random position in the segment. We found these errors to be less frequent and accordingly sampled fewer segments to include such errors. • Mistranslation errors: We randomly select tokens to be replaced with a [’MASK’] token. We then sample perturbations using Muril model. To replicate errors of different severities, we sample tokens with reduced generation probabilities to represent more severe pseudo errors. For the generation of varied pseudo translations, we employ a random selection process wherein one token is chosen from the top k tokens with the highest generation probability. Specifically, we set k values at 2,3,5,8 and 10 for different levels of severities. • Grammatical errors: We add, drop, or edit the common words to create fluency-based errors in the segments. C Training Details For training, we follow a similar process as (Rei et al., 2020). We start by loading the encoder initialized with either COMET-DA, COMET-MQM, or IndicBERT weights. We divide our model parameters into two groups: the regressor parameter, 647 Hyperparameters Values batch size 8 loss mse no. of frozen epochs 1 dropout 0.1 encoder learning rate 1.0e-06 encoder model XLM-RoBERTa hidden sizes 3072, 1024 layer mix layerwise decay 0.95 learning rate 1.5e-05 optimizer AdamW pool avg Table 3: Hyperparameters used to fine-tune IndicComet-DA. Note that for different experiments the value of encoder learning rate and learning rate will change. which involves the parameters of top feed-forward added for regression, and the encoder parameter, which comprises parameters of the pre-trained encoder. In the initial epoch, the encoder is frozen and only feed-forward is trained with a specific learning rate, after that entire model is trained using different learning rate. For detailed information about hyperparameters, please refer to table 3. All our experiments used a single RTX 3090 Ti GPU, with a cumulative computational time of 8 hours. Different experiments in this paper used different learning rates (lr) based on hyperparameter tuning. For fine-tuning Indic-COMET-DA and Indic-COMET-MQM, we found 1.0e-06 and 1.5e06 learning rates to be the best respectively. While we fine-tuned indicBERT with a slightly higher learning rate of 1.0e-05. 648 Hindi Malayalam Marathi Tamil Gujarati Average Metric ρ τ ρ τ ρ τ ρ τ ρ τ ρ τ COMET-DA 0.357 0.457 0.516 0.707 0.468 0.648 0.539 0.683 0.325 0.525 0.441 0.608 COMET-MQM 0.432 0.608 0.394 0.301 0.435 0.523 0.504 0.667 0.349 0.483 0.423 0.516 COMET-QE-DA 0.44 0.59 0.46 0.6 0.34 0.52 0.48 0.64 0.42 0.57 0.428 0.584 COMET-QE-MQM 0.45 0.64 0.34 0.44 0.29 0.4 0.5 0.67 0.38 0.43 0.392 0.516 Indic-COMET-DA 0.389 0.555 0.561 0.745 0.494 0.672 0.568 0.747 0.344 0.530 0.471 0.65 Indic-COMET-MQM 0.485 0.681 0.472 0.349 0.519 0.635 0.522 0.676 0.412 0.569 0.482 0.582 Base-IndicBERT(DA) 0.378 0.597 0.508 0.713 0.524 0.684 0.462 0.614 0.352 0.538 0.445 0.629 Base-IndicBERT(MQM) 0.443 0.673 0.398 0.350 0.484 0.624 0.424 0.559 0.379 0.525 0.426 0.546 Table 4: Segment-level Pearson (ρ) and Kendall tau (τ) correlations of different metrics on seen languages. 649
2024_short
58
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 650–656 August 11-16, 2024 ©2024 Association for Computational Linguistics Zero-Shot Cross-Lingual Reranking with Large Language Models for Low-Resource Languages Mofetoluwa Adeyemi, Akintunde Oladipo, Ronak Pradeep, Jimmy Lin David R. Cheriton School of Computer Science University of Waterloo {moadeyem, aooladipo, rpradeep, jimmylin}@uwaterloo.ca Abstract Large language models (LLMs) as listwise rerankers have shown impressive zero-shot capabilities in various passage ranking tasks. Despite their success, there is still a gap in existing literature on their effectiveness in reranking low-resource languages. To address this, we investigate how LLMs function as listwise rerankers in cross-lingual information retrieval (CLIR) systems with queries in English and passages in four African languages: Hausa, Somali, Swahili, and Yoruba. We analyze and compare the effectiveness of monolingual reranking using either query or document translations. We also evaluate the effectiveness of LLMs when leveraging their own generated translations. To grasp the general picture, we examine the effectiveness of multiple LLMs—the proprietary models RankGPT4 and RankGPT3.5, along with the open-source model RankZephyr. While the document translation setting, i.e., both queries and documents are in English, leads to the best reranking effectiveness, our results indicate that for specific LLMs, reranking in the African language setting achieves competitive effectiveness with the cross-lingual setting, and even performs better when using the LLM’s own translations. 1 Introduction Several studies have shown that large language models (LLMs) excel in various NLP tasks (Zhou et al., 2022; Zhu et al., 2023; Wang et al., 2023). In text ranking, LLMs have been used effectively as retrievers (Ma et al., 2023a) and in both pointwise and listwise reranking. In reranking, models may generate an ordered list directly (Sun et al., 2023; Ma et al., 2023b; Pradeep et al., 2023a; Tamber et al., 2023) or sort based on token probabilities (Ma et al., 2023b). The large context size of LLMs makes listwise approaches particularly attractive because the model attends to multiple documents to produce a relative ordering. Cross-lingual retrieval aims to provide information in a language different from that of the search query. This is especially relevant when the required information is not available or prevalent in the query’s language, as is the case for most lowresource languages. Previous work has examined sparse and multilingual dense retrieval models in cross-lingual settings for these languages (Zhang et al., 2023b; Ogundepo et al., 2022). However, studies on the effectiveness of LLMs as crosslingual retrievers or rerankers for low-resource languages are few to non-existent. In this study, we examine the effectiveness of proprietary and open-source models for listwise reranking in low-resource African languages. Our investigation is guided by the following research questions: (1) How well do LLMs fare as listwise rerankers for low-resource languages? (2) How effectively do LLMs perform listwise reranking in cross-lingual scenarios compared to monolingual (English or low-resource language) scenarios? (3) When we leverage translation, is reranking more effective when translation uses the same LLM used for zero-shot reranking? We answer these questions through an extensive investigation of the effectiveness of RankGPT (Sun et al., 2023) and RankZephyr (Pradeep et al., 2023b) in cross-lingual and monolingual retrieval settings. We use CIRAL (Adeyemi et al., 2023), a cross-lingual information retrieval dataset covering four African languages with queries in English and passages in African languages, and construct monolingual retrieval scenarios through document and query translations. Our results show that cross-lingual reranking with these LLMs is generally more effective compared to reranking in the African languages, underscoring that they are better tuned to English than low-resource languages. Across all languages, we achieve our best results when reranking entirely in English using retrieval results obtained by doc650 ument translation. In this setting, we see up to 7 points improvement in nDCG@20 over crosslingual reranking using RankGPT4, and up to 9 points over reranking in African languages. We specifically notice improvements with RankGPT4 when using its query translations for reranking in African languages. 2 Background and Related Work Given a corpus C = {D1, D2, ..., Dn} and a query q, information retrieval (IR) systems aim to return the k most relevant documents. Modern IR pipelines typically feature a multi-stage architecture in which a first-stage retriever returns a list of candidate documents that a reranker reorders for improved quality (Asadi and Lin, 2013; Nogueira et al., 2019; Zhuang et al., 2023). More recently, the effectiveness of decoder models as rerankers (dubbed “prompt decoders”) has been explored in some depth. Researchers have fine-tuned GPT-like models in the standard contrastive learning framework (Neelakantan et al., 2022; Muennighoff, 2022; Zhang et al., 2023a) and studied different approaches to reranking using both open-source LLMs and proprietary GPT models. Sun et al. (2023) evaluated the effectiveness of OpenAI models on multiple IR benchmarks using permutation generation approaches, while Ma et al. (2023b) demonstrate the effectiveness of GPT-3 as a zero-shot listwise reranker and the superiority of listwise over pointwise approaches. While these papers focus on reranking with LLMs, they only cover two African languages— Swahili and Yoruba. For both languages, GPT-3 improves over BM25 significantly but still falls behind supervised reranking baselines. In this work, we examine the effectiveness of these LLMs as components of IR systems for African languages. Specifically, we study the effectiveness of opensource and proprietary LLMs as listwise rerankers for four African languages (Hausa, Somali, Swahili, and Yoruba) using the CIRAL cross-lingual IR test collection (Adeyemi et al., 2023). To be more precise, cross-lingual information retrieval (CLIR) is a variant of the standard retrieval task in which the queries qi are in a different language from the documents in the corpus C. Popular approaches to CLIR include query translation, document translation, and language-independent representations (Lin et al., 2023). As the focus of this work is on the effectiveness of LLMs as listwise Input Prompt: SYSTEM You are RankGPT, an intelligent assistant that can rank passages based on their relevancy to the query. USER I will provide you with {num} passages, each indicated by number identifier []. Rank the passages based on their relevance to the query: {query}. [1] {passage 1} [2] {passage 2} ... [num] {passage num} Search Query: {query} Rank the {num} passages above based on their relevance to the search query. The passages should be listed in descending order using identifiers. The most relevant passages should be listed first. The output format should be [] > [], e.g., [1] > [2]. Only respond with the ranking results, do not say any word or explain. Model Completion: [10] > [4] > [5] > [6] ... [12] Figure 1: Prompt design and sample of model completion adopted for listwise reranking with the LLMs. rerankers in cross-lingual settings, we primarily explore document and query translation approaches in this study. 3 Methods Listwise Reranking. In listwise reranking, LLMs compare and attribute relevance over multiple documents in a single prompt. As this approach has been proven to be more effective than pointwise and pairwise reranking (Ma et al., 2023b; Pradeep et al., 2023a), we solely employ listwise reranking in this work. For each query q, a list of provided documents D1, ..., Dn is reranked by the LLM, where n denotes the number of documents that are inserted into the prompt. Prompt Design. We adopt RankGPT’s (Sun et al., 2023) listwise prompt design as modified by Pradeep et al. (2023a). The input prompt and generated completion are presented in Figure 1. LLM Zero-Shot Translations. We examine the effectiveness of LLMs in using their translations in crossing the language barrier. For a given LLM, we generate zero-shot translations of queries from English to African languages and implement reranking with the LLM using its translations. With this approach, we are able to examine the ranking effec651 Input Prompt: Query: {query} Translate this query to {African language}. Only return the translation, don't say any other word. Model Completion: {Translated query} Figure 2: Prompt design and model completion for zeroshot query translations with the LLMs. tiveness of the LLM solely in African languages, and examine the correlation between its translation quality and reranking. The prompt design for generating the query translation is shown in Figure 2. 4 Experimental Setup Models. We implement zero-shot reranking for African languages with three models. These include proprietary reranking LLMs: RankGPT4 and RankGPT3.5, using the gpt-4 and gpt-3.5-turbo models, respectively, from Azure’s OpenAI API. To examine the effectiveness of open-source LLMs, we rerank with RankZephyr (Pradeep et al., 2023b), an open-source reranking LLM obtained by instruction-fine-tuning Zephyrβ (Tunstall et al., 2023) to achieve competitive effectiveness with RankGPT models. Baselines. We compare the reranking effectiveness of the LLMs using already established models as baselines. Our baselines include two crossencoder models, the multilingual T5 (mT5) (Xue et al., 2021) and AfrimT5 (Adelani et al., 2022), which is mT5 with continued pre-training on African corpora. The mT51 and AfrimT52 rerankers were obtained from fine-tuning the base versions of both models on the MS MARCO passage collection (Bajaj et al., 2016) for 100k iterations, with a batch size of 128. Test Collection. Models are evaluated on CIRAL (Adeyemi et al., 2023), a CLIR test collection consisting of four African languages: Hausa, Somali, Swahili, and Yoruba. Queries in CIRAL are natural language factoid questions in English while passages are in the respective African languages. Each language comprises between 80 and 100 queries, and evaluations are done using the 1https://huggingface.co/castorini/ mt5-base-ft-msmarco 2https://huggingface.co/castorini/ afrimt5-base-ft-msmarco pooled judgments obtained from CIRAL’s passage retrieval task.3 We also make use of CIRAL’s translated passage collection4 in our document translation scenario. The test collection’s documents were translated from the African languages to English using the NLLB machine translation model (Costajussà et al., 2022). We report nDCG@20 scores following the test collection’s standard, and MRR@100. Configurations. First-stage retrieval uses BM25 (Robertson and Zaragoza, 2009) in the open-source Pyserini toolkit (Lin et al., 2021). We use whitespace tokenization for passages in native languages and the default English tokenizer for the translated passages. Our BM25 retrieval is implemented using document (BM25-DT) and query (BM25-QT) translations. For BM25-QT, queries are translated with Google Machine Translation (GMT). We rerank the top 100 passages retrieved by BM25 using the sliding window technique by Sun et al. (2023) with a window of 20 and a stride of 10. Experiments were conducted using the RankLLM toolkit.5 We use a context size of 4,096 tokens for RankGPT3.5 and RankZephyr, and 8,192 tokens for RankGPT4. These context sizes are also maintained for the zero-shot LLM translation experiments. For each model, translation is performed over three iterations and we vary the model’s temperatures from 0 to 0.6 to allow variation in the translations. Translations are only obtained for the GPT models since RankZephyr is suited only for reranking. Reranking results are reported over a single run, except with the LLM translations where we take the Reciprocal Rank Fusion (RRF) (Cormack et al., 2009) of results from the three iterations. 5 Results and Discussion 5.1 Cross-Lingual vs. Monolingual Reranking Table 1 compares results for cross-lingual reranking using CIRAL’s queries and passages unmodified, and also the English reranking scenario. Row (1) reports scores for the two first-stage retrievers, BM25 with query translation (BM25-QT) and document translation (BM25-DT). Cross-lingual reranking scores for the different LLMs are presented in Row (2), and we employ BM25-DT for first-stage retrieval given it is more effective. 3https://ciralproject.github.io/ 4https://huggingface.co/datasets/CIRAL/ ciral-corpus#translated-dataset 5https://github.com/castorini/rank_llm 652 Source nDCG@20 MRR@100 Prev. top-k ha so sw yo Avg ha so sw yo Avg (1a) BM25-QT None |C| 0.0870 0.0813 0.1302 0.2864 0.1462 0.1942 0.1495 0.3209 0.4434 0.2770 (1b) BM25-DT None |C| 0.2142 0.2461 0.2327 0.4451 0.2845 0.4009 0.4050 0.4426 0.5904 0.4597 Cross-lingual Reranking: English queries, passages in African languages (2a) RankGPT4 BM25-DT 100 0.3577 0.3159 0.3029 0.5070 0.3709 0.7006 0.5613 0.6378 0.7364 0.6590 (2b) RankGPT3.5 BM25-DT 100 0.2413 0.2919 0.2562 0.4416 0.3078 0.5125 0.5151 0.5615 0.5932 0.5456 (2c) RankZephyr BM25-DT 100 0.2741 0.2941 0.2953 0.4459 0.3274 0.4917 0.5195 0.5884 0.6311 0.5577 (2d) mT5 BM25-DT 100 0.3876 0.3757 0.3778 0.5604 0.4254 0.6381 0.6294 0.6855 0.6938 0.6617 (2e) AfrimT5 BM25-DT 100 0.3911 0.3530 0.3655 0.5510 0.4152 0.6463 0.5998 0.6888 0.6903 0.6563 English Reranking: English queries, English passages (3a) RankGPT4 BM25-DT 100 0.3967 0.3819 0.3756 0.5753 0.4324 0.7042 0.6125 0.7112 0.7523 0.6951 (3b) RankGPT3.5 BM25-DT 100 0.2980 0.3080 0.3074 0.4985 0.3530 0.5702 0.5373 0.6241 0.7306 0.6156 (3c) RankZephyr BM25-DT 100 0.3686 0.3630 0.3678 0.5275 0.4067 0.6431 0.6210 0.6995 0.7169 0.6701 (3d) mT5 BM25-DT 100 0.3644 0.3877 0.3587 0.5489 0.4149 0.5916 0.6104 0.6335 0.6732 0.6272 (3e) AfrimT5 BM25-DT 100 0.3748 0.3663 0.3591 0.5499 0.4125 0.6333 0.5521 0.6160 0.6983 0.6249 Table 1: Comparison of Cross-lingual and English reranking results. The cross-lingual scenario uses CIRAL’s English queries and African language passages while English reranking crosses the language barrier with English translations of the passages. Scores for reranking in English are reported in Row (3), and results show this to be the more effective scenario across the LLMs and languages. However, the cross-encoder T5 baselines have better reranking effectiveness in the cross-lingual scenario. Improved reranking effectiveness with English translations is expected, given that LLMs, despite being multilingual, are more attuned to English. The results obtained from reranking solely with African languages further probe the effectiveness of LLMs in low-resource language scenarios. We report scores using query translations in Table 2, with BM25-DT also as the first-stage retriever for a fair comparison. In comparing results from the query translation scenario to the cross-lingual results in Row (2) of Table 1, we generally observe better effectiveness with cross-lingual. However, RankGPT4 obtains higher scores for Somali, Swahili, and Yoruba in the African language scenario, especially with its query translations, comparing Rows (2a) in Table 1 and 2. 5.2 LLM Reranking Effectiveness We compare the effectiveness of the different LLMs across the reranking scenarios. RankGPT4 generally achieves better reranking among the 3 LLMs, as presented in Tables 1 and 2. In the crosslingual and English reranking scenarios, the opensource LLM RankZephyr (Pradeep et al., 2023b) achieves better reranking scores in comparison with RankGPT3.5 as reported in Rows (*b) and (*c) in Table 1. RankZephyr also achieves comparable scores with RankGPT4 in the English reranking scenario, and even a higher MRR for Somali as reported in Row (3c) of Table 1. These results establish the growing effectiveness of open-source LLMs for language tasks considering the limited availability of proprietary LLMs, but with room for improvement in low-resource languages. In comparing the reranking effectiveness of LLMs with that of the baseline models, scores vary depending on the scenario and specific LLM. Reranking scores of the cross-encoder T5 baselines are reported in Rows (*d) and (*e) of Tables 1 and 2. As seen in Rows (2d) and (2e) of Table 1, the crossencoder multilingual T5 baselines achieve higher reranking scores compared to all three LLMs. However, RankGPT4 outperforms both baselines in the English reranking scenario and using its query translations in the African language reranking scenario. We can attribute the higher effectiveness of the baselines to being fine-tuned for reranking as compared to the LLMs where reranking is carried out in a zero-shot fashion. 5.3 LLM Translations and Reranking Given that RankGPT4 achieves better reranking effectiveness using its query translations in the monolingual setting, we further examine the effectiveness of this scenario. Row (2) in Table 2 reports results using LLMs translations, and we compare these to results obtained using translations from GMT. Compared to results obtained with GMT translations, RankGPT4 does achieve better monolingual reranking effectiveness in the African language using its query translations. RankGPT3.5 on the other hand achieves less competitive scores on average using its query translations when com653 Source nDCG@20 MRR@100 Prev. top-k ha so sw yo Avg ha so sw yo Avg (1) BM25-DT None |C| 0.2142 0.2461 0.2327 0.4451 0.2845 0.4009 0.4050 0.4426 0.5904 0.4597 LLM Query Translations: Queries and passages in African languages (2a) RankGPT4 BM25-DT 100 0.3458 0.3487 0.3559 0.4834 0.3835 0.6293 0.4253 0.6961 0.6551 0.6015 (2b) RankGPT3.5 BM25-DT 100 0.2370 0.2773 0.2802 0.4462 0.3102 0.4651 0.4756 0.5314 0.6115 0.5209 GMT Query Translations: Queries and passages in African languages (3a) RankGPT4 BM25-DT 100 0.3523 0.3086 0.3086 0.4712 0.3602 0.6800 0.5154 0.6252 0.6545 0.6188 (3b) RankGPT3.5 BM25-DT 100 0.2479 0.2816 0.2761 0.4361 0.3104 0.4996 0.4741 0.5647 0.5505 0.5222 (3c) RankZephyr BM25-DT 100 0.2515 0.2520 0.2556 0.4114 0.2926 0.4573 0.4407 0.5460 0.5690 0.5033 (3d) mT5 BM25-DT 100 0.3395 0.3305 0.3412 0.4963 0.3769 0.5313 0.5105 0.5551 0.6574 0.5636 (3e) AfrimT5 BM25-DT 100 0.3559 0.3335 0.3428 0.4620 0.3736 0.5863 0.5195 0.6028 0.5886 0.5743 Table 2: Reranking in African languages using query translations and passages in the African language. BM25-DT is used as first stage. Query translations are done using the LLMs, and we compare effectiveness with GMT translations. Model ha so sw yo avg GPT4 21.8 7.4 43.8 16.0 22.3 GPT3.5 7.1 1.8 42.4 6.6 14.5 GMT 45.3 17.9 85.9 36.7 46.5 Table 3: Evaluation of the LLMs’ query translation quality using the BLEU metric. Scores reported are the average over three translation iterations. pared to translations from the GMT model, with the exception of Yoruba where it has much higher scores using its translations. Considering the effect of translation quality on reranking, we evaluate the LLMs’ translations and report results in Table 3. Evaluation is done against CIRAL’s human query translations using the BLEU metric. We observe better translations with GPT4 compared to GPT3.5, with GMT achieving the best quality. However, RankGPT4 still performs better using its query translations, indicating a correlation in the model’s understanding of the African languages. 6 Conclusion In this work, we evaluate zero-shot cross-lingual reranking with large language models (LLMs) on African languages. Our suite covered three forms of LLM-based reranking: RankGPT4, RankGPT3.5 and RankZephyr. Using the listwise reranking method, our results demonstrate that reranking in English via translation is the most optimal. We examine the effectiveness of LLMs in reranking for low-resource languages in the cross-lingual and African language monolingual scenarios and find that LLMs have comparable effectiveness in both scenarios but with better results in cross-lingual. In the process, we also establish that good translations obtained from the LLMs do improve their reranking effectiveness in the African language reranking scenario as discovered with RankGPT4. Additionally, while open-source models showcase slightly lower effectiveness than RankGPT4, they still largely improve over other proprietary models like RankGPT3.5, an important step towards the development of effective listwise rerankers for low-resource languages. 7 Limitations While we provide valuable insights into the application of LLMs for reranking tasks in low-resource settings, our work is not without limitations. One constraint is the reliance on translations for achieving good reranking effectiveness, which inherently introduces dependencies on the quality of translation models and their compatibility with the target languages. Additionally, the scope of languages and models evaluated in this study, covering only a small spectrum of African languages and a mix of proprietary and open-source LLMs, remains limited in the broader context of low-resource language research. Future research directions could address these limitations by exploring a wider array of lowresource languages and incorporating more diverse LLMs, including those specifically trained or finetuned on low-resource language datasets. Investigating alternative reranking pipelines that reduce reliance on translation or enhance the multilingual capabilities of LLMs directly could also offer new avenues for improving retrieval effectiveness in low-resource language settings. Acknowledgements This research was supported in part by the Natural Sciences and Engineering Research Council 654 (NSERC) of Canada and Huawei Technologies Canada. Thanks to Microsoft for providing access to OpenAI LLMs on Azure via the Accelerating Foundation Models Research program. We also thank the anonymous reviewers for their constructive suggestions. References David Adelani et al. 2022. A Few Thousand Translations Go a Long Way! Leveraging Pre-trained Models for African News Translation. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3053–3070. Mofetoluwa Adeyemi, Akintunde Oladipo, Xinyu Zhang, David Alfonso-Hermelo, Mehdi Rezagholizadeh, Boxing Chen, and Jimmy Lin. 2023. CIRAL at FIRE 2023: Cross-Lingual Information Retrieval for African Languages. In Proceedings of the 15th Annual Meeting of the Forum for Information Retrieval Evaluation, pages 4–6. Nima Asadi and Jimmy Lin. 2013. Effectiveness/Efficiency Tradeoffs for Candidate Generation in Multistage Retrieval Architectures. Proceedings of the 36th International ACM SIGIR Conference on Research and Development in Information Retrieval. Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, et al. 2016. MS MARCO: A Human Generated Machine Reading Comprehension Dataset. ArXiv, abs/1611.09268. Gordon V. Cormack, Charles L. A. Clarke, and Stefan Büttcher. 2009. Reciprocal Rank Fusion Outperforms Condorcet and Individual Rank Learning Methods. Proceedings of the 32nd International ACM SIGIR Conference on Research and Development in Information Retrieval. Marta R. Costa-jussà et al. 2022. No Language Left Behind: Scaling Human-Centered Machine Translation. ArXiv, abs/2207.04672. Jimmy Lin, David Alfonso-Hermelo, Vitor Jeronymo, Ehsan Kamalloo, Carlos Lassance, Rodrigo Nogueira, Odunayo Ogundepo, Mehdi Rezagholizadeh, Nandan Thakur, Jheng-Hong Yang, and Xinyu Crystina Zhang. 2023. Simple Yet Effective Neural Ranking and Reranking Baselines for Cross-Lingual Information Retrieval. ArXiv, abs/2304.01019. Jimmy Lin, Xueguang Ma, Sheng-Chieh Lin, JhengHong Yang, Ronak Pradeep, and Rodrigo Nogueira. 2021. Pyserini: A Python Toolkit for Reproducible Information Retrieval Research with Sparse and Dense Representations. In Proceedings of the 44th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2021), pages 2356–2362. Xueguang Ma, Liang Wang, Nan Yang, Furu Wei, and Jimmy Lin. 2023a. Fine-Tuning LLaMA for MultiStage Text Retrieval. ArXiv, abs/2310.08319. Xueguang Ma, Xinyu Zhang, Ronak Pradeep, and Jimmy Lin. 2023b. Zero-Shot Listwise Document Reranking with a Large Language Model. ArXiv, abs/2305.02156. Niklas Muennighoff. 2022. SGPT: GPT Sentence Embeddings for Semantic Search. ArXiv, abs/2202.08904. Arvind Neelakantan et al. 2022. Text and Code Embeddings by Contrastive Pre-Training. ArXiv, abs/2201.10005. Rodrigo Nogueira, Wei Yang, Kyunghyun Cho, and Jimmy Lin. 2019. Multi-Stage Document Ranking with BERT. ArXiv, abs/1910.14424. Odunayo Ogundepo, Xinyu Zhang, Shuo Sun, Kevin Duh, and Jimmy Lin. 2022. AfriCLIRMatrix: Enabling Cross-Lingual Information Retrieval for African Languages. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 8721–8728. Ronak Pradeep, Sahel Sharifymoghaddam, and Jimmy Lin. 2023a. RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models. ArXiv, abs/2309.15088. Ronak Pradeep, Sahel Sharifymoghaddam, and Jimmy Lin. 2023b. RankZephyr: Effective and Robust Zero-Shot Listwise Reranking is a Breeze! ArXiv, abs/2312.02724. Stephen E. Robertson and Hugo Zaragoza. 2009. The Probabilistic Relevance Framework: BM25 and Beyond. Foundations and Trends in Information Retrieval, 3:333–389. Weiwei Sun, Lingyong Yan, Xinyu Ma, Shuaiqiang Wang, Pengjie Ren, Zhumin Chen, Dawei Yin, and Zhaochun Ren. 2023. Is ChatGPT Good at Search? Investigating Large Language Models as Re-Ranking Agents. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 14918–14937. Manveer Singh Tamber, Ronak Pradeep, and Jimmy Lin. 2023. Scaling down, LiTting up: Efficient zeroshot listwise reranking with seq2seq encoder-decoder models. ArXiv, abs/2312.16098. Lewis Tunstall, Edward Beeching, Nathan Lambert, Nazneen Rajani, Kashif Rasul, Younes Belkada, Shengyi Huang, Leandro von Werra, Clémentine Fourrier, Nathan Habib, et al. 2023. Zephyr: Direct Distillation of LM Alignment. ArXiv, abs/2310.16944. 655 Shuhe Wang, Xiaofei Sun, Xiaoya Li, Rongbin Ouyang, Fei Wu, Tianwei Zhang, Jiwei Li, and Guoyin Wang. 2023. GPT-NER: Named Entity Recognition via Large Language Models. ArXiv, abs/2304.10428. Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mT5: A Massively Multilingual Pre-trained Text-to-Text Transformer. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 483– 498. Xin Zhang, Zehan Li, Yanzhao Zhang, Dingkun Long, Pengjun Xie, Meishan Zhang, and Min Zhang. 2023a. Language Models are Universal Embedders. ArXiv, abs/2310.08232. Xinyu Zhang, Kelechi Ogueji, Xueguang Ma, and Jimmy Lin. 2023b. Toward Best Practices for Training Multilingual Dense Retrieval Models. TOIS, 42(2):1–33. Yongchao Zhou, Andrei Ioan Muresanu, Ziwen Han, Keiran Paster, Silviu Pitis, Harris Chan, and Jimmy Ba. 2022. Large Language Models are Human-Level Prompt Engineers. In The Eleventh International Conference on Learning Representations. Wenhao Zhu, Hongyi Liu, Qingxiu Dong, Jingjing Xu, Lingpeng Kong, Jiajun Chen, Lei Li, and Shujian Huang. 2023. Multilingual Machine Translation with Large Language Models: Empirical Results and Analysis. ArXiv, abs/2304.04675. Honglei Zhuang, Zhen Qin, Kai Hui, Junru Wu, Le Yan, Xuanhui Wang, and Michael Bendersky. 2023. Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels. ArXiv, abs/2310.14122. 656
2024_short
59
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 65–74 August 11-16, 2024 ©2024 Association for Computational Linguistics RDRec: Rationale Distillation for LLM-based Recommendation Xinfeng Wang†, Jin Cui†, Yoshimi Suzuki‡, and Fumiyo Fukumoto‡ †Graduate School of Engineering ‡Interdisciplinary Graduate School University of Yamanashi, Kofu, Japan {g22dtsa7, g22dtsa5, ysuzuki, fukumoto}@yamanashi.ac.jp Abstract Large language model (LLM)-based recommender models that bridge users and items through textual prompts for effective semantic reasoning have gained considerable attention. However, few methods consider the underlying rationales behind interactions, such as user preferences and item attributes, limiting the reasoning capability of LLMs for recommendations. This paper proposes a rationale distillation recommender (RDRec), a compact model designed to learn rationales generated by a larger language model (LM). By leveraging rationales from reviews related to users and items, RDRec remarkably specifies their profiles for recommendations. Experiments show that RDRec achieves state-of-the-art (SOTA) performance in both top-N and sequential recommendations. Our source code is released at https://github.com/WangXFng/RDRec. 1 Introduction Large language models (LLMs) with powerful reasoning capabilities have been extensively studied for recommendations, including news and item recommendations (Li et al., 2022; Wei et al., 2023; Huang et al., 2023), explainable recommendations (Yang et al., 2023; Cheng et al., 2023), and zero-/few-shot and cold-start recommendations (He et al., 2023; Sanner et al., 2023). Several attempts have leveraged knowledge of LLMs to improve recommendation performance, such as enhancing embedding initialization (Harte et al., 2023), reranking candidates (Yue et al., 2023), and learning representation (Ren et al., 2023; Lin et al., 2023; Lei et al., 2023; Viswanathan et al., 2023). A straightforward approach is to integrate user and item IDs into LMs through prompt learning (Liu et al., 2023), including discrete prompts to find alternative words to represent IDs, continuous prompts to directly feed ID vectors into a pre-trained model (Sun et al., 2019), and hybrid prompts (Li et al., 2023a; Zhang and Wang, 2023). Recently, Geng et al. (2022) Figure 1: Illustration of our motivation. (a) denotes the review after a purchase and (b) refers to the rationale of the purchase distilled by LLMs. (c) and (d) indicate the preference and attribute enrichment, respectively. present a P5 paradigm to transform user–item interactions, user sequential behaviors, and reviews into text-to-text prompts for LLMs. This enables P5 to capture deeper semantics for LLM-based recommendations. Li et al. (2023b) enhance P5 by a prompt distillation, resulting in significant improvement and reductions in inference time. However, they pay no attention to mining the rationale behind each interaction, such as user preferences and item attributes, which hampers the reasoning capabilities of LLMs. As an example, in Fig. 1 (a), a user review for an item says: “It was pretty fun and cool since we had to change our strategy (user preference) to try to prevent her from playing intrigue cards (item attributes).” The user prefers strategic thinking in the game, and intrigue cards symbolize item characteristics. This introduces noise into the user’s profile, as the user leans towards a strategic game rather than merely cards. This suggests that the original review without intermediate prompts prevents the model from learning to understand the rationale behind the interaction. The Chain-of-Thought (CoT) prompting (Wei et al., 2022; Wang et al., 2023a) that promises 65 Figure 2: Illustration of rationale distillation with LLMs via the chain-of-thought (CoT) prompting. LLMs to decompose intermediate rationales, has been widely applied for rationale extraction (Wang et al., 2023b; Zhang et al., 2023b; McKee et al., 2023; Zhu et al., 2023). More recently, Hsieh et al. (2023) utilize the CoT prompting to distill rationales via LLMs to train smaller models. Inspired by this, we propose a compact recommender model to learn the interaction rationales, i.e., user preferences and item attributes, distilled from reviews using a larger LM. In this way, the model acquires clear textual knowledge with less noise (e.g., “intrigue cards” that may hinder understanding the user’s preference for “strategic games” in Fig. 1 (b)). This enables the model to derive more specified user and item profiles from all reviews given by the user or regarding the item for recommendations, as illustrated in Fig. 1 (cd). The main contributions of this paper can be summarized as follows. (1) We propose a compact RDRec model that effectively specifies user and item profiles by distilling interaction rationales from relevant reviews using a larger LM, and (2) RDRec consistently outperforms SOTA baselines on three real-world datasets in both sequential and top-N recommendations. 2 RDRec Framework We present an RDRec model consisting of two stages, an interaction rationale distillation and a rationale-aware recommendation. 2.1 Interaction Rationale Distillation Inspired by the recent works (Hsieh et al., 2023; Miao et al., 2023) that employ LLMs to produce training data for smaller models, we distill user preferences and item attributes from reviews by using the following prompt template: “A user bought an item and said ‘{review}’. Use two sentences to explain the user’s preference and the item’s attributes, respectively.” As illustrated in Fig. 2, a review feeds into LLMs with the prompt template. The output is user preferences and item attributes. Formally, given a user–item interaction triplet (u, i, ru,i) where u, i, and ru,i indicate a user, an item, and a review, respectively, we generate a quadruplet (u, i, pu,i, au,i) through rationale distillation. Here, pu,i and au,i refer to the distilled user preference and item attribute, respectively. 2.2 Rationale-aware Recommendation The RDRec uses PrOmpt Distillation (POD) (Li et al., 2023b) as its backbone. POD converts three recommendation tasks into LLM-based text generation tasks, and then distills continuous prompt vectors from task templates. These tasks are (i) sequential recommendations, predicting the next item through the user’s ordered interactions, (ii) top-N recommendations, recommending the top N items not yet engaged with by the user, and (iii) explanation generation for a user’s interactions. In contrast to POD, we incorporate an additional rationale generation task, consisting of a user preference generation and an item attribute generation. Specifically, following POD, we first distill prompt vectors (“<P4>” and “<P5>” in Fig. 3) from the templates of “Generate user_{#u}’s preference” and “Generate item_{#i}’s attribute”, where #u and #i denote the user and item IDs. Then, we concatenate prompt vectors with user and item IDs as the input, and the generated preference pu,i and attribute au,i as the output to train the model. To address the token composing issue (i.e., the token of “user_123” is often tokenized by LLMs as a sequence of [“user”, “_”, “12” and “3”]), we use the whole-word embedding (Geng et al., 2022) to treat each sequence of ID tokens as a complete unit, making it distinguishable as a word. Fig. 3 illustrates the input and output example of the four tasks. We define a pair of input-output 66 Figure 3: Illustration of input and output of four tasks by RDRec in the prompt distillation setting. words as X = [x1, ..., x|X|] and Y = [y1, ..., y|Y |], respectively. We then concatenate the tokens of the input with prompt vectors and obtain [x1, ..., x|X|, p1, ..., p|P|]. After adding the whole-word representation [w1, ..., w|X|+|P|], we feed them into the smaller model in RDRec to obtain a probability distribution p(y|Y<t, X) over a vocabulary at each step t, where Y<t denotes the tokens generated before step t. We adopt a log-likelihood loss function to optimize the model parameters Θ: LΘ = 1 |D| X (X,Y )∈D 1 |Y | |Y | X t=1 −log p(y|Y<t, X), (1) where D denotes the training set consisting of all input-output pairs for four tasks. |D| and |Y | denote the amount of training samples and the number of tokens in the output sequence, respectively. 2.3 Model Optimization and Inference Following POD, we shuffle the input-output pairs of four tasks and randomly select samples from each task in a specified proportion. We thereafter mixed these samples to train the RDRec model. During inference, we employ a beam search algorithm to generate results by selecting the word with the highest likelihood from the vocabulary. 3 Experiment 3.1 Experimental Setup Datasets and Metrics. Consistent with POD, we performed experiments on three public datasets, i.e., Sports & Outdoors, Beauty, and Toys & Games, which are collected from the Amazon dataset1. Each record in the dataset contains a user ID, an 1https://www.amazon.com/ Dataset #User #Item #Review Avg. Density (%) Sports 48,993 34,298 296,337 8.3 0.0453 Beauty 22,363 12,101 198,502 8.9 0.0734 Toys 19,804 22,086 167,597 8.6 0.0724 Table 1: Statistics of dataset. “#User”, “#Item”, “#Review”, and “Avg.” denote the number of users, items, reviews, and average user reviews, respectively. item ID, a rating, a textual review, and a timestamp. We split each dataset into training, validation, and test sets with a ratio of 8:1:1. The statistics of datasets are provided in Table 1. To evaluate the recommendation performance, we utilized the evaluation metrics of hit rate (HR)@k (H@k) and normalized discounted cumulative gain (NDCG)@k (N@k) with k ∈{1, 5, 10}. Baselines. We compared RDRec with ten baselines for sequential recommendations: CASER (Tang and Wang, 2018), HGN (Ma et al., 2019), GRU4Rec (Hidasi et al., 2015), BERT4Rec (Sun et al., 2019), FDSA (Zhang et al., 2019), SASRec (Kang and McAuley, 2018), S3-Rec (Zhou et al., 2020), P5 (Geng et al., 2022), RLS (Chu et al., 2023) and POD (Li et al., 2023b). We compared RDRec with five baselines for top-N recommendations: MF (Koren et al., 2009), MLP (Cheng et al., 2016), P5 (Geng et al., 2022), RLS (Chu et al., 2023) and POD (Li et al., 2023b). Implementation. For a fair comparison, RDRec used T5-small (Raffel et al., 2020) as the smaller model, aligning with the baselines P5 and POD. We used Llama-2-7b (Touvron et al., 2023) as the larger LM. We reported a 10-trial T-test to show the robustness of RDRec. Our RDRec was implemented and experimented with Pytorch on Nvidia GeForce RTX 3090 (24GB memory). The Appendix A.1 provides further details. 67 Models Sports Beauty Toys H@5 N@5 H@10 N@10 H@5 N@5 H@10 N@10 H@5 N@5 H@10 N@10 Caser 0.0116 0.0072 0.0194 0.0097 0.0205 0.0131 0.0347 0.0176 0.0166 0.0107 0.0270 0.0141 HGN 0.0189 0.0120 0.0313 0.0159 0.0325 0.0206 0.0512 0.0266 0.0321 0.0221 0.0497 0.0277 GRU4Rec 0.0129 0.0086 0.0204 0.0110 0.0164 0.0099 0.0283 0.0137 0.0097 0.0059 0.0176 0.0084 BERT4Rec 0.0115 0.0075 0.0191 0.0099 0.0203 0.0124 0.0347 0.0170 0.0116 0.0071 0.0203 0.0099 FDSA 0.0182 0.0122 0.0288 0.0156 0.0267 0.0163 0.0407 0.0208 0.0228 0.0140 0.0381 0.0189 SASRec 0.0233 0.0154 0.0350 0.0192 0.0387 0.0249 0.0605 0.0318 0.0463 0.0306 0.0675 0.0374 S3-Rec 0.0251 0.0161 0.0385 0.0204 0.0387 0.0244 0.0647 0.0327 0.0443 0.0294 0.0700 0.0376 P5 0.0387 0.0312 0.0460 0.0336 0.0508 0.0379 0.0664 0.0429 0.0648 0.0567 0.0709 0.0587 RSL 0.0392 0.0330 0.0512 0.0375 0.0508 0.0381 0.0667 0.0446 0.0676 0.0583 0.0712 0.0596 POD 0.0497 0.0399 0.0579 0.0422 0.0559 0.0420 0.0696 0.0471 0.0692 0.0589 0.0749 0.0601 Ours 0.0505 0.0408 0.0596 0.0433 0.0601 0.0461 0.0743 0.0504 0.0723 0.0593 0.0802 0.0605 Impv (%). 1.6 2.2 2.8 2.5 7.5* 9.8* 6.7* 7.1* 4.4* 0.6 7.1* 0.7 p-value 6.3e-1 5.1e-1 2.7e-1 3.8e-1 8.1e-3 2.4e-3 2.1e-2 2.5e-2 1e-2 5.8e-1 1.7e-5 5.9e-1 Table 2: Performance comparison on sequential recommendation. Bold: Best, underline: Second best. “*” indicates that the improvement is statistically significant (p-value < 0.05) in the 10-trial T-test. All of the baselines are reported by the papers (Geng et al., 2022; Chu et al., 2023; Li et al., 2023b), except for the POD model. Models Sports Beauty Toys H@1 H@5 N@5 H@10 N@10 H@1 H@5 N@5 H@10 N@10 H@1 H@5 N@5 H@10 N@10 MF 0.0314 0.1404 0.0848 0.2563 0.1220 0.0311 0.1426 0.0857 0.2573 0.1224 0.0233 0.1066 0.0641 0.2003 0.0940 MLP 0.0351 0.1520 0.0927 0.2671 0.1296 0.0317 0.1392 0.0848 0.2542 0.1215 0.0252 0.1142 0.0688 0.2077 0.0988 P5 0.0726 0.1955 0.1355 0.2802 0.1627 0.0608 0.1564 0.1096 0.2300 0.1332 0.0451 0.1322 0.0889 0.2023 0.1114 RSL 0.0892 0.2092 0.1502 0.3001 0.1703 0.0607 0.1612 0.1110 0.2209 0.1302 0.0389 0.1423 0.0825 0.1926 0.1028 POD 0.0927 0.2105 0.1539 0.2889 0.1782 0.0846 0.1931 0.1404 0.2677 0.1639 0.0579 0.1461 0.1029 0.2119 0.1244 Ours 0.1285 0.2747 0.2033 0.3683 0.2326 0.1203 0.2572 0.1902 0.3380 0.2160 0.0660 0.1655 0.1171 0.2375 0.1398 Impv. (%) 38.6* 30.5* 32.1* 27.5* 30.5* 42.2* 33.2* 35.8* 26.3* 31.8* 13.9* 13.2* 13.8* 12.1* 12.4* p-value 2.3e-14 1.1e-14 2.8e-15 1.1e-16 5.0e-15 3.8e-15 2.0e-15 1.7e-15 2.7e-15 2.1e-15 5.6e-7 4.4e-8 2.4e-8 1.2e-8 9.8e-9 Table 3: Comparison on top-N recommendation. The T-test shows the results by RDRec and the second-best, POD. 3.2 Experimental Results Tables 2 and 3 show comparative results between RDRec and baselines. We can see that the RDRec consistently surpasses the runner-ups, POD and RSL, with the improvement of 0.5 ∼9.8% in H@k and N@k for sequential recommendations, and 12.1 ∼42.2% in H@k and N@k for top-N recommendations, where k ∈{1, 5, 10}. This highlights the effectiveness of learning interaction rationales to improve both recommendation tasks. We also observed that RDRec exhibits greater improvement in top-N recommendations compared to sequential recommendations. This indicates that specifying user preferences and item attributes is more beneficial to recommending top-N unknown candidates, whereas sequential recommenders rely more on capturing correct behavioral patterns for predicting the user’s next choice. We conducted an ablation experiment to examine the rationale distillation. The result in Table. 4 shows that distilling user preferences and item attributes from reviews is advantageous for both sequential and top-N recommendations. We can see that specifying item profiles is generally more effective for top-N recommendation, whereas specifying user profiles is more effective for sequential recommendation on the Sports and Beauty datasets. UsP ItA Sports Beauty Toys H@10 N@10 H@10 N@10 H@10 N@10 Sequential recommendation ✘ ✘ 0.0566 0.0408 0.0705 0.0479 0.0768 0.0573 ✔ ✘ 0.0581 0.0425 0.0729 0.0494 0.0787 0.0589 ✘ ✔ 0.0573 0.0411 0.0712 0.0492 0.0788 0.0593 ✔ ✔ 0.0596 0.0433 0.0743 0.0504 0.0802 0.0605 Top-N recommendation ✘ ✘ 0.2977 0.1850 0.2777 0.1701 0.2200 0.1284 ✔ ✘ 0.3509 0.2200 0.3080 0.1912 0.2214 0.1307 ✘ ✔ 0.3513 0.2249 0.3275 0.2048 0.2321 0.1370 ✔ ✔ 0.3683 0.2326 0.3380 0.2160 0.2375 0.1398 Table 4: Ablation study. “w/o X” denotes the removed parts. “UsP” and “ItA” indicate the distillation of user preferences and item attributes, respectively. 3.3 Error Analysis of Sequential Recommendation We conducted an error analysis to examine the sequential recommendations by RDRec. We identified two noteworthy error cases: Case (i). RDRec may prioritize the next item based on a user’s earlier interactions rather than recent ones. One reason is that the Transformer (Vaswani et al., 2017) in T5 excels in capturing long-term dependencies, while it may cause RDRec to pay less attention to recent interactions. This suggests to enhance its self-attention (Fan et al., 2022) or develop short-term prompt-aware templates for LLM-based sequential recommendations. 68 Ratio Sports Beauty Toys EG:RG:SR:TR H@10 N@10 H@10 N@10 H@10 N@10 Sequential recommendation 1 : 1 : 1 : 1 0.0596 0.0433 0.0743 0.0504 0.0789 0.0594 1 : 1 : 2 : 1 0.0593 0.0431 0.0735 0.0502 0.0790 0.0601 1 : 1 : 1 : 3 0.0592 0.0426 0.0702 0.0445 0.0802 0.0605 Top-N recommendation 1 : 1 : 1 : 1 0.3261 0.2022 0.2855 0.1854 0.2214 0.1307 1 : 1 : 2 : 1 0.2822 0.1722 0.2693 0.1584 0.1872 0.1037 1 : 1 : 1 : 3 0.3683 0.2160 0.3380 0.2160 0.2375 0.1398 Table 5: Performance on the sample ratios of various tasks. “EG”, “RG”, “SR” and “TR” denote explanation and rationale generation, and sequential and top-N recommendations, respectively. Case (ii). RDRec often disregards popular items for users because they do not align with their sequential patterns. One possible reason for this is that, during training, RDRec selects random subsequences from the user interaction sequence and predicts the last item of each subsequence. This process emphasizes sequential patterns but possibly sacrifices the model’s capability to identify popular items. This suggests introducing a popularitybased interaction graph to help the model be aware of popular high-order neighbors. 3.4 In-Depth Analysis of RDRec To better understand the RDRec, we conducted indepth experiments and analysis. The Appendix A.2 provides further analyses. Effect of sample ratios. We observe from Table 5 that on the Toys dataset, increasing the ratio of topN samples for training RDRec improves sequential recommendations, while in major cases a higher ratio of sequential samples always harms top-N recommendations. One reason is that the training strategy of sequential tasks prioritizes sequential patterns while compromising its ability to detect unknown items. Computational complexity. Both Llama2 and T5 are Transformer-based models, with computational complexity of O(L2), where L is the number of word tokens. Consequently, RDRec’s computational complexity relies on user interaction count rather than the number of users and items. Compared with other complex ID-based methods, such as graph convolution network-based approaches with O((M + N)2) (He et al., 2020; Yu et al., 2022; Wang et al., 2023c, 2024), where M and N are the numbers of users and items, respectively, and (M+N) ≫L in Table 1, RDRec exhibits reduced computational demands, thereby rendering it suitable for deployment in large-scale applications. Study of rationale distillation. We investigated the rationale distillation and obtained two findings. One is that, even when a user negatively reviews an item, the LLM objectively specifies user requirements and item attributes. For instance, in the following input, the customer advises not buying the book unless the kids are truly interested in it. However, many others provide positive comments, such as “The toy was really nice.” and “Fun little toy to match the book.” Input: My Nephew is all about trucks and machines it’s cute for him but unless the kid’s really into the book or just general construction I wouldn’t bother. This indicates that objective profiles (e.g., a book and its content) are more crucial than users’ subjective opinions in real-world recommendations. We found that the generated item attributes by the LLM are relatively objective which is shown as follows: Output: The user prefers items that are cute and appealing to children, but not necessarily related to construction or machines. The item’s attributes include being a colorful and engaging picture book that teaches children about different construction vehicles. This could be a reason for the noticeable improvement in performance by learning rationales. The other observation is that, when a review is extremely short, the prompt could urge the LLM to produce hallucinations during rationale distillations. Recently, Zhang et al. (2023a) have proposed to mitigate hallucinations of LLM-based recommender to enhance its performance. This is a rich space for further exploration (Liu et al., 2022; Gao et al., 2023; Peng et al., 2023). 4 Conclusion We proposed a compact RDRec model to learn the underlying rationales for interactions generated by a larger LM. By learning rationales from all related reviews, RDRec effectively specifies user and item profiles for recommendations. Experimental results showed the effectiveness of our RDRec. Future work involves (i) exploring better prompts for sequential recommendations, and (ii) enhancing explanation generation in RDRec. 69 Acknowledgements We would like to thank anonymous reviewers for their thorough comments and suggestions. This work is supported by the China Scholarship Council (No.202208330093) and JKA (No.2023M-401). Ethics Statement This paper does not involve the presentation of a new dataset, an NLP application, and the utilization of demographic or identity characteristics information. Limitation The hallucination issue during rationale distillation remains unsolved. Additionally, RDRec faces an unfaithful reasoning problem, misinterpreting user opinions about candidate items despite delivering correct recommendations. References Hao Cheng, Shuo Wang, Wensheng Lu, Wei Zhang, Mingyang Zhou, Kezhong Lu, and Hao Liao. 2023. Explainable recommendation with personalized review retrieval and aspect learning. In the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 51–64. Heng-Tze Cheng, Levent Koc, Jeremiah Harmsen, Tal Shaked, Tushar Chandra, Hrishi Aradhye, Glen Anderson, Greg Corrado, Wei Chai, Mustafa Ispir, et al. 2016. Wide & deep learning for recommender systems. In Proceedings of the 1st workshop on deep learning for recommender systems, pages 7–10. Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, et al. 2023. Leveraging large language models for pre-trained recommender systems. arXiv preprint arXiv:2308.10837. Ziwei Fan, Zhiwei Liu, Yu Wang, Alice Wang, Zahra Nazari, Lei Zheng, Hao Peng, and Philip S Yu. 2022. Sequential recommendation via stochastic selfattention. In Proceedings of the ACM Web Conference 2022, pages 2036–2047. Luyu Gao, Zhuyun Dai, Panupong Pasupat, Anthony Chen, Arun Tejasvi Chaganty, Yicheng Fan, Vincent Zhao, Ni Lao, Hongrae Lee, Da-Cheng Juan, et al. 2023. Rarr: Researching and revising what language models say, using language models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 16477–16508. Shijie Geng, Shuchang Liu, Zuohui Fu, Yingqiang Ge, and Yongfeng Zhang. 2022. Recommendation as language processing (rlp): A unified pretrain, personalized prompt & predict paradigm (p5). In Proceedings of the 16th ACM Conference on Recommender Systems, pages 299–315. Jesse Harte, Wouter Zorgdrager, Panos Louridas, Asterios Katsifodimos, Dietmar Jannach, and Marios Fragkoulis. 2023. Leveraging large language models for sequential recommendation. In Proceedings of the 17th ACM Conference on Recommender Systems, pages 1096–1102. Xiangnan He, Kuan Deng, Xiang Wang, Yan Li, Yongdong Zhang, and Meng Wang. 2020. Lightgcn: Simplifying and powering graph convolution network for recommendation. In Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval, pages 639–648. Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, and Julian McAuley. 2023. Large language models as zero-shot conversational recommenders. In Proceedings of the 32nd ACM international conference on information and knowledge management, pages 720–730. Balázs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, and Domonkos Tikk. 2015. Session-based recommendations with recurrent neural networks. arXiv preprint arXiv:1511.06939. Cheng-Yu Hsieh, Chun-Liang Li, Chih-Kuan Yeh, Hootan Nakhost, Yasuhisa Fujii, Alexander Ratner, Ranjay Krishna, Chen-Yu Lee, and Tomas Pfister. 2023. Distilling step-by-step! outperforming larger language models with less training data and smaller model sizes. arXiv preprint arXiv:2305.02301. Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, and Xing Xie. 2023. Recommender ai agent: Integrating large language models for interactive recommendations. arXiv preprint arXiv:2308.16505. Wang-Cheng Kang and Julian McAuley. 2018. Selfattentive sequential recommendation. In 2018 IEEE international conference on data mining (ICDM), pages 197–206. IEEE. Yehuda Koren, Robert Bell, and Chris Volinsky. 2009. Matrix factorization techniques for recommender systems. Computer, 42(8):30–37. Yuxuan Lei, Jianxun Lian, Jing Yao, Xu Huang, Defu Lian, and Xing Xie. 2023. Recexplainer: Aligning large language models for recommendation model interpretability. arXiv preprint arXiv:2311.10947. Jian Li, Jieming Zhu, Qiwei Bi, Guohao Cai, Lifeng Shang, Zhenhua Dong, Xin Jiang, and Qun Liu. 2022. Miner: multi-interest matching network for news recommendation. In Findings of the Association for Computational Linguistics: ACL 2022, pages 343– 352. 70 Lei Li, Yongfeng Zhang, and Li Chen. 2023a. Personalized prompt learning for explainable recommendation. ACM Transactions on Information Systems, 41(4):1–26. Lei Li, Yongfeng Zhang, and Li Chen. 2023b. Prompt distillation for efficient llm-based recommendation. In Proceedings of the 32nd ACM International Conference on Information and Knowledge Management, pages 1348–1357. Xinyu Lin, Wenjie Wang, Yongqi Li, Fuli Feng, SeeKiong Ng, and Tat-Seng Chua. 2023. A multi-facet paradigm to bridge large language model and recommendation. arXiv preprint arXiv:2310.06491. Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2023. Pretrain, prompt, and predict: A systematic survey of prompting methods in natural language processing. ACM Computing Surveys, 55(9):1–35. Tianyu Liu, Yizhe Zhang, Chris Brockett, Yi Mao, Zhifang Sui, Weizhu Chen, and Bill Dolan. 2022. A token-level reference-free hallucination detection benchmark for free-form text generation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics. Ilya Loshchilov and Frank Hutter. 2017. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101. Chen Ma, Peng Kang, and Xue Liu. 2019. Hierarchical gating networks for sequential recommendation. In Proceedings of the 25th ACM SIGKDD international conference on knowledge discovery & data mining, pages 825–833. Daniel McKee, Justin Salamon, Josef Sivic, and Bryan Russell. 2023. Language-guided music recommendation for video via prompt analogies. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14784–14793. Zhongjian Miao, Wen Zhang, Jinsong Su, Xiang Li, Jian Luan, Yidong Chen, Bin Wang, and Min Zhang. 2023. Exploring all-in-one knowledge distillation framework for neural machine translation. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 2929–2940. Baolin Peng, Michel Galley, Pengcheng He, Hao Cheng, Yujia Xie, Yu Hu, Qiuyuan Huang, Lars Liden, Zhou Yu, Weizhu Chen, et al. 2023. Check your facts and try again: Improving large language models with external knowledge and automated feedback. arXiv preprint arXiv:2302.12813. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research, 21(1):5485–5551. Xubin Ren, Wei Wei, Lianghao Xia, Lixin Su, Suqi Cheng, Junfeng Wang, Dawei Yin, and Chao Huang. 2023. Representation learning with large language models for recommendation. arXiv preprint arXiv:2310.15950. Scott Sanner, Krisztian Balog, Filip Radlinski, Ben Wedin, and Lucas Dixon. 2023. Large language models are competitive near cold-start recommenders for language-and item-based preferences. In Proceedings of the 17th ACM conference on recommender systems, pages 890–896. Fei Sun, Jun Liu, Jian Wu, Changhua Pei, Xiao Lin, Wenwu Ou, and Peng Jiang. 2019. Bert4rec: Sequential recommendation with bidirectional encoder representations from transformer. In Proceedings of the 28th ACM international conference on information and knowledge management, pages 1441–1450. Jiaxi Tang and Ke Wang. 2018. Personalized top-n sequential recommendation via convolutional sequence embedding. In Proceedings of the eleventh ACM international conference on web search and data mining, pages 565–573. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30. Vijay Viswanathan, Luyu Gao, Tongshuang Wu, Pengfei Liu, and Graham Neubig. 2023. Datafinder: Scientific dataset recommendation from natural language descriptions. In the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 10288–10303. Boshi Wang, Sewon Min, Xiang Deng, Jiaming Shen, You Wu, Luke Zettlemoyer, and Huan Sun. 2023a. Towards understanding chain-of-thought prompting: An empirical study of what matters. In the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2717– 2739. Peifeng Wang, Zhengyang Wang, Zheng Li, Yifan Gao, Bing Yin, and Xiang Ren. 2023b. Reasoning implicit sentiment with chain-of-thought prompting. In the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 1171–1182. Xinfeng Wang, Fumiyo Fukumoto, Jin Cui, Yoshimi Suzuki, Jiyi Li, and Dongjin Yu. 2023c. Eedn: Enhanced encoder-decoder network with local and global context learning for poi recommendation. In 71 Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 383–392. Xinfeng Wang, Fumiyo Fukumoto, Jin Cui, Yoshimi Suzuki, and Dongjin Yu. 2024. Nfarec: A negative feedback-aware recommender model. arXiv preprint arXiv:2404.06900. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837. Wei Wei, Xubin Ren, Jiabin Tang, Qinyong Wang, Lixin Su, Suqi Cheng, Junfeng Wang, Dawei Yin, and Chao Huang. 2023. Llmrec: Large language models with graph augmentation for recommendation. arXiv preprint arXiv:2311.00423. Zhengyi Yang, Jiancan Wu, Yanchen Luo, Jizhi Zhang, Yancheng Yuan, An Zhang, Xiang Wang, and Xiangnan He. 2023. Large language model can interpret latent space of sequential recommender. arXiv preprint arXiv:2310.20487. Junliang Yu, Hongzhi Yin, Xin Xia, Tong Chen, Lizhen Cui, and Quoc Viet Hung Nguyen. 2022. Are graph augmentations necessary? simple graph contrastive learning for recommendation. In Proceedings of the 45th international ACM SIGIR conference on research and development in information retrieval, pages 1294–1303. Zhenrui Yue, Sara Rabhi, Gabriel de Souza Pereira Moreira, Dong Wang, and Even Oldridge. 2023. Llamarec: Two-stage recommendation using large language models for ranking. arXiv preprint arXiv:2311.02089. An Zhang, Leheng Sheng, Yuxin Chen, Hao Li, Yang Deng, Xiang Wang, and Tat-Seng Chua. 2023a. On generative agents in recommendation. arXiv preprint arXiv:2310.10108. Junjie Zhang, Ruobing Xie, Yupeng Hou, Wayne Xin Zhao, Leyu Lin, and Ji-Rong Wen. 2023b. Recommendation as instruction following: A large language model empowered recommendation approach. arXiv preprint arXiv:2305.07001. Tingting Zhang, Pengpeng Zhao, Yanchi Liu, Victor S Sheng, Jiajie Xu, Deqing Wang, Guanfeng Liu, Xiaofang Zhou, et al. 2019. Feature-level deeper selfattention network for sequential recommendation. In IJCAI, pages 4320–4326. Zizhuo Zhang and Bang Wang. 2023. Prompt learning for news recommendation. In Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 227–237. Kun Zhou, Hui Wang, Wayne Xin Zhao, Yutao Zhu, Sirui Wang, Fuzheng Zhang, Zhongyuan Wang, and Ji-Rong Wen. 2020. S3-rec: Self-supervised learning for sequential recommendation with mutual information maximization. In Proceedings of the 29th ACM international conference on information & knowledge management, pages 1893–1902. Yingjie Zhu, Jiasheng Si, Yibo Zhao, Haiyang Zhu, Deyu Zhou, and Yulan He. 2023. Explain, edit, generate: Rationale-sensitive counterfactual data augmentation for multi-hop fact verification. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 13377–13392. 72 A Appendix A.1 Experimental Details This section provides further experimental results and implementation setup. We focus on the fourth task as Geng et al. (2022) have conducted a thorough study for the others. A.1.1 Effect of Various Sample Ratios Table 6 shows the effect of various sample ratios on the recommendation performance. We can see that the sample ratio of various tasks for pretraining the model would influence the recommendation performance. Specifically, on the Toys dataset, increasing the ratio of top-N samples sometimes improves both sequential and top-N recommendations. In contrast, a higher ratio of sequential samples often negatively affects the performance of top-N recommendations across all datasets. The reason is that during training, RDRec selects random user interaction subsequences and predicts the last item of each subsequence. This process emphasizes sequential patterns, although possibly sacrifices the model’s capability to identify popular items. A.1.2 Execution Time Table 7 shows the execution time in various stages by RDRec on three datasets. These results were obtained through Nvidia GeForce RTX 3090 (24GB memory). We can see that RDRec efficiently makes inferences for recommendations with a small backbone, while the interaction rationale distillation and pre-training are time-consuming. Fortunately, these processes are only required once. A.1.3 Implementation Details For a fair comparison, all the hyperparameters of RDRec are in the same setting as POD. Specifically, both the encoder and decoder consist of 6 layers with each layer comprising an 8-headed attention layer. The vocabulary of T5 contains a total number of 32,100 tokens, with an embedding dimensionality of 512. We iteratively and randomly sampled a segment from a user’s item sequence for training the sequential recommendation task. The number of negative items for top-N recommendation is set to 99 for both training and evaluation. We used the AdamW optimizer (Loshchilov and Hutter, 2017). We set the number of prompt vectors to 3 for all tasks, the batch size for training all three tasks to 64, and the learning rate to 0.001 for the Sports dataset and 0.0005 for both the Beauty and Toys datasets. We exploit the discrete prompt templates for different tasks from (Geng et al., 2022). During training, we save a checkpoint if the total validation loss of the model in all tasks is the lowest for the current epoch. If this doesn’t occur 5 times, we terminate the training process and load the best checkpoint for evaluation. At the inference stage, we set the number of beams at 20 for sequential and top-N recommendations. For generation tasks, we apply group beam search with the number of beams and beam groups set to 21 and 3, respectively. A.1.4 Baselines To evaluate the performance of sequential and topN recommendations, we compared our RDRec with twelve baselines: • MF (Koren et al., 2009) accesses the inner product between user and item latent factors for predicting users’ preference for candidates. • GRU4Rec (Hidasi et al., 2015) regards the entire item sequence of each user as the user’s session to recommend. • MLP (Cheng et al., 2016) exploits a stack of nonlinear layers to learn user and item embeddings for making recommendations. • CASER (Tang and Wang, 2018) treats user interactions as images and employs 2-dimensional convolutions to capture sequential patterns. • SASRec (Kang and McAuley, 2018) exploits Markov Chains to excavate short-term semantics in users’ sequential patterns. • HGN (Ma et al., 2019) exploits a novel gating strategy to model users’ long- and short-term interests in candidate items. • BERT4Rec (Sun et al., 2019) proposes to leverage the BERT-style cloze task for the sequential recommender algorithm. • FDSA (Zhang et al., 2019) incorporates item features with item sequences of users to perform recommendations. • S3-Rec (Zhou et al., 2020) learns users’ latent behavioral features via employing a self-supervised learning paradigm. • P5 (Geng et al., 2022) converts three different recommendation tasks into textual generation tasks using LLMs for recommendations. • RSL (Chu et al., 2023) adopts novel training and inference strategies to deliver LLM-based recommendations. • POD (Li et al., 2023b) refines P5 through prompt distillation to make efficient and precise recommendations. 73 Ratio Sports Beauty Toys EG:RG:SR:TR H@5 N@5 H@10 N@10 H@5 N@5 H@10 N@10 H@5 N@5 H@10 N@10 Sequential recommendation 1 : 1 : 1 : 1 0.0503 0.0402 0.0596 0.0433 0.0601 0.0461 0.0743 0.0504 0.0716 0.0579 0.0789 0.0594 1 : 1 : 2 : 1 0.0501 0.0398 0.0593 0.0431 0.0595 0.0457 0.0735 0.0502 0.0713 0.0581 0.0790 0.0601 1 : 1 : 3 : 1 0.0496 0.0399 0.0578 0.0420 0.0573 0.0417 0.0662 0.0452 0.0713 0.0583 0.0792 0.0601 1 : 1 : 1 : 2 0.0489 0.0374 0.0571 0.0387 0.0565 0.0419 0.0715 0.0466 0.0717 0.0588 0.0799 0.0602 1 : 1 : 1 : 3 0.0483 0.0369 0.0592 0.0426 0.0547 0.0395 0.0702 0.0445 0.0723 0.0593 0.0802 0.0605 Top-N recommendation 1 : 1 : 1 : 1 0.2381 0.1750 0.3261 0.2022 0.2136 0.1516 0.2885 0.1854 0.1482 0.1062 0.2144 0.1307 1 : 1 : 2 : 1 0.2042 0.1476 0.2822 0.1722 0.1845 0.1350 0.2693 0.1584 0.1253 0.0876 0.1872 0.1037 1 : 1 : 3 : 1 0.1524 0.1080 0.2101 0.1298 0.1424 0.1024 0.2178 0.1359 0.1118 0.0780 0.1803 0.0998 1 : 1 : 1 : 2 0.2439 0.1810 0.3303 0.2067 0.2372 0.1784 0.3237 0.2030 0.1579 0.1091 0.2221 0.1339 1 : 1 : 1 : 3 0.2747 0.2033 0.3683 0.2326 0.2572 0.1902 0.3380 0.2160 0.1655 0.1171 0.2375 0.1398 Table 6: Performance comparison on various sample ratios for training RDRec. “EG”, “RG”, “SR” and “TR” denote explanation generation, rationale generation, and sequential and top-N recommendations, respectively. Datasets Stages Distillation Pre-training SR TR Sports 16h46m28s 16h23m23s 15m03s 18m23s Beauty 11h50m14s 12h45m12s 13m33s 16m07s Toys 09h13m05s 08h39m37s 16m25s 18m21s Table 7: Execution time in various stages. “SR” and “TR” represent the cumulative inference time for all users in sequential and top-N recommendations, respectively. “h”, “m”, and “s” refer to “hours” and “minutes”, and “seconds” respectively. A.2 Further Analyses A.2.1 Hallucination by LLMs We observed that when a review is too short, the LLM might produce hallucinations. The following example illustrates a rationale with hallucinations generated by the LLM: Input: This is a fantastic game. Output: The user prefers games with engaging storylines . The item’s attributes include a realistic game world , immersive sound effects , and smooth gameplay . The contents of “engaging storylines”, “immersive sound effects” and “smooth gameplay” marked by gray are hallucinations overly inferred by the LLM. Toward this, mitigating hallucinations of LLM-based recommender is a rich space for future exploration (Liu et al., 2022; Gao et al., 2023; Peng et al., 2023; Zhang et al., 2023a). A.2.2 Effect of Explanation Generation We observed that RDRec can generate correct explanations in many cases, such as the explanation “This is a great product for the price,” for the provided review “very good quality for the price.” However, RDRec sometimes recommends candidates correctly but provides explanations that completely differ from the user’s review. For instance, the generated explanation is, “Absolutely great product,” whereas the user’s actual review is, “I wouldn’t recommend this for painting your full nail.” One possible reason is that RDRec has learned to prioritize predicting user-item interaction over considering the rationale for making recommendations. This is a challenging yet intriguing path to further improve RDRec. A.2.3 The Whole-Word Embedding To address the token composing issue (i.e., the token of “user_1234” is often tokenized by the tokenizer of LLMs as a sequence of [“user”, “_”, “12” and “34”]), we employed the whole-word embedding (Geng et al., 2022) to ensure that each sequence of ID tokens is a complete unit and can be distinguished from a word. It is noteworthy that the whole-word embedding will not cause scalability issues because we only need to identify which tokens represent the same user (or item). For instance, given a token list [“P1”, “P2”, “P3”, “user”, “_”, “12”, “34”, “item”, “_”, “98”, “76”], the index list over the whole-word embedding vocabulary is [0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2]. Since the number of negative samples is set to 99 and the average user interaction is less than 9 in our datasets, an embedding matrix (512 * 512) with a maximum incremental number of 512 is sufficient. Even if a user’s interaction count exceeds 512, we only need to expand the wholeword embedding matrix, which is acceptable for a real-world deployment. 74
2024_short
6
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 657–664 August 11-16, 2024 ©2024 Association for Computational Linguistics Cross-Modal Projection in Multimodal LLMs Doesn’t Really Project Visual Attributes to Textual Space Gaurav Verma Minje Choi Kartik Sharma Jamelle Watson-Daniels Sejoon Oh Srijan Kumar Georgia Institute of Technology, Harvard University {gverma, minje.choi, ksartik, soh337, srijan}@gatech.edu [email protected] Abstract Multimodal large language models (MLLMs) like LLaVA and GPT-4(V) enable generalpurpose conversations about images with the language modality. As off-the-shelf MLLMs may have limited capabilities on images from domains like dermatology and agriculture, they must be fine-tuned to unlock domain-specific applications. The prevalent architecture of current open-source MLLMs comprises two major modules: an image-language (cross-modal) projection network and a large language model. It is desirable to understand the roles of these two modules in modeling domain-specific visual attributes to inform the design of future models and streamline the interpretability efforts on the current models. To this end, via experiments on 4 datasets and under 2 fine-tuning settings, we find that as the MLLM is finetuned, it indeed gains domain-specific visual capabilities, but the updates do not lead to the projection extracting relevant domain-specific visual attributes. Our results indicate that the domain-specific visual attributes are modeled by the LLM, even when only the projection is fine-tuned. Through this study, we offer a potential reinterpretation of the role of crossmodal projections in MLLM architectures. 1 Introduction The recent wave of advancements in large language models (LLMs) has equipped them with the ability to “see” images, leading to multimodal large language models (MLLMs) like LLaVA (Liu et al., 2023c), GPT-4(V) (Achiam et al., 2023), and Gemini (Anil et al., 2023). MLLMs unlock the potential to converse with visual data using language. However, existing MLLMs are trained and evaluated for general-purpose multimodal tasks like question-answering on natural images1 (Liu et al., 2023c; AI, 2024), which limits their applicability in 1We use ‘natural images’ or ‘internet images’ to refer to common images encountered on social media platforms and the Web and contrast them with domain-specific images. Classify this image into one of the following categories Classify this image into one of the following categories “Potato leaf early blight” “Frilly texture” 🔥 ❄ 🔥 🔥 Figure 1: Overview of our study. While the MLLM’s domain-specific visual capability can be improved using fine-tuning strategies, the domain-specific richness of the image’s post-projection representation does not improve. Results indicate that domain-specific visual attributes are predominantly modeled by the LLM parameters (whether frozen or not) and the projection does not necessarily play a role in mapping visual attributes to the LLM space. specific domains like agriculture and dermatology. MLLMs with domain-specific visual capabilities can transform workflows in several industries, including healthcare, agriculture, circuit design, and satellite imaging (Miotto et al., 2018; Ferentinos, 2018; Anilturk et al., 2023; Kaselimi et al., 2022). While fine-tuning can improve domain-specific visual capabilities of general-purpose MLLMs, we adopt domain-specific fine-tuning as a strategic approach to understand the roles that the MLLM’s key architectural components play in modeling visual attributes. A better understanding of the roles of MLLM’s components in modeling visual attributes can inform future design choices as well as direct interpretability efforts. Architecturally, open-source MLLMs comprise two key components: (i) a cross-modal projection layer that connects image representations with the LLM, and (ii) the LLM that processes the projected image representation and the text tokens; see Figure 1 (left). In the context of the projec657 tion, researchers often consider the projection layer as the unit responsible for aligning features/concepts from the image to the LLM space (Li et al., 2023; Lin et al., 2023; Moon et al., 2023). Consequently, one prevalent fine-tuning strategy to adapt MLLMs for domain-specific visual tasks is to update the projection while keeping the LLM parameters frozen (Moon et al., 2023). Alternatively, the projection and the LLM parameters can be finetuned concurrently (Liu et al., 2023b). In this work, we use domain-specific fine-tuning using the above two strategies to understand the role of the projection and the LLM parameters in acquiring domain-specific image modeling capabilities. We posit that if the projection plays a critical role in acquiring domain-specific image modeling capabilities, the post-projection representation – i.e., the representation of the image transformed by the projection, should be richer2 in domain-specific features. Conversely, if the post-projection representation is not richer in domain-specific features, the domain-specific features are being identified or modeled by the LLM parameters.3 Our experiments and analysis with 4 different datasets show that, as expected, both the finetuning strategies boost domain-specific closed-set image classification performance of the MLLM. However, none of the strategies lead to extraction of richer domain-specific features by the update in the projection layer; see Figure 1 (right). This indicates that as MLLMs are fine-tuned to classify domain-specific images, the identification of domain-specific image attributes occurs in the LLM parameters, whether frozen or not. More broadly, our results add to the existing evidence that deep neural networks can be inherently multimodal (Goh et al., 2021; Schwettmann et al., 2023), and LLMs could model visual data with minimal assistance from the cross-modal projection. We first discuss the fine-tuning strategies to improve the domain-specific capabilities of MLLMs (Section 2) and then analyze the role of projection in acquiring the new domain-specific capabilities (Section 3). Finally, we discuss the implications of our work and the future directions (Section 4). 2We use domain-specific richness to indicate the “expressive power” of the representations (Bengio et al., 2012) towards the domain-specific task. 3Project webpage: https://claws-lab.github.io/ projection-in-MLLMs/ 2 Effect of Fine-tuning Projection Layer versus the Entire Multimodal LLM We are interested in exploring two potential finetuning strategies that could help an MLLM in gaining domain-specific visual capabilities. The first approach involves simply fine-tuning the vision-tolanguage projection, e.g., a simple two-layer MLP with ∼20M parameters. The second approach involves training the entire MLLM – i.e., the projection layer + the LLM with ∼7B parameters. We conduct all our experiments with the LLaVA-1.5 model (Liu et al., 2023b), which uses the LLaMA2-7B (Touvron et al., 2023) as the LLM backbone, as it is a strong representative of open-source stateof-the-art multimodal LLMs (Ge et al., 2023; Liu et al., 2023a; Yu et al., 2023). Setting 1: Only fine-tuning the projection layer. LLaVA-1.5 involves pre-training the cross-modal projection layers to align image features with the pre-trained LLM’s token embeddings by maximizing the next-token prediction likelihood of the MLLM. Let Xa denotes the ground-truth output corresponding to the question Xq regarding the image encoding Xv, which is obtained from the frozen vision-encoder of CLIP (Radford et al., 2021). The projection layer, parameterized by ϕ, is trained to elicit the correct response from the frozen LLM, token-by-token while using the projected image-encoding Hv = ϕ(Xv), and considering previous tokens of the ground-truth answer. See Figure 2 (Appendix) for a pictorial illustration of the formulation. Since our focus is to perform domain-specific image classification using MLLMs, we consider Xa = <label> for a given image and construct Xq as: Classify this image into one of the following categories relating to <task>: <classes_string>. Only output a single final classification label and NOTHING ELSE. For each example, we randomly shuffle the order of classes inside <classes_string> to avoid any position bias. We fine-tune the projection layers of the LLaVA-1.5 model for 1 epoch using the default hyper-parameters (Liu et al., 2023b). During inference, we perform zero-shot classification using the same prompt above for the MLLM with the updated projection. Setting 2: Fine-tuning the MLLM end-to-end. Alternatively, we fine-tune all the MLLM parameters, i.e., the projection layers and the LLM parameters concurrently by maximizing the next token658 MODELS/VARIANTS AGRICULTURE TEXTURES DERMATOLOGY HUMANITARIAN F1 Acc. F1 Acc. F1 Acc. F1 Acc. Random (Uniform) 0.0309 0.0339 0.0214 0.0218 0.0451 0.0483 0.2425 0.2664 CLIP (Zero-shot; LLaVA-1.5’s vision encoder) 0.4165 0.4492 0.4582 0.4984 0.1783 0.2401 0.4139 0.4718 LLaVA-1.5 (Zero-shot) 0.1064 0.1255 0.1882 0.2138 0.0658 0.0672 0.5169 0.5678 LLaVA-1.5 (FT-Proj with labels) 0.2221 0.2478 0.4505 0.4654 0.2932 0.3403 0.6227 0.7151 LLaVA-1.5 (FT-E2E with labels) 0.5984 0.6525 0.7446 0.7496 0.4947 0.5464 0.7950 0.8554 Table 1: Performance on domain-specific image classification datasets. Fine-tuning LLaVA-1.5 end-to-end leads to the best domain-specific performance, while only fine-tuning the projection leads to a notable gain over LLaVA’s zero-shot capabilities across all the datasets. It is worth noting that CLIP’s zero-shot performance, which is the pre-projection image representation that LLaVA uses, is notably better than LLaVA’s zero-shot performance. All the values are averaged over 5 experimental runs with different random seeds; the σ is < 1% for all values. prediction likelihood of the MLLM. In other words, we update both ϕ and θ, where θ denotes the LLM paramters. We use the same strategy to construct Xa and Xq as in the previous setting. Again, we fine-tune the LLaVA-1.5 model for 1 epoch using the default hyper-parameters. Similar to the above setting, after training the MLLM, we perform zeroshot domain-specific image classification using the Xq constructed above. We fine-tune the MLLM using these 2 strategies for each of the 4 datasets from different domains. Image datasets. The 4 image classification datasets correspond to the following tasks: leaf disease classification, visual texture detection, skin disease identification, and humanitarian category classification. Figure 3 (Appendix) provides an illustration of the datasets under consideration. (i) Agriculture: To enable scalable and early plant disease detection, Singh et al. (2020) curated PlantDoc. The dataset comprises 2,598 images categorized into 17 classes of leaf diseases. (ii) Textures: With an aim to evaluate whether visual models can identify human-centric attributes like texture beyond detecting or describing objects/scenes, Cimpoi et al. (2014) curated 5,640 images categorized into 47 texture-related classes (like polka-dotted, wrinkled, and honeycombed). (iii) Dermatology: We consider the DermNet dataset (Rimi et al., 2020), which comprises 19,561 images categorized into 23 types of skin diseases like Acne, Melanoma, Seborrheic Keratoses, etc. (iv) Humanitarian: To aid development of computational methods that can help humanitarian organizations process images posted on social platforms during crises, Alam et al. (2018) and Ofli et al. (2020) curated the CrisisMMD dataset, which comprises 10,461 images categorized into 4 different categories. This dataset comprises images that are the closest to natural/internet images. Domain-specific classification performance. Table 1 shows the image classification performance (macro-averaged F1 scores and accuracy) of the MLLMs under various settings. For reference, we include zero-shot classification performance of CLIP4, which is the visual encoder of the LLaVA1.5 model (see Appendix A.1 for details). First, it is worth noting that the zero-shot performance of the original LLaVA-1.5 model is notably worse than CLIP’s zero-shot performance. This indicates that while domain-specific image attributes are present in the pre-projection image embeddings that are obtained from a frozen vision encoder (i.e., Xv), they are not being used by the MLLM parameters. This can be attributed to the corpus used to train MLLMs like LLaVA, which comprises natural images. Second, clearly, the results show that finetuning indeed improves performance on domain-specific classification, with significant improvements made when fine-tuning the entire MLLM (‘FT-E2E’) as opposed to only the projection layer (‘FT-Proj’). The greater effectiveness of the FT-E2E can be attributed to greater representational space (∼7B) over FT-Proj (∼20M). With these observations, next, we focus on investigating the role of projection in capturing domain-specific image attributes. 3 Role of Projection in Learning Domain-Specific Image Attributes Following up on results in Table 1, we ask: does the projection learn to model the domain-specific image attributes on fine-tuning the MLLM? 4https://huggingface.co/openai/ clip-vit-large-patch14-336 (Wolf et al., 2019) 659 Task Setting Post-proj MLP (LLaVA-1.5; F1) MLLM (LLaVA-1.5; F1) Agriculture Original 0.5701 (———–) 0.1064 (————-) FT-Proj 0.4134 (-27.49%) 0.2221 (+108.74%) FT-E2E 0.5346 (-06.22%) 0.5984 (+462.41%) Textures Original 0.6401 (———–) 0.1882 (————-) FT-Proj 0.4736 (-26.01%) 0.4505 (+139.37%) FT-E2E 0.6212 (-02.95%) 0.7446 (+295.64%) Dermatology Original 0.3105 (———–) 0.0658 (————-) FT-Proj 0.2182 (-29.72%) 0.2932 (+345.59%) FT-E2E 0.2525 (-18.67%) 0.4947 (+651.82%) Humanitarian Original 0.7498 (———–) 0.5169 (————-) FT-Proj 0.6025 (-19.64%) 0.6227 (+020.47%) FT-E2E 0.7238 (-03.46%) 0.7950 (+053.80%) Table 2: Estimating the domain-specific richness of the post-projection image representation using an independent MLP. Compared to the original LLaVA1.5 model, both fine-tuning strategies lead to worsened domain-specific richness of the post-projection image representation (second-last column), while the MLLM performance (last column) improves consistently. This implies that the domain-specific attributes are identified in the LLM, even when the LLM parameters are kept frozen as the projection is updated (i.e., ‘FT-Proj’). Estimating post-projection richness. To answer the above question, we develop a reliable-yetsimple way to estimate domain-specific richness of the projected image representation, i.e., the post-projection representation, denoted by Hv = ϕ(Xv). We do this by training an independent multilayer perceptron (MLP) to perform the image classification task using Hv as the image representation. This classifier helps estimate the extent of domainspecific information (or expressive power (Bengio et al., 2012)) that can be extracted from the input, in this case the post-projection image representation Hv. In other words, a better classification performance by this MLP will denote relative domainspecific richness of the post-projection embeddings used for training, and vice versa. We train one MLP each using the post-projection representation Hv obtained from the following three settings: (i) original LLaVA-1.5, (ii) LLaVA-1.5 with fine-tuned projection, and (ii) LLaVA-1.5 with end-to-end fine-tuning, while keeping the architecture of the MLP the same for consistent comparison. We provide the additional details, including architecture and training hyper-parameters, in Appendix A.2. Comparing domain-specific richness of postprojection representation across different settings. Table 2 shows: (a) the domain-specific richness of post-projection representation Hv (‘Postproj MLP’), and (b) the corresponding MLLM performance (‘MLLM’), across the three settings mentioned above (i.e., ‘Original’, ‘FT-Proj’, and ‘FTE2E’). We report the macro-averaged F1 score on the test set of the respective dataset for both (a) and (b). There are two key trends in Table 2: first, when the ‘Original’ LLaVA-1.5 model’s projection layer is fine-tuned (‘FT-Proj’), the domain-specific richness of the post-projection representation diminishes, while a boost in the MLLM performance is observed. Similarly, second, with end-to-end fine-tuning of LLaVA-1.5 (‘FT-E2E’), the domainspecific richness of the post-projection representation worsens while the MLLM performance boosts notably. These two trends are consistent across all the datasets considered in our study. Domain-specific attributes are identified within the LLM. The two trends observed above reinforce the idea that as the MLLM gains previously-absent domain-specific image classification abilities via fine-tuning, the contribution of the projection layer in identifying relevant image attributes declines. Let us consider the two fine-tuning settings separately. In the first setting, the projection layer undergoes updates to assist the frozen LLM in more accurate label prediction, and yet captures lesser domain-specific image attributes. This indicates that the updates in projection layer merely facilitate better use of frozen LLM parameters for the domain-specific task and do not necessarily involve mapping image attributes to the frozen LLM space. In the second setting as well, when both the LLM parameters and projection layer undergo updates concurrently, the projection layer captures lesser domain-specific attributes, which indicates that the updates in the LLM parameters are predominantly responsible for the acquired domain-specific image classification capabilities. In sum, our results indicate that the modeling of domain-specific image attributes in MLLMs is done by the LLM parameters, whether they are kept frozen or undergo updates. 4 Discussion and Implications Existing literature on interpretability of neural networks has discussed the notion of “multimodal neurons” – neurons that trigger in response to particular concepts spanning disparate modalities (Goh et al., 2021; Schwettmann et al., 2023; Pan et al., 2023). For instance, Goh et al. (2021) demonstrate that in the CLIP model, a single neuron could respond to the photographs, drawings, or images that 660 relate to, let’s say ‘spiderman,’ even though the input image may differ in terms of low-level visual attributes like color, edges, and corners. Similarly, Schwettmann et al. (2023) show that a specific neurons within a frozen text-only Transformer are responsible for detecting visual concepts, let’s say like ‘horses,’ in the input images that are projected to align with the text-only transformer. Our study adds to this literature by showing that even the acquired abilities to detect visual attributes in an MLLM are reliant on the LLM parameters. Notably, when the LLM parameters are frozen, the cross-modal projection layer adapts to facilitate detection of visual attibutes in the LLM without extracting domain-specific attributes. In other words, when the LLM is frozen and the projection is finetuned, the projection parameters are updated to leverage the pre-existing domain-specific knowledge in the LLM parameters. In the future, we aim to interpret the layer- & neuron-level contributions in LLMs towards acquired multimodal reasoning. 5 Limitations and Broader Perspective Limitations and future work: Our current work focuses on a representative cross-modal projection scheme (multilayer perceptron) in a state-ofthe-art MLLM (LLaVA-1.5). Other open-source MLLMs have considered other projection schemes like a trainable linear layer (LLaVa-1; Liu et al. (2023c)), gated cross-attention (Flamingo; Alayrac et al. (2022)), and Q-Former (InstructBLIP; Dai et al. (2023)). Future work could extend the current study to other projection schemes and models. Beyond the adopted strategy of estimating the post-projection richness of image representations using an independent classifier, future work could also probe the MLLM using concept bottleneck methods (Koh et al., 2020), or analyze mutual information between representations (Bachman et al., 2019). Finally, while outside the scope of the current work, a holistic evaluation of the MLLM should focus on domain-specific capabilities as well as the general purpose capabilities. Broader social impact: The authors do not foresee any negative social impacts of this specific work. However, we acknowledge that existing LLMs and MLLMs demonstrate different forms of biases (Wan et al., 2023; Nwatu et al., 2023) that could be inherited in domain-specific variants. In line with the ongoing effort towards mitigating social biases in deep neural networks, future efforts that aim to interface modality-specific reasoning with LLMs, should consider the additional biases that LLMs may introduce on top of the modalityspecific networks. Datasets and code: The datasets used in this study are publicly available and were curated by previous research. We abide by their terms of use. We release the code for our experiments to aid reproducibility and enable future research on this topic: https://github.com/claws-lab/ projection-in-MLLMs 6 Acknowledgements This research/material is based upon work supported in part by NSF grants CNS-2154118, ITE2137724, ITE-2230692, CNS2239879, Defense Advanced Research Projects Agency (DARPA) under Agreement No. HR00112290102 (subcontract No. PO70745), CDC, and funding from Microsoft. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the position or policy of DARPA, DoD, SRI International, CDC, NSF, and no official endorsement should be inferred. Gaurav is partially supported by the JP Morgan AI Research PhD Fellowship and the Snap Research Fellowship. We thank the members of the CLAWS Lab for their helpful feedback. References Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. Gpt-4 technical report. arXiv preprint arXiv:2303.08774. Abien Fred Agarap. 2018. Deep learning using rectified linear units (relu). arXiv preprint arXiv:1803.08375. Landing AI. 2024. Introducing domain-specific large vision models. https://landing.ai/ blog/introducing-domain-specific-large\ -vision-models/. Accessed: 2024-02-14. Firoj Alam, Ferda Ofli, and Muhammad Imran. 2018. Crisismmd: Multimodal twitter datasets from natural disasters. In Proceedings of the 12th International AAAI Conference on Web and Social Media (ICWSM). Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, et al. 2022. Flamingo: a visual language model for few-shot learning. Advances in Neural Information Processing Systems, 35:23716–23736. 661 Rohan Anil, Sebastian Borgeaud, Yonghui Wu, JeanBaptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M Dai, Anja Hauth, et al. 2023. Gemini: a family of highly capable multimodal models. arXiv preprint arXiv:2312.11805. Onder Anilturk, Edwin Lumanauw, James Bird, Juan Olloniego, Dillon Laird, Juan Camilo Fernandez, and Quinn Killough. 2023. Automatic defect classification (adc) solution using data-centric artificial intelligence (ai) for outgoing quality inspections in the semiconductor industry. In Metrology, Inspection, and Process Control XXXVII, volume 12496, pages 830–836. SPIE. Philip Bachman, R Devon Hjelm, and William Buchwalter. 2019. Learning representations by maximizing mutual information across views. Advances in neural information processing systems, 32. Yoshua Bengio, Aaron C Courville, and Pascal Vincent. 2012. Unsupervised feature learning and deep learning: A review and new perspectives. CoRR, abs/1206.5538, 1(2665):2012. M. Cimpoi, S. Maji, I. Kokkinos, S. Mohamed, , and A. Vedaldi. 2014. Describing textures in the wild. In Proceedings of the IEEE Conf. on Computer Vision and Pattern Recognition (CVPR). Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Albert Li, Pascale Fung, and Steven C. H. Hoi. 2023. Instructblip: Towards general-purpose vision-language models with instruction tuning. ArXiv, abs/2305.06500. Konstantinos P Ferentinos. 2018. Deep learning models for plant disease detection and diagnosis. Computers and electronics in agriculture, 145:311–318. Wentao Ge, Shunian Chen, Guiming Chen, Junying Chen, Zhihong Chen, Shuo Yan, Chenghao Zhu, Ziyue Lin, Wenya Xie, Xidong Wang, et al. 2023. Mllm-bench, evaluating multi-modal llms using gpt4v. arXiv preprint arXiv:2311.13951. Gabriel Goh, Nick Cammarata, Chelsea Voss, Shan Carter, Michael Petrov, Ludwig Schubert, Alec Radford, and Chris Olah. 2021. Multimodal neurons in artificial neural networks. Distill, 6(3):e30. Maria Kaselimi, Athanasios Voulodimos, Ioannis Daskalopoulos, Nikolaos Doulamis, and Anastasios Doulamis. 2022. A vision transformer model for convolution-free multilabel classification of satellite imagery in deforestation monitoring. IEEE Transactions on Neural Networks and Learning Systems. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Pang Wei Koh, Thao Nguyen, Yew Siang Tang, Stephen Mussmann, Emma Pierson, Been Kim, and Percy Liang. 2020. Concept bottleneck models. In International conference on machine learning, pages 5338–5348. PMLR. Chunyuan Li, Cliff Wong, Sheng Zhang, Naoto Usuyama, Haotian Liu, Jianwei Yang, Tristan Naumann, Hoifung Poon, and Jianfeng Gao. 2023. Llavamed: Training a large language-and-vision assistant for biomedicine in one day. arXiv preprint arXiv:2306.00890. Bin Lin, Bin Zhu, Yang Ye, Munan Ning, Peng Jin, and Li Yuan. 2023. Video-llava: Learning united visual representation by alignment before projection. arXiv preprint arXiv:2311.10122. Fuxiao Liu, Tianrui Guan, Zongxia Li, Lichang Chen, Yaser Yacoob, Dinesh Manocha, and Tianyi Zhou. 2023a. Hallusionbench: You see what you think? or you think what you see? an image-context reasoning benchmark challenging for gpt-4v (ision), llava-1.5, and other multi-modality models. arXiv preprint arXiv:2310.14566. Haotian Liu, Chunyuan Li, Yuheng Li, and Yong Jae Lee. 2023b. Improved baselines with visual instruction tuning. arXiv preprint arXiv:2310.03744. Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. 2023c. Visual instruction tuning. In Thirtyseventh Conference on Neural Information Processing Systems. Riccardo Miotto, Fei Wang, Shuang Wang, Xiaoqian Jiang, and Joel T Dudley. 2018. Deep learning for healthcare: review, opportunities and challenges. Briefings in bioinformatics, 19(6):1236–1246. Seungwhan Moon, Andrea Madotto, Zhaojiang Lin, Tushar Nagarajan, Matt Smith, Shashank Jain, ChunFu Yeh, Prakash Murugesan, Peyman Heidari, Yue Liu, et al. 2023. Anymal: An efficient and scalable any-modality augmented language model. arXiv preprint arXiv:2309.16058. Joan Nwatu, Oana Ignat, and Rada Mihalcea. 2023. Bridging the digital divide: Performance variation across socio-economic factors in vision-language models. arXiv preprint arXiv:2311.05746. Ferda Ofli, Firoj Alam, and Muhammad Imran. 2020. Analysis of social media data using multimodal deep learning for disaster response. In 17th International Conference on Information Systems for Crisis Response and Management. ISCRAM, ISCRAM. Haowen Pan, Yixin Cao, Xiaozhi Wang, and Xun Yang. 2023. Finding and editing multi-modal neurons in pre-trained transformer. arXiv preprint arXiv:2311.07470. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748–8763. PMLR. 662 Tanzina Afroz Rimi, Nishat Sultana, and Md Ferdouse Ahmed Foysal. 2020. Derm-nn: skin diseases detection using convolutional neural network. In 2020 4th International Conference on Intelligent Computing and Control Systems (ICICCS), pages 1205–1209. IEEE. Sarah Schwettmann, Neil Chowdhury, Samuel Klein, David Bau, and Antonio Torralba. 2023. Multimodal neurons in pretrained text-only transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2862–2867. Davinder Singh, Naman Jain, Pranjali Jain, Pratik Kayal, Sudhakar Kumawat, and Nipun Batra. 2020. Plantdoc: A dataset for visual plant disease detection. In Proceedings of the 7th ACM IKDD CoDS and 25th COMAD, pages 249–253. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288. Yixin Wan, George Pu, Jiao Sun, Aparna Garimella, Kai-Wei Chang, and Nanyun Peng. 2023. "kelly is a warm person, joseph is a role model": Gender biases in llm-generated reference letters. arXiv preprint arXiv:2310.09219. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2019. Huggingface’s transformers: State-ofthe-art natural language processing. arXiv preprint arXiv:1910.03771. Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, and Lijuan Wang. 2023. Mm-vet: Evaluating large multimodal models for integrated capabilities. arXiv preprint arXiv:2308.02490. A Appendix A.1 Zero-Shot Classification Using CLIP We perform zero-shot classification using the CLIP model (clip-vit-large-patch14-336; ), which is the same as the vision encoder used for obtaining pre-projection representation of the input image (i.e., Xv) by the LLaVA-1.5 model. The CLIP model embeds both image and text data into a common space using a contrastive learning objective. We use the pre-trained model to compute the cosine similarity between the image representations and the representation of the dataset-specific label strings obtained from the textual backbone of CLIP. Following this, we consider the most similar label string to be the predicted label for the given image, Classify this image into one of the following categories Figure 2: Architecture of the MLLM considered in this study. ϕ and θ denote tunable parameters of the projection and the large language model, respectively. and compute classification metrics on the test set to quantify CLIP’s zero-shot performance. A.2 Multilayer Perceptron for Estimating Post-Projection Richness We train a multilayer perceptron for estimating the domain-specific richness of the post-projection image representation (i.e., Hv). The MLP takes the tokens corresponding to the image as input and learns to perform the classification task using the examples from the standard train set. Architecturally, the MLP comprises a token-level average pooling step to obtain the image representation, followed by subsequent layers, and eventually the output layer of size equivalent to the number of classes in the dataset. We use ReLU activation (Agarap, 2018) to induce non-linearity. We keep the architecture of this MLP fixed across all the settings to control for the number of learnable parameters and the representational power of the neural network, therefore allowing us to estimate the richness of the input embeddings with respect to the target task. Each model is trained with a batch size of 128. We use Adam optimizer (Kingma and Ba, 2014) with a learning rate initialized at 10−4 and adopt early stopping based on the loss values to avoid overfitting. As a sanity check, we note that an MLP trained using our setup on the postprojection embeddings obtained from the original LLaVA-1.5 model for the HUMANITARIAN task (a natural images dataset), achieves close to the state663 Rescue volunteering Vehicle Damage 4. Humanitarian Infra & utility Damage Affected individuals Cracked Chequered Bubbly 1. Agriculture Apple Scab Leaf Apple Rust Leaf Bell Pepper Leaf Spot Cherry Leaf 2. Textures Braided 3. Dermatology Eczema Contact Dermatitis Vascular Tumors Acne & Rosacea Figure 3: Illustration of the 4 domain-specific image classification datasets used in this study. The datasets are from diverse domains; for brevity we only show some of the representative labels from each of the datasets. Images best viewed with zoom. Task F1 score Acc. Agriculture 0.6991 0.7118 Textures 0.7644 0.7638 Dermatology 0.6046 0.6492 Humanitarian 0.7506 0.8238 Table 3: Classification performance of MLP-based image-only classifiers. A simple MLP performs better on 3 out of 4 tasks than the fine-tuned multimodal LLM; see Table 1 for MLLM results. of-the-art performance reported on this task (Alam et al., 2018). This indicates that our setup enables a reliable estimate of the richness/expressive power of the post-projection representations. A.3 Performance of Image-only Models As reference to the performance of MLLM’s domain-specific capabilities (before and after fine-tuning), we include the performance of simple image-only classification models. We use the 1024-dimensional image embeddings obtained from a pre-trained CLIP model (clip-vit-large-patch14-336) and train a multilayer perceptron with layers of size (1024 (input layer), 2000, 3600, 1024, 600, 256, # of classes (output layer)). We use the same design choices as used for training the MLPs described in Sec. A.2, and evaluate the models on respective test sets of the dataset. The results are presented in Table 3. Although it is not the primary focus of this work, it is interesting to note that for the domain-specific tasks – i.e., all the 3 tasks except HUMANITARIAN the MLP (with ∼20M parameters) performs better than the fine-tuned MLLM (with ∼7B parameters). Both the model use CLIP embeddings as input representation of the image and are fine-tuned with the same amount of labeled data. A.4 Compute Resources All the experiments discussed in this study were conducted using two NVIDIA A100 GPUs (80 GB). Each fine-tuning run of the MLLM took about 1 hour requiring both the GPUs, with additional time for inference; multiple inference runs could be carried over a single GPU. The training and evaluation of the MLPs took less than 20 minutes each. Each run of zero-shot evaluation of CLIP was done on a single GPU in less than 15 minutes. 664
2024_short
60
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 665–672 August 11-16, 2024 ©2024 Association for Computational Linguistics Guidance-Based Prompt Data Augmentation in Specialized Domains for Named Entity Recognition Hyeonseok Kang1, Hyein Seo1, Jeesu Jung1, Sangkeun Jung1∗, Du-Seong Chang2, Riwoo Chung2 1Computer Science and Engineering, Chungnam National University, Republic of Korea 2KT Corporation, Republic of Korea {dnfldjaak11,hyenee97,jisu.jung5,hugmanskj}@gmail.com, {dschang,riwoo.chung}@kt.com, Abstract While the abundance of rich and vast datasets across numerous fields has facilitated the advancement of natural language processing, sectors in need of specialized data types continue to struggle with the challenge of finding quality data. Our study introduces a novel guidance data augmentation technique utilizing abstracted context and sentence structures to produce varied sentences while maintaining context-entity relationships, addressing data scarcity challenges. By fostering a closer relationship between context, sentence structure, and role of entities, our method enhances data augmentation’s effectiveness. Consequently, by showcasing diversification in both entityrelated vocabulary and overall sentence structure, and simultaneously improving the training performance of named entity recognition task. 1 Introduction The field of Natural Language Processing (NLP) has witnessed remarkable success across various domains in recent years, primarily attributed to the availability of rich and high-quality data. However, specialized fields such as science and biology face significant challenges due to the scarcity of such quality data. Particularly, tasks like Named Entity Recognition (NER) face significant difficulties due to domain-specific characteristics where vocabulary roles diverge from general usage, necessitating specialized knowledge for effective data collection. To overcome the data shortage issue, various automated data augmentation (DA) techniques have been developed, including a recent approach that leverages Large Language Models (LLMs) for sentence generation to perform DA (Whitehouse et al., 2023b). Utilizing LLMs for DA involves employing few-shot learning or external modules (Zhuang et al., 2023) to provide additional information. In NER tasks, DA is applied with a focus on entities, maintaining the sentence’s core structure with ∗Corresponding author Augmented Data Augmented Data    Output Selection Naïve LLM Data Augmentation Guidance Generation via Data Abstraction         Data Prompt Data Prompt Guidance LLM Data Augmentation Output Selection   Figure 1: Comparison between data augmentation using LLM and Guidance LLM-based data augmentation. minimal alterations. This approach faces limitations in effectively augmenting cases like specific domain data, where vocabulary interpretation and roles vary with context and sentence composition. In this study, we propose Guidance Data Augmentation (GDA), utilizing information on context and sentence structure abstracted through data abstraction for DA, aiming to generate sentences with varied structures alongside augmenting similar entity types. This approach seeks to achieve more natural and diverse DA compared to single LLM methods by augmenting data with sentences of varied structures that match the seed sentence’s context and corresponding entity types. Our data abstraction approach structures relationships among context, entities, and sentence composition for DA, expanding inference scope by using higher-level conceptual information(Zheng et al., 2024). This approach is vital where entity-related terms diverge significantly from general usage, re665 Given the sentence : The presence of a moving object and easily estimable gravity vector. Augment the 3 sentences by replacing the entity tokens with new ones of the same types. Entity tokens : ['gravity vector’] Entity types : ['OtherScientificTerm’] Seed Generation Prompt 1. the presence of a moving object and easily estimable energy vector 2. the presence of a moving object and easily estimable force vector 3. the presence of a moving object and easily estimable magnetic field vector Seed Generation Output 1. The existence of an active particle and easily estimable energy vectors. 2. An object in motion and easily estimable gravity vectors. 3. A moving entity and easily estimable magnetic field vectors. Guidance Data Augmentation Output Given sentences : Create a common structure and context for the given sentences. Based on the roles performed by tokens in the sentence, select the appropriate "entity types" from the list below and explain its role. Entity types : [‘Method’, ‘Generic’, ‘Metric’, ‘Task’, ‘OtherScientificTerm’] Abstraction Prompt Sentences, NER Tokens Seed Generation Output Generate 3 unique sentences that include the context content by replacing "entity tokens" with new tokens of the same type. The generated sentences should be structured in accordance with the "structure" and "entity_roles". Entity tokens : ['gravity vector’] Entity types : ['OtherScientificTerm’] Guidance Data Augmentation Prompt Context, Structure, Entity roles Abstraction Output Context : the presence of a moving object and easily estimable Structure :<presence> specifies the state of being present, indicating the existence of something. <moving object> describes the type of object present, implying that it is in motion. <easily estimable> indicates that something can be easily calculated or determined. Entity roles : The entity type < OtherScientificTerm > is used to specify the type of object that is present and in motion, or describe the state of being present. Abstraction Output  Figure 2: Illustration of prompt flows for data augmentation in NER tasks. Structuring the use of data augmentation prompts with guidance prompts. Unlike traditional methods that input only the named entity tokens and their types, guidance prompts utilize context and entity role information generated from abstraction prompts to enrich data augmentation. quiring a sophisticated strategy that analyzes vocabulary and structural details of entities. We address this challenge with our data abstraction approach, which assesses the contextual environment, the relationships between entities and their context, and the roles entities play, ensuring the development of varied and contextually appropriate terminology for specialized areas. In response to these considerations, our study introduces a guidance prompt-based DA framework (see Figure 1). This framework is designed to generate sentences of various structures using the same type of entity, from data abstraction to the final response selection. 2 Related Works In NLP, widely used DA techniques comprise rulebased approaches such as synonym replacement, back-translation, and random text element insertion or deletion. These methods are especially prevalent in tasks where textual data may require diversification to better train models (Bayer et al., 2022). Specifically, for tasks such as NER, augmentation strategies often revolve around the substitution of words with similar meanings or roles. In this context, techniques like Easy Data Augmentation (EDA) (Wei and Zou, 2019) and the utilization of WordNet (Miller, 1995) through the Natural Language Toolkit (NLTK) (Bird and Loper, 2004) are frequently applied to generate synonyms-based augmented data. In particular, when using data from specialized domain, DA methods are often used because it is difficult to collect data as it often consists of data containing domain-specific knowledge. For biomedical named entity recognition, augmentation is often performed using context to enhance the understanding of specialized concepts (Bartolini et al., 2023). Recently, the utilization of LLMs has expanded, leading to an increased use of DA techniques based on LLMs. These techniques involve augmenting data for sentence classification by leveraging LLMs (Dai et al., 2023), or enhancing cross-lingual tasks (Whitehouse et al., 2023a) through augmentation. 3 Guidance Data Augmentation Our framework is designed around two key components for effective DA aimed at NER tasks: Data Abstraction and Data Augmentation via Guidance Prompts. Through these two approaches, the proposed method facilitates enhanced model performance on NER tasks. 3.1 Guidance as Data Abstraction Data abstraction involves abstracting and generalizing data to a form where the essential qualities are retained without the unnecessary specifics. The process allows for the alignment of roles and contextual attributes of named entities within sentences with the required entity types for the NER task, 666 SciERC NCBI-disease FIN Train 1,861(200) 5,432(200) 1,018(200) Dev 275 923 150 Test 551 940 305 Table 1: Composition of the dataset for model evaluation. Values within parentheses in the train dataset column represent the number of seed data instances randomly selected from the training data for augmentation purposes. thereby enabling the systematic identification and extraction of pivotal information. Initially, for the purpose of data abstraction, a prompt is constructed to process data by substituting tokens corresponding to named entities with alternative tokens of the same type, as depicted in the seed generation prompt in Figure 2. The data generated from these prompts, along with seed data and a comprehensive list of entity types, are used to formulate abstraction prompts. Such abstracted prompts are fed into a guidance LLM to generate common contextual information for sentences and to produce the necessary structure and entity role information for context composition. 3.2 Data Augmentation via Guidance Prompt Following data abstraction, the second component focuses on the augmentation process itself, using guidance prompts to generate new and varied instances of text. This method utilizes the abstracted data as a basis to inform the generation process, ensuring that the newly created text is both relevant and diverse. The guidance prompts are designed to direct the LLM in producing sentences that not only contain the targeted named entities but also mirror the semantic and structural diversity found in natural language usage. The final output generated using guidance prompts is configured to create sentences that include the semantic information of the context without the need for seed data. The entities’ roles and structure, derived from data abstraction alongside the context, serve as essential information for the generation and assignment of named entities. These elements are incorporated as conditions that must be adhered to within the guidance prompt, ensuring that the generated text aligns with specified contextual meanings and structural requirements. Together, these two components offer an effective approach to enhancing the training data for NER tasks. By first abstracting the data to capture its essential elements and then augmenting it through carefully designed prompts, our framework aims to significantly diversify the datasets available for training NER models. 4 Experiments 4.1 Datasets In selecting datasets for our experiments, we focused on data from specialized domains. These domains are characterized by the specificity (or expertise) required in their entities, requiring specialized knowledge for DA. Specifically, only three datasets were used: SciERC, NCBI-disease, and FIN. Detailed descriptions of the datasets are provided in Appendix A. 4.2 Models Data augmentation LLMs Within the proposed framework, the following LLMs were used OpenAI GPT-3.5 and GPT-4 (Ouyang et al., 2022). Models versioned gpt-3.5-turbo-0125 and gpt-4-0613 were utilized, with the temperature parameter set to the default value of 1. By using the same LLM version for both abstraction and augmentation output generation in guidance data augmentation, this approach prevents the influence of language understanding differences among LLMs. Detailed information on the DA setting employing LLM is delineated in Appendix B. Evaluation model for NER task For the evaluation phase, we used pre-trained language models, specifically BERT (Kenton and Toutanova, 2019). The model version employed is bert-base-uncased, and a comparative study was conducted to analyze the training effects of DA methods across three datasets. For training, the model was fed with a combination of 200 seed data instances and data augmented through DA as the training dataset. The F1 score was utilized as the metric for evaluating NER task performance. The implementation details utilized for training the evaluation model are furnished in Appendix C. 4.3 Results Table 2 presents a comparison of NER model performance, contrasting models fine-tuned with datasets augmented using baseline methods such as EDA, WordNet, and Naïve DA with those augmented through the proposed GDA approach. When employing GPT-3.5 for data augmentation, 667 Approch Method Model Datasets SciERC NCBI-disease FIN Rule-based DA WordNet 0.5018 0.7924 0.7480 EDA 0.5434 0.8062 0.7953 LLM-based DA Naïve DA GPT-3.5 0.5342(-0.0092) 0.8017(-0.0045) 0.8440(+0.0480) GDA(Ours) 0.5435(+0.0001) 0.8139(+0.0077) 0.8464(+0.0511) Naïve DA GPT-4 0.5308(-0.0126) 0.7697(-0.0365) 0.8520(+0.0567) GDA(Ours) 0.5159(-0.0275) 0.7875(-0.0187) 0.8544(+0.0591) Table 2: Evaluation of models trained with augmented data. Utilizing a base of 200 seed data points to generate an additional 600 data points for training, resulting in a total dataset of 800 entries. The table highlights the augmentation technique yielding the highest F1 score for each dataset in bold. Baseline methods employed were WordNet and EDA, with scores in parentheses indicating F1 score comparisons based on EDA. For LLM-based data augmentation, methods with superior performance per model are highlighted with a cyan background. the proposed method generally outperformed the Naïve DA approach. Notably, the NCBI-disease dataset saw a 1.22% improvement in F1 score with the proposed method over Naïve DA, while the FIN dataset experienced a 0.24% increase. Augmentation with GPT-4 yielded a 1.78% and 0.24% performance boost for the NCBI-disease and FIN datasets, respectively. In addition to improving model training performance, enhancing data augmentation requires the generation of diverse sentence structures and the assembly of vocabulary that corresponds with entity types. Figure 3a displays a graph comparing the structure of sentences generated through DA by method, utilizing the Bilingual Evaluation Understudy (BLEU) (Papineni et al., 2002) score to assess the degree of n-gram match between two sentences. Evaluation was conducted using BLEU4, where a lower score signifies reduced structural similarity between sentences, indicating greater diversity in the generated sentences. In addition to BLEU-4, we utilized BERTScore (Zhang et al., 2019) to evaluate the semantic similarity of generated sentences. BERTScore is particularly useful for capturing the nuanced semantic differences between sentences, as it utilizes pre-trained transformer models to provide a more context-aware assessment of similarity. This method is advantageous over traditional n-gram based metrics because it can better account for the semantic context rather than just surface-level text similarity. All three datasets displayed lower scores in both BERTScore and BLEU when using LLMbased augmentation compared to EDA, indicating enhanced diversity. In BLEU scores, the SciERC dataset, in particular, showed a significant 56.8% SciERC NCBI Disease FIN 0.0 0.2 0.4 0.6 0.8 1.0 BLEU-4 0.861 0.865 0.826 0.394 0.691 0.737 0.293 0.447 0.474 EDA Naïve DA GDA (a) BLEU-4 in augmentation using GPT-3.5 SciERC NCBI Disease FIN 0.70 0.75 0.80 0.85 0.90 0.95 1.00 BERT Score 0.986 0.987 0.981 0.924 0.944 0.954 0.921 0.931 0.931 EDA Naïve DA GDA (b) BERTScore in augmentation using GPT-3.5 Figure 3: Comparison of sentence diversity by method in augmentation using GPT-3.5 model. Lower BLEU and BERTScore indicate higher diversity in generated sentences. lower score with the proposed method compared to EDA. When contrasted with Naïve DA, the FIN and NCBI-disease datasets recorded 26.3% and 24.4% lower scores, respectively. For BERTScore, although the differences were smaller, ranging from 6.5% to 5%, the scores were still lower compared to EDA, indicating that the generated sentences 668 maintained semantic diversity. The narrow score differences in semantic similarity comparisons indicate that the generated sentences successfully preserve the contextual meaning of the seed sentences while introducing diversity. Experiments involving structural and semantic similarity using GPT-4, as well as assessments in generated sentences, are documented in Appendix D. Examples of DA outputs are provided in Appendix E. These results underscore the capability of the augmentation method utilizing abstract information to generate sentences with varied structures while preserving context and entity information. 5 Conclusion In this study, we proposed guidance data augmentation designed for NER tasks within specific domain data, enabling the generation of data suited for these tasks. By using data abstraction, our method facilitates structured relationships among context, entities, and sentence composition, allowing for the generation of sentences with diverse structures while ensuring entity consistency. The abstracted sentence information is utilized in constructing guidance prompts, enabling DA with a rich diversity in vocabulary and sentence structures. Future efforts will aim at refining this process for applicability to additional tasks and exploring the use of multiple guidance LLMs to enrich the abstraction information, thereby enhancing the guidance provided. Limitations Our study’s scope was notably confined to the NER task, limiting the versatility of our guidance prompts and data abstraction processes. This narrow focus restricts our exploration of the framework’s potential across various tasks. The limitation of employing a singular model approach for both data abstraction and guidance DA restricts the diversity of linguistic insights within our system. Future efforts will aim to broaden the application of our framework by utilizing different LLMs and expanding the range and granularity of data abstraction, thus addressing these limitations and fully leveraging the capabilities of Multi-LLMs structures for enhanced language understanding and generation tasks. Acknowledgements This work was supported by research fund of Chungnam National University. References Ilaria Bartolini, Vincenzo Moscato, Marco Postiglione, Giancarlo Sperlì, and Andrea Vignali. 2023. Data augmentation via context similarity: An application to biomedical named entity recognition. Information Systems, 119:102291. Markus Bayer, Marc-André Kaufhold, and Christian Reuter. 2022. A survey on data augmentation for text classification. ACM Comput. Surv., 55(7). Steven Bird and Edward Loper. 2004. NLTK: The natural language toolkit. In Proceedings of the ACL Interactive Poster and Demonstration Sessions, pages 214–217, Barcelona, Spain. Association for Computational Linguistics. Haixing Dai, Zhengliang Liu, Wenxiong Liao, Xiaoke Huang, Yihan Cao, Zihao Wu, Lin Zhao, Shaochen Xu, Wei Liu, Ninghao Liu, Sheng Li, Dajiang Zhu, Hongmin Cai, Lichao Sun, Quanzheng Li, Dinggang Shen, Tianming Liu, and Xiang Li. 2023. Auggpt: Leveraging chatgpt for text data augmentation. Rezarta Islamaj Do˘gan, Robert Leaman, and Zhiyong Lu. 2014. Ncbi disease corpus: A resource for disease name recognition and concept normalization. Journal of Biomedical Informatics, 47:1–10. Jacob Devlin Ming-Wei Chang Kenton and Lee Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of naacL-HLT, volume 1, page 2. Yi Luan, Luheng He, Mari Ostendorf, and Hannaneh Hajishirzi. 2018. Multi-task identification of entities, relations, and coreference for scientific knowledge graph construction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3219–3232, Brussels, Belgium. Association for Computational Linguistics. George A. Miller. 1995. Wordnet: a lexical database for english. Commun. ACM, 38(11):39–41. Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. 669 Julio Cesar Salinas Alvarado, Karin Verspoor, and Timothy Baldwin. 2015. Domain adaption of named entity recognition to support credit risk assessment. In Proceedings of the Australasian Language Technology Association Workshop 2015, pages 84–90, Parramatta, Australia. Jason Wei and Kai Zou. 2019. EDA: Easy data augmentation techniques for boosting performance on text classification tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 6382–6388, Hong Kong, China. Association for Computational Linguistics. Chenxi Whitehouse, Monojit Choudhury, and Alham Fikri Aji. 2023a. Llm-powered data augmentation for enhanced cross-lingual performance. Chenxi Whitehouse, Monojit Choudhury, and Alham Fikri Aji. 2023b. Llm-powered data augmentation for enhanced crosslingual performance. arXiv preprint arXiv:2305.14288. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019. Bertscore: Evaluating text generation with bert. arXiv preprint arXiv:1904.09675. Huaixiu Steven Zheng, Swaroop Mishra, Xinyun Chen, Heng-Tze Cheng, Ed H. Chi, Quoc V Le, and Denny Zhou. 2024. Step-back prompting enables reasoning via abstraction in large language models. In The Twelfth International Conference on Learning Representations. Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, and Chao Zhang. 2023. Toolqa: A dataset for llm question answering with external tools. arXiv preprint arXiv:2306.13304. A Datasets Details SciERC (Luan et al., 2018) is a collection of scientific abstract annotated with scientific entities, their relations, and coreference clusters. NCBI-disease (Do˘gan et al., 2014) consists of PubMed abstracts fully annotated at the mention and concept level to serve as a research resource for the biomedical SciERC NCBI Disease FIN 0.0 0.2 0.4 0.6 0.8 1.0 BLEU-4 0.861 0.865 0.826 0.387 0.713 0.727 0.105 0.229 0.124 EDA Naïve DA GDA (a) BLEU-4 in augmentation using GPT-4 SciERC NCBI Disease FIN 0.70 0.75 0.80 0.85 0.90 0.95 1.00 BERT Score 0.986 0.987 0.981 0.921 0.944 0.958 0.831 0.861 0.813 EDA Naïve DA GDA (b) BERTScore in augmentation using GPT-4 Figure 4: Comparison of sentence diversity by method in augmentation using GPT-4 model. Lower BLEU and BERTScore indicate higher diversity in generated sentences. natural language processing community. FIN (Salinas Alvarado et al., 2015) is composed of financial agreements publicly disclosed through U.S. Securities and Exchange Commission (SEC) filings. B Data Augmentation Details Experiments were conducted by categorizing DA methods into rule-based DA and LLM-based DA. For rule-based DA, WordNet and EDA were employed, with the number of synonyms to randomly choose set at 10. LLM-based DA encompassed Naïve DA and GDA, both utilizing identical settings. C Implementation Details For the evaluation of DA, utilizing a learning rate of 2e-5, a batch size of 32, a maximum sequence length of 128, and the Adam optimizer as hyperparameters. The implementation framework utilized is based on Huggingface PyTorch Transformers (Wolf et al., 2020). In terms of computational infrastructure, the experimental procedures were 670 Method Generated sentence Named entity tokens Seed data The interlingual approach to MT has been repeatedly advocated by researchers originally interested in natural language understanding who take machine translation to be one possible application. ‘interlingual approach’, ‘MT’, ‘natural language understanding’, ‘machine translation’ EDA The interlingual approach to MT has been repeatedly advocated by researchers originally interested in natural language understanding who take machine translation to be one possible applications programme. ‘interlingual approach’, ‘MT’, ‘natural language understanding’, ‘machine translation’ The interlingual approach to MT has been repeatedly advocated by researchers originally interested in instinctive language understanding who take machine translation to be one possible application. ‘interlingual approach’, ‘MT’, ‘natural language understanding’, ‘machine translation’ The interlingual approach to MT has been repeatedly advocated by researchers originally interested in natural language understanding who take machine translation to be one potential application. ‘interlingual approach’, ‘MT’, ‘natural language understanding’, ‘machine translation’ Naïve DA The contrastive approach to AI has been repeatedly advocated by researchers originally interested in computer vision who take natural language processing to be one possible application. ‘contrastive approach’, ‘AI’, ‘computer vision’, ‘natural language processing’ The symbolic approach to ML has been repeatedly advocated by researchers originally interested in pattern recognition who take deep learning to be one possible application. ‘symbolic approach’, ‘ML’, ‘pattern recognition’, ‘deep learning’ The neural approach to NLP has been repeatedly advocated by researchers originally interested in information retrieval who take sentiment analysis to be one possible application. ‘neural approach’, ‘NLP’, ‘information retrieval’, ‘sentiment analysis’ GDA(Ours) The symbolic, contrastive, neural, statistical, and rule-based approaches to AI, ML, NLP, CV, and NLU have been repeatedly advocated by researchers originally interested in various tasks and applications. ‘symbolic, contrastive, neural, statistical, and rule-based approaches’, ‘AI, ML, NLP, CV, NLU’ The interlingual approach to MT has been repeatedly advocated by researchers originally interested in natural language understanding which is a possible application. ‘interlingual approach’, ‘MT’, ‘natural language understanding’ Researchers who were originally focused on natural language understanding have repeatedly advocated the interlingual approach to MT which is one of the potential applications. ‘natural language understanding’, ‘interlingual approach’, ‘MT’ Table 3: Examples of data augmentation outcomes from EDA, Naïve DA, and GDA methodologies. 671 exclusively executed on Nvidia A6000 GPUs, complemented by AMD CPU cores. D Additional Sentence Diversity Evaluation Figure 4a exhibits a graph that contrasts the sentence structures generated via various DA methods, employing the BLEU-4 metric for comparison. Data augmentation with GDA using GPT-4 exhibits a marked improvement in data diversity as opposed to GPT-3.5. Figure 4b shows that GPT-4 demonstrates enhanced semantic diversity in data augmentation using GDA. Both implementations of GDA, using GPT-3.5 and GPT-4, showed improvements in semantic diversity within data augmentation compared to other augmentation methods. E Guidance Data Augmentation Case Table 3 presents an example of data created through EDA, Naïve DA, and GDA methods, with the last two utilizing the GPT-3.5 model. Augmentations via EDA and Naïve DA methods reveal replacements limited to either entity-specific words or other words. In contrast, sentences generated through GDA exhibit diversification in both entityrelated vocabulary and overall sentence structure, while maintaining the context of the seed data despite structural modifications. 672
2024_short
61
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 673–680 August 11-16, 2024 ©2024 Association for Computational Linguistics Aligning Large Language Models via Fine-grained Supervision Dehong Xu1∗†, Liang Qiu2∗, Minseok Kim2, Faisal Ladhak2, Jaeyoung Do3 1Department of Statistics, UCLA 2Amazon 3Department of Electrical and Computer Engineering, Seoul National University Correspondence: [email protected], [email protected] Abstract Pre-trained large-scale language models (LLMs) excel at producing coherent articles, yet their outputs may be untruthful, toxic, or fail to align with user expectations. Current approaches focus on using reinforcement learning with human feedback (RLHF) to improve model alignment, which works by transforming coarse human preferences of LLM outputs into a feedback signal that guides the model learning process. However, because this approach operates on sequence-level feedback, it lacks the precision to identify the exact parts of the output affecting user preferences. To address this gap, we propose a method to enhance LLM alignment through fine-grained token-level supervision. Specifically, we ask annotators to minimally edit less preferred responses within the standard reward modeling dataset to make them more favorable, ensuring changes are made only where necessary while retaining most of the original content. The refined dataset is used to train a token-level reward model, which is then used for training our fine-grained Proximal Policy Optimization (PPO) model. Our experiment results demonstrate that this approach can achieve up to an absolute improvement of 5.1% in LLM performance, in terms of win rate against the reference model, compared with the traditional PPO model. 1 Introduction One key objective in advancing large language models (LLMs) is to ensure safe, beneficial human interaction. However, current pre-trained models, mostly trained on web and book texts, often generate biased or toxic text, misaligning with human intentions. To address this issue, numerous studies (Ouyang et al., 2022; Rafailov et al., 2023; Bai et al., 2022b,a; Yuan et al., 2023; Touvron ∗Corresponding authors. † Author performed the work while interned at Amazon. et al., 2023; Ramamurthy et al., 2022) have integrated human feedback into the training process. A significant advancement is reinforcement learning from human feedback (RLHF) (Ouyang et al., 2022), which usually consists of two phases: First, a reward model (RM) is trained from preference data, which comprises various responses alongside their human-assigned preference scores for a given prompt. Then, this reward model is applied to optimize a final model using Proximal Policy Optimization (PPO) (Schulman et al., 2017). Recent works (Wu et al., 2023; Rafailov et al., 2023; Fernandes et al., 2023; Guo et al., 2023; Wang et al., 2024) discovered limitations of the current RM, specifically their misalignment with human values. This misalignment stems from two main issues: (i) the presence of incorrect and ambiguous preference pairs in the humanlabeled datasets; (ii) the limited insight inherent in sequence-level feedback. Specifically, from a data collection standpoint, the task of comparing the overall quality of model outputs is challenging for human annotators when outputs exhibit both desired and undesired behaviors in different parts. Moreover from the RM perspective, the reliance on preference-based data labeling leads to sparse training signals. This sparsity discourages the model’s ability to distinguish finer details between responses and further limits the capacity for reward optimization. To tackle this challenge, we propose the following two-fold contributions as illustrated in Figure 1: • We introduce a new data collection approach that asks annotators to edit responses from existing RM datasets to be more preferable. By comparing the original and edited responses, we obtain detailed token-level insights that are essential for training our fine-tuned reward model. • We propose a new token-level reward modeling approach that provides reward signals at the token level. Different from coarse-grained 673 Figure 1: The comparison between sequence-level reward modeling (Left) and our method of fine-grained reward modeling via minimal editing (Right). Our approach diverges from sequence-level reward modeling in two key aspects: (1) Data Collection, where we ask a human or LLM to edit the model response; and (2) Reward Modeling, which enables our model to assign rewards to individual tokens, as opposed to assessing the entire sequence collectively. sequence-level rewards, our approach offers more granular feedback, pinpointing the specific parts of a response that are effective or need improvement, which hence helps RL optimization. Experiment results using AlpacaFarm (Dubois et al., 2023) environment indicate that our proposed approach improves LLMs’ performance up to 5.1% against the baseline in terms of win rate, given the same amount of data for training. 2 Method In this section, we introduce our approach to finegrained data collection through editing and tokenlevel reward modeling. 2.1 Fine-grained data collection via minimal editing The conventional RLHF pipeline, as outlined in prior works (Ouyang et al., 2022; Dubois et al., 2023), involves three key stages: supervised finetuning (SFT), reward modeling (RM), and proximal policy optimization (PPO). In the RM phase, the standard practice entails collecting a dataset of human evaluations comparing two or more model outputs in response to a series of prompts. The dataset is represented as D = {x(i), y(i) w , y(i) l }N i=1, where x denotes a prompt and (yw, yl) indicates the preferred and less preferred responses, respectively. Utilizing such a dataset, earlier RLHF research focused on developing a reward model Rϕ that determines the more favored model output. This holistic reward model associates each input prompt x and its corresponding output y with one scalar value reflecting the output’s overall quality. However, as shown in the left panel of Figure 1, annotating a pair of model outputs that are substantially different can be a difficult task for humans, especially when each response exhibits a mix of desirable and undesirable behaviors. To address this issue, we introduce a novel data collection technique aimed at obtaining fine-grained supervision, which offers richer, comparative information beyond simple binary choices. Instead of annotating entire responses, our method involves targeted editing by humans or language models, as depicted in the right panel of Figure 1. The goal is to retain the majority of the original response while making improvements to specific areas in need of enhancement. Specifically, we introduce a response editing process in which we ask humans or prompt LLMs to perform targeted modifications. For fine-grained data collection, our method works for both human annotators and language models, following (Ding et al., 2022; Gilardi et al., 2023; Wang et al., 2022; Chiang and Lee, 2023). In practice, we prompt a proprietary LLM, such as Claude-2 (Bai et al., 2022b), to apply edits to 674 the original output. In the experiment, the original preference pairs (yw, yl) were not included and we only utilized yl from the original dataset for minimal editing. This approach maintains the same amount of data as the baseline methods, ensuring a fair comparison. Details of the prompt used for editing can be found in Appendix A.1, and the examples of fine-grained annotation with minimal editing are shown in Appendix A.2. Our method is based on the assumption that the edits inherently improve a response, making changes only when they enhance alignment with human values. The approach enables the refinement of responses by providing clear insights into the specific areas that require improvement. 2.2 Token-level reward modeling In this section, we will first introduce the RL environment and then define our token-level reward modeling scheme. Language generation can be defined as a Markov Decision Process (MDP) ⟨S, A, R, P, γ⟩. S refers to the state space and we define the start state s1 as the input prompts {x}. An action at t-step at is a generated token. The transition function of the environment is denoted as P : S × A →S, st = {x, a1, ..., at−1}. A response y of length T is then y = {a1, .., aT }. In our token-level reward scheme, a reward is assigned to each generated token at by R: S × A →R, where at each time step t there is a learned reward function rt = rϕ(st, at). Therefore, for each response, we have a trajectory τ = {s1, a1, r1, ...st, at, rt, ...sT , aT , rT }. We define the reward of the whole trajectory as the average of rewards assigned to each token: R(τ) = 1 T T X t=1 rt. (1) Following the Bradley-Terry (BT) model (Bradley and Terry, 1952) for preference modeling, we formulate the distribution of human preference for responses as below: p(τ i ≻τ j) = exp(R(τ i)) exp(R(τ i)) + exp(R(τ j)) = σ(R(τ i) −R(τ j)), (2) where τ i and τ j represent two different responses generated from the same prompt. Under the setting of our fine-grained supervision dataset, we assume τ i only makes edits on τ j while maintaining most parts unchanged. We define U0 = {t|ai t = aj t} and U1 = {t|ai t ̸= aj t} to represent the unchanged and changed parts. Regarding the reward model as a binary classifier, we use negative log-likelihood as the loss function. By plugging in Equation 1, we have: L = −E(τ i,τ j)∼D  log σ(R(τ i) −R(τ j))  = −E(τ i,τ j)∼D[log σ(( 1 T i −1 T j ) X t∈U0 rt + 1 T i X t∈U1 ri t −1 T j X t∈U1 rj t)], (3) Ideally, we aim for the unchanged part to maintain a consistent reward. Under this assumption, and if the two responses are of equal length, the first term of the loss function can be removed: L ≈−E(τ i,τ j)∼D[log σ( 1 T i X t∈U1 ri t −1 T j X t∈U1 rj t)] (4) For the edited part, the loss function is thus designed to maximize the reward for the preferred response and minimize it for the less favored one. With a trained token-level reward model, we can integrate it into the Proximal Policy Optimization (PPO) (Schulman et al., 2017) algorithm. In the traditional PPO-RLHF method, each token in the sequence is assigned a reward of the form [−KL1, −KL2, ..., R −KLn], where KLi denotes the Kullback-Leibler divergence (Kullback and Leibler, 1951) for the generated token sequence up to that point, and R represents the sequence-level reward from the reward model. Generalized Advantage Estimation (GAE) (Schulman et al., 2015) is then employed to calculate the advantage at the token level. In contrast, our approach assigns a reward Ri directly from the token-level reward model to each token in the sequence, resulting in a reward vector of [R1, R2, ..., Rn]. This approach enhances the granularity of feedback at each step of the sequence generation process, without changing the underlying GAE and policy update procedure. Consequently, the computational cost remains comparable to the standard RLHF approach. 3 Experiments In this section, we demonstrate our experimental setup and empirical results in detail. 675 Model Win rate (%) Fine-grained Token-level PPO 51.6 ± 1.8 51.6 ± 1.8 51.6 ± 1.8 Fine-grained PPO 51.2 ± 1.8 51.2 ± 1.8 51.2 ± 1.8 Davinci003 (Brown et al., 2020) 50.0 PPO-RLHF (Ouyang et al., 2022) 46.5 ± 1.8 Table 1: Evaluation results by Claude. Davinci003 is the reference model. All results of other models are from (Dubois et al., 2023). 3.1 Experimental setup In constructing our dataset, we follow the framework established by AlpacaFarm (Dubois et al., 2023), which offers a simulation environment that includes data splits for SFT, RM, PPO, and evaluation processes. Building on this, we develop our refined RM dataset using the fine-grained approach, where we employ Claude-2 (Bai et al., 2022b) to perform targeted editing. Edits are generated on the less preferred responses from the original pairwise data, ensuring lightweight yet effective modifications. We evaluate our method by finetuning the pretrained LLaMA-7B (Touvron et al., 2023) model. To assess the quality of our model’s generation compared to baseline models, we employ a winrate measurement, where the model pθ is evaluated against a reference model pref. This method involves pairwise comparisons to estimate how often pθ’s outputs are preferred over pref’s for given instructions. Both our model and the baselines are evaluated against the same reference model, Davinci003, aligning with AlpacaFarm (Dubois et al., 2023). To assess the win rate, we employ Claude as the judge, following the simulated approach in (Zheng et al., 2023). To evaluate the effectiveness of our data annotation approach and token-level reward model, we train two models: (i) Fine-grained PPO that only uses our fine-grained RM dataset with editing while still trained with a sequence-level reward, and (ii) Fine-grained Token-level PPO that incorporates both the fine-grained RM dataset and token-level reward modeling, and hence applies token-level reward to PPO. 3.2 Experiment results Results in human value alignment Table 1 showcases our methods (highlighted) alongside the baseline PPO-RLHF model, both trained on LLaMA-7B (Touvron et al., 2023). Results indicate Model Accuracy (%) RM w/ Fine-grained dataset 85.2 ± 1.8 85.2 ± 1.8 85.2 ± 1.8 RM w/o Fine-grained dataset 58.2 ± 1.8 Table 2: Reward model accuracy. Leveraging the finegrained dataset enhances the reward model’s ability to assign correct rewards to responses. Model Step Tr. hours RLHF (Ouyang et al., 2022) RM 0.2 Fine-grained RLHF RM 0.3 0.3 0.3 RLHF (Ouyang et al., 2022) PPO 4 Fine-grained RLHF PPO 222 Table 3: Training efficiency. Highlighted numbers represent the training hours (Tr. hours) of the fine-grained PPO model trained with token-level rewards. that our novel data collection technique, when integrated with standard PPO training, leads to an absolute performance increase of 4.7% compared to traditional methods (refer to lines 2 vs. 4). This highlights the effectiveness of our fine-grained data collection strategy. Moreover, when trained with the same fine-grained dataset, the token-level reward model (line 1) demonstrates further alignment improvements compared to the PPO alone (line 2), indicating the importance of token-level rewards. Together, these findings affirm that our approach significantly outperforms the traditional PPO-RLHF model. Reward model analysis To explain the observed performance increase, we further investigate the effectiveness of the reward model. We test its accuracy in assigning higher rewards to superior responses within the evaluation set. As shown in Table 2, our fine-grained dataset enables the learned reward model to reach an accuracy of approximately 85.2%, outperforming the model trained with the original dataset. This result demonstrates that our data collection method enhances the capability of our reward model to identify and appropriately reward better responses. Training efficiency Table 3 illustrates the training costs for different models. Note that all the models are trained on 8 NVIDIA A100 GPUs (80G) with the same batch size for both phases. While the training time for the reward modeling phase is comparable between our method and the baseline, our fine-grained reward model significantly boosts the efficiency of RL optimization. 676 It reduces the time required for PPO to converge to its optimal performance by half, due to our more precise and fine-grained reward function. Based on the experiment results, our reward function can provide more accurate and denser training signals, which can help RL algorithms converge faster. This improvement in training efficiency could be important for LLM alignment, especially when the size of the LLM becomes increasingly large. 4 Limitations Although the empirical results show that our approach achieves better performance in model alignment, we struggle to provide rigorous mathematical proof to conclusively demonstrate the effectiveness of this reward allocation strategy, specifically in Equation 4. 5 Conclusion In this paper, we introduce a fine-grained RLHF framework that includes a data collection technique alongside a token-level reward model. This approach enables better value alignment by learning a more accurate reward model, facilitating faster convergence for PPO. Our experimental results show performance improvement based on automatic evaluations compared to the baseline method. Acknowledgments We would like to thank Yi Xu, Puyang Xu and other members of Amazon, as well as Ying Nian Wu and Minglu Zhao and from University of California, Los Angeles for their valuable discussions and constructive feedback. Dehong Xu’s research for this work was financially supported by Amazon during his internship at Amazon. References Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. 2022a. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862. Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. 2022b. Constitutional ai: Harmlessness from ai feedback. arXiv preprint arXiv:2212.08073. Ralph Allan Bradley and Milton E Terry. 1952. Rank analysis of incomplete block designs: I. the method of paired comparisons. Biometrika, 39(3/4):324– 345. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901. Cheng-Han Chiang and Hung-yi Lee. 2023. Can large language models be an alternative to human evaluations? arXiv preprint arXiv:2305.01937. Bosheng Ding, Chengwei Qin, Linlin Liu, Yew Ken Chia, Shafiq Joty, Boyang Li, and Lidong Bing. 2022. Is gpt-3 a good data annotator? arXiv preprint arXiv:2212.10450. Yann Dubois, Xuechen Li, Rohan Taori, Tianyi Zhang, Ishaan Gulrajani, Jimmy Ba, Carlos Guestrin, Percy Liang, and Tatsunori B Hashimoto. 2023. Alpacafarm: A simulation framework for methods that learn from human feedback. arXiv preprint arXiv:2305.14387. Patrick Fernandes, Aman Madaan, Emmy Liu, António Farinhas, Pedro Henrique Martins, Amanda Bertsch, José GC de Souza, Shuyan Zhou, Tongshuang Wu, Graham Neubig, et al. 2023. Bridging the gap: A survey on integrating (human) feedback for natural language generation. arXiv preprint arXiv:2305.00955. Fabrizio Gilardi, Meysam Alizadeh, and Maël Kubli. 2023. Chatgpt outperforms crowd workers for text-annotation tasks. Proceedings of the National Academy of Sciences, 120(30):e2305016120. Geyang Guo, Ranchi Zhao, Tianyi Tang, Wayne Xin Zhao, and Ji-Rong Wen. 2023. Beyond imitation: Leveraging fine-grained quality signals for alignment. arXiv preprint arXiv:2311.04072. Solomon Kullback and Richard A Leibler. 1951. On information and sufficiency. The annals of mathematical statistics, 22(1):79–86. 677 Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744. Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D Manning, and Chelsea Finn. 2023. Direct preference optimization: Your language model is secretly a reward model. arXiv preprint arXiv:2305.18290. Rajkumar Ramamurthy, Prithviraj Ammanabrolu, Kianté Brantley, Jack Hessel, Rafet Sifa, Christian Bauckhage, Hannaneh Hajishirzi, and Yejin Choi. 2022. Is reinforcement learning (not) for natural language processing?: Benchmarks, baselines, and building blocks for natural language policy optimization. arXiv preprint arXiv:2210.01241. John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel. 2015. High-dimensional continuous control using generalized advantage estimation. arXiv preprint arXiv:1506.02438. John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971. Binghai Wang, Rui Zheng, Lu Chen, Yan Liu, Shihan Dou, Caishuang Huang, Wei Shen, Senjie Jin, Enyu Zhou, Chenyu Shi, et al. 2024. Secrets of rlhf in large language models part ii: Reward modeling. arXiv preprint arXiv:2401.06080. Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A Smith, Daniel Khashabi, and Hannaneh Hajishirzi. 2022. Self-instruct: Aligning language models with self-generated instructions. arXiv preprint arXiv:2212.10560. Zeqiu Wu, Yushi Hu, Weijia Shi, Nouha Dziri, Alane Suhr, Prithviraj Ammanabrolu, Noah A Smith, Mari Ostendorf, and Hannaneh Hajishirzi. 2023. Fine-grained human feedback gives better rewards for language model training. arXiv preprint arXiv:2306.01693. Zheng Yuan, Hongyi Yuan, Chuanqi Tan, Wei Wang, Songfang Huang, and Fei Huang. 2023. Rrhf: Rank responses to align language models with human feedback without tears. arXiv preprint arXiv:2304.05302. Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. 2023. Judging llm-as-a-judge with mt-bench and chatbot arena. arXiv preprint arXiv:2306.05685. 678 A Appendix A.1 Prompt for Minimal Editing Figure 2 shows the prompt for Claude-2 to generate the fine-grained dataset by minimal editing. Figure 2: Prompt for Claude A.2 More examples of minimal editing In this section, we provide more examples of fine-grained annotation via minimal editing. As shown in Figure 3, 4 and 5, the annotators kept most of the response unchanged and only revised the red parts to the yellow parts. Figure 3: Example of fine-grained annotation via minimal editing: edit words may cause safety issues. 679 Figure 4: Example of fine-grained annotation via minimal editing: provide more explanation on academic words. Figure 5: Example of fine-grained annotation via minimal editing: change the literary device that follows the instruction better. 680
2024_short
62
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 681–692 August 11-16, 2024 ©2024 Association for Computational Linguistics Annotating FrameNet via Structure-Conditioned Language Generation Xinyue Cui University of Southern California [email protected] Swabha Swayamdipta University of Southern California [email protected] Abstract Despite the remarkable generative capabilities of language models in producing naturalistic language, their effectiveness on explicit manipulation and generation of linguistic structures remain understudied. In this paper, we investigate the task of generating new sentences preserving a given semantic structure, following the FrameNet formalism. We propose a framework to produce novel frame-semantically annotated sentences following an overgenerateand-filter approach. Our results show that conditioning on rich, explicit semantic information tends to produce generations with high human acceptance, under both prompting and finetuning. Our generated frame-semantic structured annotations are effective at training data augmentation for frame-semantic role labeling in low-resource settings; however, we do not see benefits under higher resource settings. Our study concludes that while generating highquality, semantically rich data might be within reach, the downstream utility of such generations remains to be seen, highlighting the outstanding challenges with automating linguistic annotation tasks.1 1 Introduction Large language models (LLMs) have demonstrated unprecedented capabilities in generating naturalistic language. These successes hint at LMs’ implicit capabilities to “understand” language; but are they capable of processing explicit symbolic structures in order to generate language consistent with the structures? Not only would this help us understand the depth of LLMs’ linguistic capabilities but would also serve to efficiently and cheaply expand existing sources of linguistic structure annotation. In this work, we investigate the abilities 1Our code is available at https://github.com/ X-F-Cui/FrameNet-Conditional-Generation. 1. Select FE spans for Replacement Growing up, boys are disciplined for breaking the rules. Time Evaluee Reason discipline.v REWARDS_AND_PUNISHMENTS Target LU Sister LU • reward.v • disciplinary.a • penalty.n • discipline.v • punish.v Growing up, <MASK> are rewarded <MASK>. Time Evaluee Reason reward.v Time Evaluee Reason reward.v Growing up, children are rewarded often. Growing up, boys are rewarded for breaking the rules. reward.v Time Evaluee Reason Growing up, girls are rewarded for good behavior. reward.v Time Evaluee Reason T5 GPT-4 2. Structure-Conditioned Generation 3. Filter Generations w/ Inconsistent FEs 0. Replace Sister LU Figure 1: Our framework to generate frame semantic annotated data. Following Pancholy et al. (2021), we replace a sister LU with the target LU in an annotated sentence (0;§2). We select FEs appropriate for generating a new structure-annotated sentence (1;§3.1), and execute generation via fine-tuning T5 or prompting GPT-4 (2;§3.2). Finally, we filter out sentences that fail to preserve LU-FE relationships under FrameNet (3;§3.3). of LLMs to generate annotations for one such resource of linguistic structure: FrameNet, a lexical database grounded in the theory of frame semantics (Fillmore, 1985; Ruppenhofer et al., 2016). We propose an approach for language generation conditioned on frame-semantic structure such that the generation (i) is consistent with the frame structure, (ii) is acceptable by humans and (ii) is useful for a downstream task, namely frame-semantic role labeling (Gildea and Jurafsky, 2000b). 681 Our framework for generating frame-semantic annotations leverages both the FrameNet hierarchy and LLMs’ generative capabilities to transfer annotations from existing sentences to new examples. Specifically, we introduce frame structureconditioned language generation, focused on specific spans in the sentence such that the resulting sentence follows the given frame structure and is also acceptable to humans. Overall, we follow an overgenerate-and-filter pipeline, to ensure semantic consistency of the resulting annotations. Our framework is outlined in Figure 1. Our intrinsic evaluation, via both human judgment and automated metrics, show that the generated sentences preserve the intended framesemantic structure more faithfully compared to existing approaches (Pancholy et al., 2021). As an extrinsic evaluation, we use our generations to augment the training data for frame-semantic role labeling: identifying and classifying spans in the sentence corresponding to FrameNet frames. Under a low-resource setting, our generation annotations tend to be effective for training data augmentation for frame-semantic role labeling. However, these trends do not translate to a high-resource setting; these findings are consistent with observations from others who have reported challenges in leveraging LLMs for semantic parsing tasks, such as constituency parsing (Bai et al., 2023), dependency parsing (Lin et al., 2023), and abstract meaning representation parsing (Ettinger et al., 2023). Our findings prompt further investigation into the role of LLMs in semantic structured prediction. 2 FrameNet and Extensions Frame semantics theory (Gildea and Jurafsky, 2000a) posits that understanding a word requires access to a semantic frame—a conceptual structure that represents situations, objects, or actions, providing context to the meaning of words or phrases. Frame elements (FEs) are the roles involved in a frame, describing a certain aspect of the frame. A Lexical Unit (LU) is a pairing of tokens (specifically a word lemma and its part of speech) and their evoked frames. As illustrated in Figure 1, the token “disciplined” evokes the LU discipline.v, which is associated with the frame REWARDS_AND_PUNISHMENT, with FEs including Time, Evaluee, and Reason. Grounded in frame semantics theory, FrameNet (Ruppenhofer et al., 2006) is a lexical database, featuring sentences that are annotated by linguistic experts according to frame semantics. Within FrameNet, the majority of sentences are annotated with a focus on a specific LU within each sentence, which is referred to as lexicographic data; Figure 1 shows such an instance. A subset of FrameNet’s annotations consider all LUs within a sentence; these are called full-text data; Figure 1 does not consider other LUs such as grow.v or break.v. FrameNet has defined 1,224 frames, covering 13,640 lexical units.The FrameNet hierarchy also links FEs using 10,725 relations. However, of the 13,640 identified LUs, only 62% have associated annotations. Our approach seeks to automatically generate annotated examples for the remaining 38% of the LUs, towards increasing coverage in FrameNet without laborious manual annotation. Sister LU Replacement Pancholy et al. (2021) propose a solution to FrameNet’s coverage problem using an intuitive approach: since LUs within the same frame tend to share similar annotation structures, they substitute one LU (the target LU) with another (a sister LU) to yield a new sentence. This replacement approach only considers LUs with the same POS tag to preserve the semantics of the original sentence; for instance, in Figure 1, we replace the sister LU discipline.v with the target LU reward.v. However, due to the nuanced semantic differences between the two LUs, the specific content of the FE spans in the original sentence may no longer be consistent with the target LU in the new sentence. Indeed Pancholy et al. (2021) report such semantic mismatches as their primary weakness. To overcome this very weakness, our work proposes leveraging LLMs to generate FE spans that better align with the target LU, as described subsequently. For the rest of this work, we focus solely on verb LUs, where initial experiments showed that the inconsistency problem was the most severe. Details of FrameNet’s LU distribution by POS tags, along with examples of non-verb LU replacements can be found in Appendix A. 3 Generating FrameNet Annotations via Frame-Semantic Conditioning We propose an approach to automate the expansion of FrameNet annotations by generating new annotations with language models. Given sister LUreplaced annotations (§2; Pancholy et al., 2021), we select FE spans which are likely to be semantically inconsistent (§3.1), generate new sentences 682 with replacement spans by conditioning on framesemantic structure information (§3.2) and finally filter inconsistent generations (§3.3). 3.1 Selecting Candidate FEs for Generation We identify the FEs which often result in semantic inconsistencies, in order to generate replacements of the spans corresponding to such FEs. Our selection takes into account the FE type, its ancestry under FrameNet, and the span’s syntactic phrase type. Preliminary analyses, detailed in Appendix B, help us narrow the criteria as below: 1. FE Type Criterion: The FE span to be generated must belong to a core FE type, i.e., the essential FEs that are necessary to fully understand the meaning of a frame. 2. Ancestor Criterion: The FE should not possess Agent or Self-mover ancestors. 3. Phrase Type Criterion: The FE’s phrase type should be a prepositional phrase. Qualitative analyses revealed that it suffices to meet criterion (1) while satisfying either (2) or (3). For instance, in Figure 1, under REWARDS_AND _PUNISHMENTS, only the FEs Evaluee and Reason are core (and satisfy (2)) while Time is not; thus we only select the last two FE spans for generation. 3.2 Generating Semantically Consistent Spans We generate semantically consistent FE spans for selected candidate FEs via two approaches: finetuning a T5-large model (Raffel et al., 2019) and prompting GPT-4 Turbo, following Mishra et al. (2021). In each case, we condition the generation on different degrees of semantic information: No Conditioning We generate FE spans without conditioning on any semantic labels. FE-Conditioning The generation is conditioned on the type of FE span to be generated. Frame+FE-Conditioning The generation is conditioned on both the frame and the FE type. The above process produces new sentences with generated FE spans designed to align better with the target LU, thereby preserving the original frame-semantic structure. However, despite the vastly improved generative capabilities of language models, they are still prone to making errors, thus not guaranteeing the semantic consistency we aim for. Hence, we adopt an overgenerate-and-filter approach (Langkilde and Knight, 1998; Walker et al., 2001): generate multiple candidates and aggressively filter out those that are semantically inconsistent. Details on fine-tuning T5 and prompting GPT-4 are provided in Appendix C. 3.3 Filtering Inconsistent Generations We design a filter to ensure that the generated sentences preserve the same semantics as the expert annotations from the original sentence. This requires the new FE spans to maintain the same FE type as the original. We propose a new metric FE fidelity, which checks how often the generated spans have the same FE type as the original. To determine the FE type of the generated spans, we train an FE type classifier on FrameNet by finetuning SpanBERT, the state-of-the-art model for span classification (Joshi et al., 2019).2 We use a strict filtering criterion: remove all generations where the FE classifier detects even a single FE type inconsistency, i.e. only retain instances with perfect FE fidelity. 3.4 Intrinsic Evaluation of Generations We evaluate our generated frame-semantic annotations against those from Pancholy et al. (2021), before and after filtering (§3.3). We consider three metrics: perplexity under Llama-2-7B (Touvron et al., 2023) for overall fluency, FE fidelity, and human acceptance. We randomly sampled 1000 LUs without annotations under FrameNet and used our generation framework to generate one instance each for these LUs. For human acceptability, we perform fine-grained manual evaluation on 200 examples sampled from the generated instances.3 We deem an example acceptable if the FE spans semantically align with the target LU and preserve the FE role definitions under FrameNet. We provide a qualitative analysis of generated examples in Appendix E. Results in Table 1 shows that our filtering approach—designed for perfect FE fidelity— improves performance under the other two metrics. Compared to rule-based generations from Pancholy et al. (2021), our filtered generations fare better under both perplexity and human acceptability, indicating improved fluency and semantic consistency. Most importantly, models incorporating semantic information, i.e., FE-conditioned and Frame+FE2Our SpanBERT FE classifier attains 95% accuracy on the standard FrameNet 1.7 splits; see Appendix D for details. 3Human evaluation is mainly conducted by the first author of this work. These annotations were validated by two independent volunteers unfamiliar with generated data evaluating the same examples from GPT-4 | Frame+FE, where the ratings differ by only 1% from our primary ratings. This suggests a consistent rating quality across different observers. 683 Before Filtering (|Dtest|=1K) After Filtering (FE Fid. = 1.0) FE Fid. ppl. Human (|Dtest|=200) ppl.(|Dtest|) Human (|Dtest|) Human (FN 1.7) 0.979 78.1 1.000 97.0 (975) 1.000 (199) Pancholy et al. 0.953 127.8 0.611 146.0 (947) 0.686 (189) T5 0.784 139.3 0.594 117.5 (789) 0.713 (156) T5 | FE 0.862 127.6 0.711 112.7 (850) 0.777 (168) T5 | Frame + FE 0.882 136.8 0.644 124.4 (873) 0.704 (172) GPT-4 0.704 114.9 0.528 114.2 (724) 0.723 (132) GPT-4 | FE 0.841 106.3 0.700 103.4 (838) 0.826 (164) GPT-4 | Frame + FE 0.853 117.2 0.733 111.8 (845) 0.821 (165) Table 1: Perplexity, FE fidelity and human acceptability of T5 and GPT-4 generations conditioned on different degrees of semantic information. Number of instances after filtering are in parantheses. Best results are in boldface. conditioned models, achieve higher human acceptance and generally lower perplexity compared to their no-conditioning counterparts, signifying that semantic cues improve both fluency and semantic consistency. Even before filtering, FE fidelity increases with the amount of semantic conditioning, indicating the benefits of structure-based conditioning. We also provide reference-based evaluation in Appendix F. 4 Augmenting Data for Frame-SRL Beyond improving FrameNet coverage, we investigate the extrinsic utility of our generations as training data to improve the frame-SRL task, which involves identifying and classifying FE spans in sentences for a given frame-LU pair. Here, we consider a modified Frame-SRL task, which considers gold-standard frames and LUs, following Pancholy et al. (2021). This remains a challenging task even for powerful models like GPT-4, which achieves a test F1 score of only 0.228 in contrast to Lin et al. (2021)’s state-of-the-art F1 score of 0.722. For experimental ease, we fine-tune a SpanBERT model on FrameNet’s full-text data as our parser4 and avoid using existing parsers due to their reliance on weaker, non-Transformer architectures (Swayamdipta et al., 2017), complex problem formulation (Lin et al., 2021), or need for extra frame and FE information (Zheng et al., 2022). As a pilot study, we prioritize augmenting the training data with verb LUs with F1 scores below 0.75 on average. This serves as an oracle augmenter targeting the lowest-performing LUs in the test set. For the generation of augmented data, we use our top-performing models within T5 and GPT-4 models according to human evaluation: T5 | FE and GPT-4 | Frame+FE models. Of 2,295 4This parser obtains an F1 score of 0.677, see Table 2. LUs present in the test data, 370 were selected for augmentation, resulting in 5,631 generated instances. After filtering, we retain 4,596 instances from GPT-4 | Frame+FE and 4,638 instances from T5 | FE. Additional experiments using different augmentation strategies on subsets of FrameNet are in Appendix G. All LUs F1 Aug. LUs F1 Unaugmented 0.677 ± 0.004 0.681 ± 0.012 Aug. w/ T5 | FE 0.683 ± 0.000 0.682 ± 0.006 Aug. w/ GPT-4 | Frame+FE 0.684 ± 0.002 0.677 ± 0.010 Table 2: F1 score of all LUs and augmented LUs under unaugmented setting, augmented settings with generations from T5 | FE and GPT-4 | Frame+FE, averaged across 3 random seeds. Table 2 shows the Frame-SRL performance, with and without data augmentation on all LUs and on only the augmented LUs. Despite the successes with human acceptance and perplexity, our generations exhibit marginal improvement on overall performance, and even hurt the performance on the augmented LUs. We hypothesize that this stagnation in performance stems from two factors: (1) the phenomenon of diminishing returns experienced by our Frame-SRL parser, and (2) the limited diversity in augmented data. Apart from the newly generated FE spans, the generated sentences closely resemble the original, thereby unable to introduce novel signals for frame-SRL; see subsection G.3 and Appendix H for more experiments on generation diversity. We speculate that Pancholy et al. (2021)’s success with data augmentation despite using only sister LU replacement might be attributed to use of a weaker parser (Swayamdipta et al., 2017), which left more room for improvement. 684 4.1 Augmenting Under Low-Resource Setting To further investigate our failure to improve frameSRL performance via data augmentation, we simulate a low-resource scenario and conduct experiments using increasing proportions of FrameNet training data under three settings: (1) training our SRL parser with full-text data, (2) training our SRL parser with both full-text and lexicographic data (which contains 10x more instances), and (3) training an existing frame semantic parser (Lin et al., 2021)5 with full-text data, to control for the use of our specific parser. 0.050.10 0.25 0.50 0.75 1.00 train data percentage 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 F1 score fulltext data + lexicographic data fulltext data Lin et al. fulltext data Lin et al. on SRL 25% fulltext data + 6.25% augmentation Figure 2: Learning curves for our frame-SRL model and Lin et al. (2021)’s end-to-end parser show diminishing returns on adding more human-annotated training data. The triangle marker denotes the performance of Lin et al. (2021)’s parser on SRL with gold frame and LU. Figure 2 shows that parsers across all three settings exhibit diminishing returns, especially on the second setting, which utilizes the largest training set. This indicates that there seems to be little room for improvement in frame-SRL, even with human annotated data. Following our learning curves, we further evaluate the utility of our generations without the influence of diminishing returns, by performing data augmentation in a low-resource setting. Specifically, we augment 25% of the full-text training data with an additional 6.25% of data generated using our method. As demonstrated in Figure 2, the performance of the model in this scenario not only exceeds that of the 25% dataset without augmentation but the results of the 25% dataset augmented with 6.25% of human-annotated data. This show5Lin et al. (2021) break frame-SRL into three subsequent sub-tasks: target identification, frame identification, and SRL, contributing to worse overall performance. cases the high utility of our generations for targeted data augmentation in a low-resource setting. 5 Related Work Data Augmentation for FrameNet While FrameNet annotations are expert annotated for the highest quality, this also limits their scalability. In an effort to improve FrameNet’s LU coverage, Pavlick et al. (2015) proposes increasing the LU vocabulary via automatic paraphrasing and crowdworker verification, without expanding the lexicographic annotations. Others address this limitation by generating annotations through lexical substitution (Anwar et al., 2023) and predicate replacement (Pancholy et al., 2021); neither leverages the generative capabilities of LLMs, however. Controlled Generation Other works have explored using semantic controls for generation tasks. Ou et al. (2021) propose FrameNet-structured constraints to generate sentences to help with a story completion task. Ross et al. (2021) studied controlled generation given target semantic attributes defined within PropBank, somewhat coarse-grained compared to FrameNet. Similarly, Ye et al. (2024) employ the rewriting capabilities of LLMs to generate semantically coherent sentences that preserve named entities for the Named Entity Recognition task. Guo et al. (2022) introduced GENIUS, a novel sketch-based language model pre-training approach aimed at reconstructing text based on keywords or sketches, though not semantic structures; this limits its effectiveness in capturing the full context. 6 Conclusion Our study provides insights into the successes and failures of LLMs in manipulating FrameNet’s linguistic structures. When conditioned on semantic information, LLMs show improved capability in producing semantically annotated sentences, indicating the value of linguistic structure in language generation. Under a low-resource setting, our generated annotations prove effective for augmenting training data for frame-SRL. Nevertheless, this success does not translate to a high-resource setting, echoing challenges reported in applying LLMs to other flavors of semantics (Bai et al., 2023; Lin et al., 2023; Ettinger et al., 2023). These outcomes underline the need for further exploration into how LLMs can be more effectively employed in automating linguistic structure annotation. 685 Acknowledgements We thank the anonymous reviewers and area chairs for valuable feedback. This work benefited from several fruitful discussions with Nathan Schneider, Miriam R. L. Petruck, Jena Hwang, and many folks from the USC-NLP group. We thank Ziyu He for providing additional human evaluation on generated annotations. This research was partly supported by the Allen Institute for AI and an Intel Rising Stars Award. Limitations While our work contributes valuable insights into LLMs’ capabilities towards semantic structureconditioned generation, we acknowledge certain limitations. First, our research is exclusively centered on the English language. This focus restricts the generalizability of our findings to other languages, which likely present unique linguistic structures with associated semantic complexity. The exploration of LLMs’ capabilities in linguistic structures manipulation and generation in languages other than English remains an open direction for future research. Moreover, we do not consider the full complexity of the frame semantic role labeling task, which also considers target and frame identification. Even for the argument identification task, we use an oracle augmentation strategy. Despite this relaxed assumption, the generations had limited improvement in performance, except in low-resource settings, where targeted data augmentation proved more effective. This indicates potential for improvement in scenarios with limited annotated data but highlights the need for further research in diverse and complex settings. Ethics Statement We recognize the inherent ethical considerations associated with utilizing and generating data via language models. A primary concern is the potential presence of sensitive, private, or offensive content within the FrameNet corpus and our generated data. In light of these concerns, we carefully scrutinize the generated sentences during the manual analysis of the 200 generated examples and do not find such harmful content. Moving forward, we are committed to ensuring ethical handling of data used in our research and promoting responsible use of dataset and language models. References Saba Anwar, Artem Shelmanov, Nikolay Arefyev, Alexander Panchenko, and Christian Biemann. 2023. Text augmentation for semantic frame induction and parsing. Language Resources and Evaluation, pages 1–46. Xuefeng Bai, Jialong Wu, Jialong Wu, Yulong Chen, Zhongqing Wang, and Yue Zhang. 2023. Constituency parsing using llms. ArXiv, abs/2310.19462. Allyson Ettinger, Jena D. Hwang, Valentina Pyatkin, Chandra Bhagavatula, and Yejin Choi. 2023. "you are an expert linguistic annotator": Limits of llms as analyzers of abstract meaning representation. In Conference on Empirical Methods in Natural Language Processing. Charles J. Fillmore. 1985. Frames and the semantics of understanding. Quaderni di Semantica, 6(2):222– 254. Daniel Gildea and Dan Jurafsky. 2000a. Automatic labeling of semantic roles. In Annual Meeting of the Association for Computational Linguistics. Daniel Gildea and Daniel Jurafsky. 2000b. Automatic labeling of semantic roles. In Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics, pages 512–520, Hong Kong. Association for Computational Linguistics. Biyang Guo, Yeyun Gong, Yelong Shen, Songqiao Han, Hailiang Huang, Nan Duan, and Weizhu Chen. 2022. Genius: Sketch-based language model pre-training via extreme and selective masking for text generation and augmentation. ArXiv, abs/2211.10330. Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. 2019. Spanbert: Improving pre-training by representing and predicting spans. Transactions of the Association for Computational Linguistics, 8:64–77. Meghana Kshirsagar, Sam Thomson, Nathan Schneider, Jaime G. Carbonell, Noah A. Smith, and Chris Dyer. 2015. Frame-semantic role labeling with heterogeneous annotations. In Annual Meeting of the Association for Computational Linguistics. Irene Langkilde and Kevin Knight. 1998. Generation that exploits corpus-based statistical knowledge. In 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, Volume 1, pages 704–710, Montreal, Quebec, Canada. Association for Computational Linguistics. Boda Lin, Xinyi Zhou, Binghao Tang, Xiaocheng Gong, and Si Li. 2023. Chatgpt is a potential zero-shot dependency parser. ArXiv, abs/2310.16654. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Annual Meeting of the Association for Computational Linguistics. 686 Zhichao Lin, Yueheng Sun, and Meishan Zhang. 2021. A graph-based neural model for end-to-end frame semantic parsing. In Conference on Empirical Methods in Natural Language Processing. Ilya Loshchilov and Frank Hutter. 2017. Decoupled weight decay regularization. In International Conference on Learning Representations. Swaroop Mishra, Daniel Khashabi, Chitta Baral, and Hannaneh Hajishirzi. 2021. Cross-task generalization via natural language crowdsourcing instructions. In Annual Meeting of the Association for Computational Linguistics. Jiefu Ou, Nathaniel Weir, Anton Belyy, Felix Yu, and Benjamin Van Durme. 2021. InFillmore: Frameguided language generation with bidirectional context. In Proceedings of *SEM 2021: The Tenth Joint Conference on Lexical and Computational Semantics, pages 129–142, Online. Association for Computational Linguistics. Ayush Pancholy, Miriam R. L. Petruck, and Swabha Swayamdipta. 2021. Sister help: Data augmentation for frame-semantic role labeling. ArXiv, abs/2109.07725. Ellie Pavlick, Travis Wolfe, Pushpendre Rastogi, Chris Callison-Burch, Mark Dredze, and Benjamin Van Durme. 2015. FrameNet+: Fast paraphrastic tripling of FrameNet. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 408–413, Beijing, China. Association for Computational Linguistics. Hao Peng, Sam Thomson, Swabha Swayamdipta, and Noah A. Smith. 2018. Learning joint semantic parsers from disjoint data. ArXiv, abs/1804.05990. Colin Raffel, Noam M. Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. ArXiv, abs/1910.10683. Alexis Ross, Tongshuang Sherry Wu, Hao Peng, Matthew E. Peters, and Matt Gardner. 2021. Tailor: Generating and perturbing text with semantic controls. In Annual Meeting of the Association for Computational Linguistics. Josef Ruppenhofer, Michael Ellsworth, Miriam R. L. Petruck, Christopher R. Johnson, Collin F. Baker, and Jan Scheffczyk. 2016. FrameNet II: Extended Theory and Practice. ICSI: Berkeley. Josef Ruppenhofer, Michael Ellsworth, Miriam R. L. Petruck, Christopher R. Johnson, and Jan Scheffczyk. 2006. Framenet ii: Extended theory and practice. Swabha Swayamdipta, Sam Thomson, Chris Dyer, and Noah A. Smith. 2017. Frame-semantic parsing with softmax-margin segmental rnns and a syntactic scaffold. ArXiv, abs/1706.09528. Hugo Touvron, Louis Martin, Kevin R. Stone, Peter Albert, Amjad Almahairi, et al. 2023. Llama 2: Open foundation and fine-tuned chat models. ArXiv, abs/2307.09288. Marilyn A. Walker, Owen Rambow, and Monica Rogati. 2001. SPoT: A trainable sentence planner. In Second Meeting of the North American Chapter of the Association for Computational Linguistics. Junjie Ye, Nuo Xu, Yikun Wang, Jie Zhou, Qi Zhang, Tao Gui, and Xuanjing Huang. 2024. Llm-da: Data augmentation via large language models for few-shot named entity recognition. ArXiv, abs/2402.14568. Weizhe Yuan, Graham Neubig, and Pengfei Liu. 2021. Bartscore: Evaluating generated text as text generation. ArXiv, abs/2106.11520. Ce Zheng, Yiming Wang, and Baobao Chang. 2022. Query your model with definitions in framenet: An effective method for frame semantic role labeling. ArXiv, abs/2212.02036. 687 A FrameNet Statistics A.1 Distribution of Lexical Units Table 3 illustrates a breakdown of FrameNet corpus categorized by the POS tags of the LUs. Specifically, we report the number of instances and the average count of candidate FEs per sentence, corresponding to LUs of each POS category. The two predominant categories are verb (v) LUs and noun (n) LUs, with verb LUs exhibiting a higher average of candidate FE spans per sentence compared to noun LUs. LU POS # Inst. # FEs # C. FEs # Cd. FEs v 82710 2.406 1.945 1.354 n 77869 1.171 0.675 0.564 a 33904 1.467 1.211 1.025 prep 2996 2.212 2.013 1.946 adv 2070 1.851 1.717 1.655 scon 758 1.906 1.883 1.883 num 350 1.086 0.929 0.549 art 267 1.547 1.543 1.408 idio 105 2.162 1.933 1.486 c 69 1.957 0.841 0.826 Table 3: Number of instances and average number of all, core, and candidate FE spans per sentence, categorized by POS tags of LUs in FrameNet. C. FEs represents Core FEs and Cd. FEs represents Candidate FEs. A.2 Replacement of non-verb LUs Table 4 shows several examples of non-verb LU replacement, where the resulting sentences mostly preserve semantic consistency. Given the extensive number of annotated verb LUs available for LU replacement and candidate FEs per sentence for masking and subsequent structure-conditioned generation, our generation methodology is primarily applied to verb LUs. A.3 Full-Text and Lexicographic Data Table 5 shows the distribution of the training, development, and test datasets following standard splits on FrameNet 1.7 from prior work (Kshirsagar et al., 2015; Swayamdipta et al., 2017; Peng et al., 2018; Zheng et al., 2022). Both the development and test datasets consist exclusively of full-text data, whereas any lexicographic data, when utilized, is solely included within the training dataset. Since our generation approach is designed to produce lexicographic instances annotated for a single LU, when augmenting fulltext data (§4), we break down each fulltext example by annotated LUs and process them individually as multiple lexicographic examples. Frame LU Sentence Leadership king.n (rector.n) No prior Scottish king (rector) claimed his minority ended at this age. Sounds tinkle.n (yap.n) Racing down the corridor, he heard the tinkle (yap) of metal hitting the floor. Body_part claw.n (back.n) A cat scratched its claws (back) against the tree. Disgraceful _situation shameful.a (disgraceful.a) This party announced his shameful (disgraceful) embarrassments to the whole world . Frequency always.adv (rarely.adv) The temple is always (rarely) crowded with worshippers . Concessive despite.prep (in spite of.prep) Despite (In spite of) his ambition , Gass ’ success was short-lived . Conditional _Occurrence supposing.scon (what if.scon) So , supposing (what if) we did get a search warrant , what would we find ? Table 4: Example sentences of non-verb LUs where semantic consistency is preserved after sister LU replacement. The original LU is in teal and the replacement LU is in orange and parentheses. Dataset Split Size Train (full-text + lex.) 192,364 Train (full-text) 19,437 Development 2,272 Test 6,462 Table 5: Training set size with and without lexicographic data, development set size, and test set size in FrameNet 1.7. B Details on Candidate FEs Selection There are three criteria for determining a candidate FE span, i.e., FE Type Criterion, Ancestor Criterion, and Phrase Type Criterion. In preliminary experiments, we have conducted manual analysis on the compatibility of FE spans with replacement LUs on 50 example generations. As demonstrated through the sentence in Figure 1, the FE Type criterion can effectively eliminate non-core FE that do not need to be masked, i.e., "Growing up" of FE type Time. Also, the Phrase Type Criterion can identify the candidate FE "for breaking the rules", which is a prepositional phrase. Moreover, we find that FEs of Agent or Self-mover type describes a human subject, which is typically independent of 688 Sentence After Replacement FE Type She was bending over a basket of freshly picked flowers , organizing them to her satisfaction . Agent (Agent) The woman got to her feet , marched indoors , was again hurled out . Self_mover (Self_mover) While some presumed her husband was dead , Sunnie refused to give up hope . Cognizer (Agent) Table 6: Example sentences after LU replacement with FEs of type Agent, Self_mover, or their descendants, which are compatible with the new replacement LU. The ancestors of FE types are reported in parentheses. The FEs are shown in teal and the replacement LUs are shown in orange. the LU evoked in the sentence. Since FE types within the same hierarchy tree share similar properties, we exclude FEs of Agent and Self-mover types, as well as any FEs having ancestors of these types, from our masking process, as illustrated in Table 6. C Details on Span Generation C.1 T5-large Fine-Tuning During the fine-tuning process of T5-large, we incorporate semantic information using special tokens, which is demonstrated in Table 7 through the example sentence in Figure 1. T5 models are finetuned on full-text data and lexicographic data in FrameNet for 5 epochs with a learning rate of 1e-4 and an AdamW (Loshchilov and Hutter, 2017) optimizer of weight decay 0.01. The training process takes around 3 hours on 4 NVIDIA RTX A6000 GPUs. C.2 GPT-4 Few-shot Prompting When instructing GPT-4 models to generate FE spans, we provide the task title, definition, specific instructions, and examples of input/output pairs along with explanations for each output, as demonstrated in Table 8. D FE Classifier Training Details Our classifier operates on the principle of classifying one FE span at a time. In cases where multiple FE spans are present within a single sentence, we split these into distinct instances for individual processing. For each instance, we introduce special tokens—<LU_START> and <LU_END>—around the Model Input No Conditioning Growing up, <mask> are rewarded <mask>. FE-Conditioning Growing up, <FE: Evaluee> <mask> </FE: Evaluee> are rewarded <FE: Reason> <mask> </FE: Reason>. Frame-FE-Conditioning Growing up, <Frame: Rewards_and_Punishments + FE: Evaluee> <mask> </Frame: Rewards_and_Punishments + FE: Evaluee> are rewarded <Frame: Rewards_and_Punishments + FE: Reason> <mask> </Frame: Rewards_and_Punishments + FE: Reason>. Table 7: Template of finetuning T5 models on an example sentence. LU, and <FE_START> and <FE_END> around the FE span. Additionally, the name of the evoked frame is appended to the end of the sentence. To train our classifier to effectively discern valid FE spans from invalid ones, we augment training data with instances where randomly selected word spans are labeled as “Not an FE”, constituting approximately 10% of the training data. The FE classifier is finetuned on full-text data and lexicographic data for 20 epochs with a learning rate of 2e-5 and an AdamW optimizer with weight decay 0.01. The training process takes around 4 hours on 4 NVIDIA RTX A6000 GPUs. E Human evaluation of generated examples We perform fine-grained manual analysis on 200 generated sentences to evaluate the quality of model generations based on two criteria: (1) sentence-level semantic coherence and (2) preservation of original FE types. We present 10 example sentences from the overall 200 in Table 9. F Intrinsic Evaluation on FrameNet Test Data To evaluate the quality of generated sentences on reference-based metrics such as ROUGE (Lin, 2004) and BARTScore (Yuan et al., 2021), we perform §3.1 and §3.2 on the test split of FrameNet 1.7 with verb LUs. As observed in Table 10, the T5 | FE model surpasses others in ROUGE scores, signifying superior word-level precision, while GPT-4 achieves the highest BARTScore, indicat689 Title Sentence completion using frame elements Definition You need to complete the given sentence containing one or multiple blanks (<mask>). Your answer must be of the frame element type specified in FE Type. Example Input Frame: Rewards_and_Punishments. Lexical Unit: discipline.v. Sentence: Growing up, <mask> are disciplined <mask>. FE Type: Evaluee, Reason. Example Output boys, for breaking the rules Reason The frame "Rewards_and_Punishments" is associated with frame elements "Evaluee" and "Reason". The answer "boys" fills up the first blank because it is a frame element (FE) of type "Evaluee". The answer "for breaking the rules" fills up the second blank because it is an FE of type "Reason". Prompt Fill in the blanks in the sentence based on the provided frame, lexical unit and FE type. Generate the spans that fill up the blanks ONLY. Do NOT generate the whole sentence or existing parts of the sentence. Separate the generated spans of different blanks by a comma. Generate the output of the task instance ONLY. Do NOT include existing words or phrases before or after the blank. Task Input Frame: Experiencer_obj. Lexical Unit: please.v. Sentence: This way <mask> are never pleased <mask> . FE Type: Experiencer, Stimulus. Task Output Table 8: Example prompts for GPT-4 models. Texts in green only appear in FE-Conditioning and Frame-FE-Conditioning models. Texts in orange only appear in Frame-FE-Conditioning models. ing its generated sentences most closely match the gold-standard FE spans in terms of meaning. For reference-free metrics, GPT-4 | FE performs well in both log perplexity and FE fidelity, showcasing its ability to produce the most fluent and semantically coherent generations. G More on Augmentation Experiments G.1 Experiments using Non-oracle Augmentation Strategy To evaluate the robustness and generalizability of our model under realistic conditions, we employed an augmentation strategy similar to that used by Pancholy et al. (2021). Specifically, we remove all annotated sentences of 150 randomly selected verb LUs from the full text training data and train our baseline parser using the remaining training data. Our full model was trained on instances of the 150 verb LUs re-generated by our framework along with the data used to train the baseline model. As a result, the test F1 scores for the baseline model and full model were 0.689 and 0.690, respectively, which echos the lack of significant improvement using the oracle augmentation strategy. G.2 Experiments on Verb-only Subset Since our generation method mainly focuses on augmenting verb LUs, we conduct additional augmentation experiments using a subset of FrameNet that includes only verb LU instances. To ensure model performance on a subset of data, we incorporate lexicographic data with verb LUs into our training set, resulting in a training set enriched with 80.2k examples, a development set comprising approximately 600 examples, and a test set containing about 2k examples. We experimented with different augmentation percentages both with and without filtering, as shown in Table 11. We use an oracle augmenter to augment LUs inversely proportional to their F1 scores from the unaugmented experiments. To expand coverage on more LUs during augmentation, we augment all LUs rather than limiting to those with F1 scores below 0.75. Although the improvements are marginal, the outcome from filtered augmentations is generally better than those from their unfiltered counterparts. G.3 Experiments on Multiple Candidate Generations In the main experiments conducted in this paper, we generated one instance for each LU-sentence pair. However, instances could be filtered out due to inconsistent FE spans, which could hurt generation diversity. To address this, we further experimented with generating three candidate instances for each LU-sentence pair to improve generation coverage. Specifically, we augmented the full-text training data by 25% under both the 1-candidate and 3-candidate settings. However, as shown in Table 12, generating three candidates did not lead to performance improvements in the F1 score. This suggests that simply increasing the number of generated candidates may not be sufficient to enhance 690 Frame LU Sentence Original FEs GPT-4 | FE Human Eval. Verification verify.v (confirm.v) The bank, upon confirming <Unconfirmed_content>, released the goods to the customer. compliance with the terms of the credit the transaction details ✓✓ Distributed _position blanket.v (line.v) <Theme> lines <Location> and the lake is covered with ice. snow many feet deep, the land the first snowfall, the shore ✓✓ Being_located sit.v (stand.v) Against the left-hand wall nearest to the camera are three storage shelves; <Theme> stands <Location>. a lidless unvarnished coffin in the process of construction, on the middle shelf a tall vase, on the top shelf ✓✓ Evoking conjure.v (evoke.v) A name like Pauline Gascoyne inevitably evoke <Phenomenon>. an image of a bimbo Gazza in a GTi memories of a bygone era ✓✓ Event happen.v (take place.v) Jamaicans appear to worry little about the future; sometimes it seems that they worry little even about what takes place <Time>. in the next few minutes tomorrow ✓✓ Self_motion climb.v (walk.v) My mother parked her bicycle in the shoulder and took my hand, and we walked <Goal>. to the top of the hill to the park ✓✓ Process_materialsstain.v (process.v) If you accidentally process <Material> <Alterant>, leave it for a week or two. walls, with woodworm fluid the wood, too much ✓× Self_motion creep.v (make.v) Matilda took the knife she had been eating with, and all four of them make <Path>. towards the dining-room door their way to the living room ✓× Hunting hunt.v (fish.v) <Food> too were mercilessly fished and often left, plucked and dying, where the sealers found them. The albatrosses The penguins × ✓ Change_position _on_a_scale dip.v (rise.v) <Attribute> rose <Final _value> in the summer, but has recently climbed above $400 and last night was nudging $410. The price per ounce, below $360 The price, to $410 × ✓ Table 9: Example Generations of GPT-4 | FE, our best model according to human acceptance. The two marks in human evaluation represent whether the generations satisfy the two criteria individually: (1) sentence-level semantic coherence and (2) preservation of all FE types. A sentence is deemed acceptable only when it satisfies both criteria. The new replacement LUs are presented in orange or parentheses. Masked FE spans are presented in teal and their corresponding FE types in angle brackets. 691 BARTScore ROUGE-1 ROUGE-L Perp. FE Fid. Human 4.82 T5 -5.939 0.301 0.298 447.874 0.829 T5 | FE -5.922 0.318 0.316 434.231 0.840 T5 | Frame + FE -6.179 0.276 0.274 441.639 0.843 GPT-4 -4.060 0.228 0.227 85.820 0.880 GPT-4 | FE -4.336 0.218 0.217 82.977 0.930 GPT-4 | Frame + FE -4.395 0.210 0.209 87.548 0.929 Table 10: Log BARTScore, ROUGE scores and perplexity of generations on FrameNet test set without LU replacement. All LUs F1 Aug. LUs F1 Unaugmented 0.751 0.779 5% Aug. w/o filter 0.745 0.778 5% Aug. w/ filter 0.752 0.781 25% Aug. w/o filter 0.752 0.776 25% Aug. w/ filter 0.753 0.781 Table 11: F1 score of all verb LUs and augmented LUs in augmentation experiments using different percentages of augmentations generated by T5 | FE with and without filtering, compared to baseline results without data augmentation. Best results are in boldface generation diversity. Future work may need to explore more effective strategies to improve the diversity of generated data. All LUs F1 Unaugmented 0.693 1-candidate 0.688 3-candidate 0.673 Table 12: F1 score of SRL parsers trained on unaugmented data and augmented data generated by T5 | FE under 1-candidate and 3-candidate strategies. H Effect of Filtering on Generation Diversity To examine the effect of filtering on the diversity of generated data, we have conducted experiments to compute the Self-BLEU scores to measure diversity for the same 1,000 instances discussed in §3.4. A lower Self-BLEU score indicates higher diversity, as it signifies less overlap within the generated texts. As demonstrated in Table 13, the diversity of the generated candidates increases after applying the filter, even surpassing the diversity of the original instances created by humans. This substantiates the effectiveness of our filtering process in Before Filtering After Filtering Human 0.298 T5 0.302 0.278 T5 | FE 0.295 0.277 T5 | Frame+FE 0.295 0.271 GPT-4 0.270 0.249 GPT-4 | FE 0.268 0.246 GPT-4 | Frame+FE 0.271 0.253 Table 13: Self-BLEU scores of the 1000 instances created in §3.4 before and after filtering. enhancing the variability and quality of the generated sentences. 692
2024_short
63
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 693–704 August 11-16, 2024 ©2024 Association for Computational Linguistics DUAL-REFLECT: Enhancing Large Language Models for Reflective Translation through Dual Learning Feedback Mechanisms Andong Chen♠, Lianzhang Lou♣, Kehai Chen*♠, Xuefeng Bai♠, Yang Xiang♣, Muyun Yang♠, Tiejun Zhao♠, Min Zhang♠ ♠School of Computer Science and Technology, Harbin Institute of Technology, China ♣Pengcheng Laboratory, Shenzhen, China [email protected], {loulzh, xiangy}@pcl.ac.cn {chenkehai, baixuefeng, yangmuyun, tjzhao, zhangmin2021}@hit.edu.cn, Abstract Recently, large language models (LLMs) enhanced by self-reflection have achieved promising performance on machine translation. The key idea is guiding LLMs to generate translation with human-like feedback. However, existing self-reflection methods lack effective feedback information, limiting the translation performance. To address this, we introduce a DUAL-REFLECT framework, leveraging the dual learning of translation tasks to provide effective feedback, thereby enhancing the models’ self-reflective abilities and improving translation performance. The application of this method across various translation tasks has proven its effectiveness in improving translation accuracy and eliminating ambiguities, especially in translation tasks with low-resource language pairs1. 1 Introduction Large language models (LLMs) have recently demonstrated remarkable abilities across a variety of tasks (Bubeck et al., 2023a; Xu and Poo, 2023; Zhao et al., 2023). Notably, in the field of machine translation, LLMs have improved translation quality by adopting human-like methods of self-reflection (Shinn et al., 2023; Liang et al., 2023). The self-reflection process primarily relies on using LLMs to iteratively refine initial drafts through feedback loops, a method that has been widely researched and explored (Shinn et al., 2023; Park et al., 2023; Scheurer et al., 2022; Le et al., 2022; Welleck et al., 2022; Amabile, 1983; Flower and Hayes, 1981; Chen et al., 2023b; Simon, 1962; Chen et al., 2023a; Sun et al., 2021a). The lack of effective feedback limits the self-reflective capacity of Large Language Models (LLMs), thereby affecting their continuous * Corresponding author. 1Our code is available at https://github.com/ loulianzhang/Dual-Reflect. 🤔 Determine whether the following two sentences provided by user convey the same meaning and style, including subtleties. ....the term "白了" in this context implies a sense...... The word "disdainful" captures this connotation  🔍 Figure 1: DUAL-REFLECT first obtains an initial translation result, then performs back-translation, and LLMs reflect on the differences between the backtranslation results and the original source content to obtain feedback signals, ultimately optimizing the translation outcome. improvement in translation (Tyen et al., 2023; Liang et al., 2023; Lou et al., 2023). To address this, we introduce a framework that leverages the inherent duality property (He et al., 2016; Qin, 2020; Sun et al., 2021b; Yi et al., 2017; Xia et al., 2017) of translation tasks to provide effective feedback to LLMs, thereby enhancing their reflective capabilities and consequently improving translation performance. This method, named DUAL-REFLECT, stands for DUAL learning enhanced auto-REFLECtive Translation and comprises five stages: Draft Translation, Back Translation, Process Assessment, Dual-Reflection, Auto Revision. In the draft translation stage, LLMs employ their inherent translation capabilities to generate a draft translation. Subsequently, in the Back Translation stage, LLMs translate the draft translation back to the source language. Then, during the process assessment stage, an LLM-based agent is introduced to assess whether dual reflection is needed. If not, it outputs the 693 final result; otherwise, the process continues to cycle through all the steps. Based on this, in the dual reflection stage, LLMs reflect on the differences between the back-translation results and the initial source input, revealing potential translation biases. LLMs further analyze the reasons for these discrepancies and propose suggestions for improvement. Finally, In the autorevision stage, LLMs modify the initial translation by incorporating the analysis and improvement suggestions obtained through dual reflection. We verify the effectiveness of the DUALREFLECT framework across four translation directions in the WMT22, covering high, medium, and lower resource languages, as well as a commonsense reasoning MT Benchmark. Automatic evaluation results show that DUAL-REFLECT outperforms strong baseline methods, significantly enhancing translation performance. Notably, on low-resource translation tasks, DUAL-REFLECT achieved an average result that surpassed ChatGPT by +1.6 COMET. In addition, DUAL-REFLECT enhanced ChatGPT exceeded GPT-4 on the commonsense reasoning MT benchmark. Further human evaluation demonstrates that DUALREFLECT shows a better ability to resolve translation ambiguities compared to other methods. 2 Approach: DUAL-REFLECT Our DUAL-REFLECT framework consists of Five key stages, described in detail as follows: 2.1 Stage-1: Draft Translation In the draft translation stage, LLMs utilize their inherent translation capabilities to generate a draft translation from the source language Ls to the target language Lt. The instruction template for this translation task is as follows: Translation Instruction: Translate the following text from Ls to Lt: Input Text: Source Sentence x Output Text: Target Sentence y 2.2 Stage-2: Back Translation In this stage, the same instruction as used in the draft translation stage is adopted. The goal is to back-translate the initial translation result from the target language Lt back to the source language Ls, with the output being x′. 2.3 Stage-3: Process Assessment We introduce an evaluation agent, denoted as PA, to supervise and control the entire translation process. This Agent has two different modes: Judgment Mode: PA determines whether it can accurately identify the differences between x and x′ within a given specific number of iterations. If PA(x, x′) = False, the Dual Reflection stage is terminated; otherwise, the entire process continues. Stage-3: Judgment Mode: If you are a Ls linguist, Determine whether the following two sentences provided by user convey the same meaning and style, including subtleties. If so, give ’False’ response without any explanation, otherwise give ’True’ response and explain the reason. Input Text: Source Sentence x and Back Translation Output x′ Output Text: ’True’ or ’False’ Pattern Extraction: In the judgment mode, once determined to be True or after exceeding the predefined number of iterations, PA is responsible for extracting the final translation result from the entire output, denoted as PA(x, x′) = final_translation. Stage-3: Pattern Extraction: Therefore, Pattern Extraction : Please summarize the input information, you need to extract the final translation result from the paragraph. Now, please output your answer in JSON format, as follows: {′final_translation′ :′′}. Please strictly follow the JSON format and do not output irrelevant content. Input Text: Target Sentence y Output Text: {’final_translation’: ’extraction result’} 2.4 Stage-4: Dual Reflection The goal of the dual reflection stage is to reflect on the differences between the source sentences generated by back-translation and the initial source input. Then, it outputs analysis results and proposes suggestions to enhance translation performance. 694 Dual Reflection Instruction: Compare the the two sentences provided by the user. It aims to analyze the disparities between them in meaning, style, and subtleties, first provide analytical results, and then suggest how to revise them to make the two sentences consistent. Input Text: Source Sentence x′ and x Output Text: Analysis Results (AR) and Translation Suggestions (TS) 2.5 Stage-5: Auto Revision In this stage, utilizing the output of the dual reflection and the original source sentences as input, the original source sentences are re-translated (from Ls to Lt). Auto Revision Instruction: Translate the following text from Ls to Lt: Input Text: Analysis Results (AR), Translation Suggestions (TS) and x Output Text: Target Sentence y 3 Experiments 3.1 Experimental Setup Test Data. To mitigate concerns of data leakage as highlighted by Bubeck et al., 2023b, Garcia et al., 2023, and Zhu et al., 2023, we leveraged the WMT222 (Kocmi et al., 2022) and WMT233 (Kocmi et al., 2023) test set in our evaluation framework. Additionally, to further evaluate DUAL-REFLECT’s performance in complex translation tasks, we employed the Commonsense Reasoning MT dataset (He et al., 2020), consisting of Chinese→English translation examples. See Appendix A.1 for specific details. Comparing Systems. In our evaluation, the DUAL-REFLECT framework is compared with a range of models, including ChatGPT (Ouyang et al., 2022), GPT-44 (Achiam et al., 2023), Alpaca2https://www.statmt.org/wmt22/index.html 3https://www2.statmt.org/wmt23/ 4The ChatGPT and GPT-4 models used in this work are accessed through the gpt-3.5-turbo and gpt-4 APIs, respectively. 7B5, Vicuna-7B6, ReRank (He et al., 2023), SelfReflect (Shinn et al., 2023), MAD (Liang et al., 2023), and MAPS (He et al., 2023). See Appendix A.2 for specific details. Evaluation Metrics. In evaluating our translation methodology, we initially employ COMET7 (Rei et al., 2022a) and BLEURT8 (Sellam et al., 2020) as automatic metrics, aligning with the established standards in LLM-based translation literature (He et al., 2023; Huang et al., 2024). To further evaluate our translation method, we employ human evaluations to verify translation performance and the ability to resolve translation ambiguities. Details on human evaluations are in Appendix B.4. 3.2 Main Results The main results of WMT22 and the Commonsense MT are presented in Tables 1 and 2. The results of WMT23 are presented in Appendix B.3. Based on these outcomes, we derive the subsequent insights: The effectiveness of DUAL-REFLECT has been validated across a wide range of settings. As shown in Table 1, across 4 language pairs, 3 LLMs, and 2 metrics, DUAL-REFLECT achieves the best performance compared to other methods. Specifically, DUAL-REFLECT demonstrates an average improvement of +1.18 COMET over the baseline ChatGPT and +0.75 COMET over the Self-Reflect methods. In the low-resource Cs→Uk translation task, DUALREFLECT surpasses ChatGPT and MAPS by +2.2 and +1.4 COMET, respectively. Additionally, Table 5 shows the remaining five low-resource tasks from WMT22, with an average increase of +0.7 COMET. These improvements indicate that DUAL-REFLECT has broad applicability across different levels of resource availability and language similarity, especially exhibiting more pronounced improvements in language pairs with lower resources. The effectiveness of DUAL-REFLECT in commonsense reasoning translation tasks. The results, presented in Table 2, show that in commonsense reasoning translation tasks, DUALREFLECT significantly outperforms other methods, achieving the best translation performance. 5https://huggingface.co/tatsu-lab/alpaca-7bwdiff/tree/main 6https://huggingface.co/lmsys/vicuna-7b-v1.5 7https://huggingface.co/Unbabel/wmt22-comet-da 8https://github.com/lucadiliello/bleurt-pytorch 695 Methods En→De En→Ja Cs→Uk En→Hr COMET BLEURT COMET BLEURT COMET BLEURT COMET BLEURT ChatGPT 85.8 75.6 87.9 66.3 88.0 75.0 85.9 75.0 +5-shot 86.5 76.3 88.2 67.1 88.3 86.4 +Rerank 86.0 75.9 88.0 66.6 88.3 75.3 86.3 75.4 +Refine 85.9 76.0 88.1 66.4 89.0 74.5 86.1 75.6 +Refine_cos 86.2 76.3 88.4 66.8 89.5 75.0 86.4 75.9 +MAPS 86.4 76.3 88.5 67.4 88.8 76.1 86.5 76.0 +Self-Reflect 86.3 76.1 88.3 66.9 88.4 76.0 86.3 75.8 +DUAL-REFLECT 86.5 76.4 88.7 67.9 90.2 77.3 86.9 76.4 Alpaca-7B 75.5 62.2 56.6 31.4 74.1 52.4 65.9 53.2 +5shot 76.3 62.8 57.9 31.9 75.9 53.1 67.9 53.6 +MAPS 76.7 63.5 58.2 33.9 76.3 53.7 68.1 54.2 +DUAL-REFLECT 78.1 64.1 61.0 34.7 77.5 54.3 69.5 55.4 Vicuna-7B 79.8 67.4 82.3 58.7 74.9 57.8 69.3 57.7 +5shot 80.3 67.8 83.3 59.3 76.3 58.3 70.2 58.1 +MAPS 81.1 68.4 84.4 60.3 77.2 59.6 71.1 58.8 +DUAL-REFLECT 82.0 69.1 85.1 61.1 78.3 60.7 72.9 60.4 Table 1: The main results from the WMT22 benchmark are presented. ChatGPT, Alpaca-7B, and Vicuna-7B mean to perform translation directly through Zero-Shot. The bold indicates the highest values that are statistically significant, with p-values less than 0.05 in the paired t-test against all compared methods. Compared to the Self-Reflect method, it showed an improvement of +1.3 COMET, indicating more effective error correction capabilities. Moreover, DUAL-REFLECT also surpassed the MAD method, which relies on feedback from multiagent debate, demonstrating the high quality of its feedback. Notably, in translation tasks involving logical reasoning, DUAL-REFLECT’s performance even exceeded that of GPT-4, suggesting reasoning abilities. Methods AutoMetrics COMET BLEURT GPT-4 82.0 71.0 ChatGPT +Zero-Shot 79.7 68.2 +Rerank 80.9 68.9 +Refine 80.4 68.5 +Refine_cos 80.8 68.8 +MAPS 81.9 +Self-Reflect 80.9 68.7 +MAD 82.0 69.4 +DUAL-REFLECT 82.2 71.8 Table 2: The main results from the Commonsense MT benchmark are presented. The bold indicates the highest value. The bold indicates the highest values, statistically significant with p-values less than 0.05 in the paired t-test against compared methods. 4 Analysis We thoroughly analyze our approach, with results primarily reported on CommonsenseMT Zh→En unless stated otherwise. 4.1 The Effectiveness of Dual Learning In this study, we explore the potential positive impact of a dual learning feedback mechanism on translation performance, as shown in Figure 2. The horizontal axis denotes ∆D = 100 −COMET(x, x′), the disparity between the original sentence x and its back-translated version x′. The vertical axis quantifies improvement in translation performance, as a COMET metric difference (∆C), between DUAL-REFLECT and ChatGPT. Findings show a correlation coefficient of 0.46, indicating that feedback from dual learning improves the model’s reflective capabilities, thus enhancing translation accuracy. Additionally, the experimental data shows significant differences between the output x′ and the original source sentence x in the initial back-translation (∆D > 50), further confirming the universality of differences obtained from the dual learning in translation tasks. 0 10 20 30 40 50 60 70 D:Source-Back Translation Difference 10 0 10 20 30 40 50 C: Performance Improvement Figure 2: Effectiveness experiment of Dual Learning, each point represents a translation data from the test set. 696 4.2 Human Evaluation In terms of human evaluation, this study follows the method of Liang et al., 2023 to assess translation outcomes from two main dimensions: accuracy in ambiguity resolution and direct assessment of translation quality (details in Appendix B.4). The experimental results are presented in Table 3. Regarding the accuracy of ambiguity resolution, DUAL-REFLECT performs the best, indicating that dual feedback contributes to better disambiguation in translation tasks. In terms of human evaluation, DUAL-REFLECT receives the highest ratings, further demonstrating that the method achieves superior translation quality. Methods Human Evaluation Score ACC GPT-4 3.9 69.8 ChatGPT +Zero-Shot 3.1 63.8 +Rerank 3.3 66.8 +Self-Reflect 3.4 64.9 +MAD 3.7 76.2 +DUAL-REFLECT 4.2 77.4 Table 3: The human-annotated results of the Commonsense MT benchmark. 4.3 Examine how iteration rounds affect results In this experimental design, we require reviewer PA to determine the final answer (PA(x, x′) = final_translation) in each iteration, rather than allowing adaptive termination of iterations as described in Section 2.3. Figure 3 in the Appendix presents the outcomes, revealing DUAL-REFLECT’s superior performance over the benchmark method as iterations progress, notably achieving the highest COMET score in three iterations. This emphasizes DUAL-REFLECT’s ability to provide improved translations through repeated iterations, demonstrating the effectiveness and robustness of its dual learning feedback mechanism. 5 Case Study This section presents a case study on the DUALREFLECT method, assessing its effectiveness and constraints via examples (detailed in Appendix B.5). Positive instances (Figures 4, 5 and 6) illustrate substantial enhancements in translation accuracy and semantic coherence due to its reflective and iterative processes. Conversely, negative examples (Figures 7 and 8) highlight Figure 3: Translation performance varies based on the iteration of either DUAL-REFLECT or MAD. the dependency of DUAL-REFLECT’s success on Back Translation quality, suggesting limitations in its capacity for improvement. This underscores the method’s reliance on the integrity of each cycle component for optimal performance. This achievement has break the world record. 这个成就打破了世界记录。 The two sentences do not convey the same meaning and style. The first sentence "这个成绩平了世界记录" means "This score has tied the world record." It implies that the score achieved is equal to the existing world record. The second sentence "这个成就已经打破了世界纪录" means "This achievement has already broken the world record." It implies that the achievement has surpassed the previous world record. Suggestion: 1. 平了 (píng le) - This phrase means "tied" or "equal to" in English. 2. 世界记录 (shìjiè jìlù) - This phrase means "world record" in English. This score has tied the world record. 这个分数追平了世界纪录。 This score has tied the world record. Input: 这个成绩平了世界记录。 (This achievement equalled the world record.) Figure 4: The DUAL-REFLECT methodology for translating positive examples within Chinese sentences. 6 Conclusion We introduced DUAL-REFLECT, an LLM-based machine translation method, that leverages dual learning to improve reflection and performance, excelling in resource-limited and common sense reasoning scenarios, with human evaluations confirming its effectiveness. 697 7 Limitations The DUAL-REFLECT framework enhances the reflective capabilities of LLMs in translation tasks by leveraging the duality nature of translation but has several limitations. Firstly, models with stronger reflective capabilities will obtain better feedback, thereby enhancing more performance. Additionally, since our method requires multiple steps, it necessitates a significant amount of computational resources. 8 Ethics Statement One of the core design principles of the DUALREFLECT framework is a strict respect for intellectual property rights. This applies to both the methods and algorithms developed within the framework as well as those cited from the literature, all adhering strictly to copyright laws. Additionally, the framework upholds this principle in the handling of translation content, ensuring its use does not infringe upon the rights of original creators. The framework also places a strong emphasis on responsibility during the automated translation process. By integrating stages of reflection and revision, DUAL-REFLECT enhances the transparency and interpretability of the translation methodology, thereby effectively identifying and correcting potential errors in the translation process. 9 Acknowledgements We want to thank all the anonymous reviewers for their valuable comments. This work was supported by the National Natural Science Foundation of China (62276077, 62376075, U1908216, 62376076 and 62106115), Guangdong Basic and Applied Basic Research Foundation (2024A1515011205), Shenzhen College Stability Support Plan (GXWD20220811170358002, GXWD20220817123150002), Key R&D Program of Yunnan (202203AA080004), and Major Key Project of PCL under Grant No. PCL2022D01. References Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. Gpt-4 technical report. arXiv preprint arXiv:2303.08774. Teresa M Amabile. 1983. A theoretical framework. The Social Psychology of Creativity, pages 65–96. Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. 2023a. Sparks of artificial general intelligence: Early experiments with gpt-4. arXiv preprint arXiv:2303.12712. Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott M. Lundberg, Harsha Nori, Hamid Palangi, Marco Túlio Ribeiro, and Yi Zhang. 2023b. Sparks of artificial general intelligence: Early experiments with GPT-4. CoRR. Andong Chen, Yuan Sun, Xiaobing Zhao, Rosella Galindo Esparza, Kehai Chen, Yang Xiang, Tiejun Zhao, and Min Zhang. 2023a. Improving lowresource question answering by augmenting question information. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 10413–10420, Singapore. Association for Computational Linguistics. Andong Chen, Feng Yao, Xinyan Zhao, Yating Zhang, Changlong Sun, Yun Liu, and Weixing Shen. 2023b. Equals: A real-world dataset for legal question answering via reading chinese laws. In Proceedings of the Nineteenth International Conference on Artificial Intelligence and Law, pages 71–80. Pinzhen Chen, Zhicheng Guo, Barry Haddow, and Kenneth Heafield. 2023c. Iterative translation refinement with large language models. arXiv preprint arXiv:2306.03856. Linda Flower and John R Hayes. 1981. A cognitive process theory of writing. College composition and communication, 32(4):365–387. Xavier Garcia, Yamini Bansal, Colin Cherry, George Foster, Maxim Krikun, Melvin Johnson, and Orhan Firat. 2023. The unreasonable effectiveness of few-shot learning for machine translation. In International Conference on Machine Learning, pages 10867–10878. PMLR. Di He, Yingce Xia, Tao Qin, Liwei Wang, Nenghai Yu, Tie-Yan Liu, and Wei-Ying Ma. 2016. Dual learning for machine translation. Advances in neural information processing systems, 29. Jie He, Tao Wang, Deyi Xiong, and Qun Liu. 2020. The box is in the pen: Evaluating commonsense reasoning in neural machine translation. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 3662–3672. Zhiwei He, Tian Liang, Wenxiang Jiao, Zhuosheng Zhang, Yujiu Yang, Rui Wang, Zhaopeng Tu, Shuming Shi, and Xing Wang. 2023. Exploring human-like translation strategy with large language models. ArXiv, abs/2305.04118. 698 Yichong Huang, Xiaocheng Feng, Baohang Li, Chengpeng Fu, Wenshuai Huo, Ting Liu, and Bing Qin. 2024. Aligning translation-specific understanding to general understanding in large language models. arXiv preprint arXiv:2401.05072. Tom Kocmi, Eleftherios Avramidis, Rachel Bawden, Ondˇrej Bojar, Anton Dvorkovich, Christian Federmann, Mark Fishel, Markus Freitag, Thamme Gowda, Roman Grundkiewicz, et al. 2023. Findings of the 2023 conference on machine translation (wmt23): Llms are here but not quite there yet. In Proceedings of the Eighth Conference on Machine Translation, pages 1–42. Tom Kocmi, Rachel Bawden, Ondˇrej Bojar, Anton Dvorkovich, Christian Federmann, Mark Fishel, Thamme Gowda, Yvette Graham, Roman Grundkiewicz, Barry Haddow, et al. 2022. Findings of the 2022 conference on machine translation (wmt22). In Proceedings of the Seventh Conference on Machine Translation (WMT), pages 1–45. Hung Le, Yue Wang, Akhilesh Deepak Gotmare, Silvio Savarese, and Steven Chu Hong Hoi. 2022. Coderl: Mastering code generation through pretrained models and deep reinforcement learning. Advances in Neural Information Processing Systems, 35:21314–21328. Tian Liang, Zhiwei He, Wenxiang Jiao, Xing Wang, Yan Wang, Rui Wang, Yujiu Yang, Zhaopeng Tu, and Shuming Shi. 2023. Encouraging divergent thinking in large language models through multi-agent debate. arXiv preprint arXiv:2305.19118. Lianzhang Lou, Xi Yin, Yutao Xie, and Yang Xiang. 2023. CCEval: A representative evaluation benchmark for the Chinese-centric multilingual machine translation. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 10176–10184, Singapore. Association for Computational Linguistics. Yasmin Moslem, Rejwanul Haque, and Andy Way. 2023. Adaptive machine translation with large language models. arXiv preprint arXiv:2301.13294. Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730– 27744. Joon Sung Park, Joseph C. O’Brien, Carrie Jun Cai, Meredith Ringel Morris, Percy Liang, and Michael S. Bernstein. 2023. Generative agents: Interactive simulacra of human behavior. In Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology, UIST 2023, San Francisco, CA, USA, 29 October 2023- 1 November 2023, pages 2:1–2:22. ACM. Tao Qin. 2020. Dual learning. Springer. Ricardo Rei, José GC De Souza, Duarte Alves, Chrysoula Zerva, Ana C Farinha, Taisiya Glushkova, Alon Lavie, Luisa Coheur, and André FT Martins. 2022a. Comet-22: Unbabel-ist 2022 submission for the metrics shared task. In Proceedings of the Seventh Conference on Machine Translation (WMT), pages 578–585. Ricardo Rei, Marcos Treviso, Nuno M Guerreiro, Chrysoula Zerva, Ana C Farinha, Christine Maroti, José GC De Souza, Taisiya Glushkova, Duarte M Alves, Alon Lavie, et al. 2022b. Cometkiwi: Istunbabel 2022 submission for the quality estimation shared task. arXiv preprint arXiv:2209.06243. Jérémy Scheurer, Jon Ander Campos, Jun Shern Chan, Angelica Chen, Kyunghyun Cho, and Ethan Perez. 2022. Training language models with natural language feedback. arXiv preprint arXiv:2204.14146, 8. Thibault Sellam, Dipanjan Das, and Ankur P Parikh. 2020. Bleurt: Learning robust metrics for text generation. arXiv preprint arXiv:2004.04696. Noah Shinn, Federico Cassano, Edward Berman, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao. 2023. Reflexion: Language agents with verbal reinforcement learning. Herbert A Simon. 1962. The architecture of complexity. Proceedings of the American philosophical society, 106(6):467–482. Yuan Sun, Andong Chen, Chaofan Chen, Tianci Xia, and Xiaobing Zhao. 2021a. A joint model for representation learning of tibetan knowledge graph based on encyclopedia. Transactions on Asian and Low-Resource Language Information Processing, 20(2):1–17. Yuan Sun, Chaofan Chen, Andong Chen, and Xiaobing Zhao. 2021b. Tibetan question generation based on sequence to sequence model. Computers, Materials & Continua, 68(3). Gladys Tyen, Hassan Mansoor, Peter Chen, Tony Mak, and Victor C˘arbune. 2023. Llms cannot find reasoning errors, but can correct them! arXiv preprint arXiv:2311.08516. Sean Welleck, Ximing Lu, Peter West, Faeze Brahman, Tianxiao Shen, Daniel Khashabi, and Yejin Choi. 2022. Generating sequences by learning to selfcorrect. arXiv preprint arXiv:2211.00053. Yingce Xia, Tao Qin, Wei Chen, Jiang Bian, Nenghai Yu, and Tie-Yan Liu. 2017. Dual supervised learning. In International conference on machine learning, pages 3789–3798. PMLR. Bo Xu and Mu-ming Poo. 2023. Large language models and brain-inspired general intelligence. National Science Review, 10(10):nwad267. 699 Zili Yi, Hao Zhang, Ping Tan, and Minglun Gong. 2017. Dualgan: Unsupervised dual learning for imageto-image translation. In Proceedings of the IEEE international conference on computer vision, pages 2849–2857. Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, et al. 2023. A survey of large language models. arXiv preprint arXiv:2303.18223. Wenhao Zhu, Hongyi Liu, Qingxiu Dong, Jingjing Xu, Lingpeng Kong, Jiajun Chen, Lei Li, and Shujian Huang. 2023. Multilingual machine translation with large language models: Empirical results and analysis. arXiv preprint arXiv:2304.04675. A Experiment Setup A.1 Test Data For the WMT22 test set (Kocmi et al., 2022), the experimental analysis covers 9 language pairs. We used the full test dataset. Among these languages, En→De and En→Ja are classified as high-resource and medium-resource languages, respectively. In contrast, Cs↔Uk, En→Hr, Yakut↔Russian, and En↔Liv are categorized as low-resource languages. For the WMT23 test set (Kocmi et al., 2023), the experimental analysis covers 4 language pairs. We used the full test dataset. Among them, En→De and En→Ja are identified as high and mediumresource languages, with the former belonging to the same language family and the latter exhibiting significant differences. In contrast, Cs→Uk and En→Hr are categorized as low-resource languages, being closely related and belonging to the same language family, respectively. The Commonsense Reasoning MT dataset (He et al., 2020) encompasses vocabulary that requires common knowledge for resolution, along with instances of contextual/contextless grammatical ambiguity in Chinese-to-English translation data. Each translation data includes a source sentence and two contrasting translations, involving seven different types of common knowledge. Despite these elements appearing amenable to direct translation, such simplified interpretations are often misleading. A.2 Comparative Methods The following sections provide detailed descriptions of these comparisons. • Baseline, standard zero-shot translation is performed in ChatGPT (Ouyang et al., 2022) and GPT-4 (Achiam et al., 2023) with the temperature parameter set to 0, which is the default value for our experiments. • Rerank was conducted with the identical prompt as the baseline, employing a temperature of 0.3, in alignment with Moslem et al., 2023. Three random samples were generated and combined with the baseline to yield four candidates. The optimal candidate was chosen through Quality Estimation (QE). • Renfie (Chen et al., 2023c) first requests a translation from ChatGPT, then provides the source text and translation results, and obtains a refined translation through multiple rounds of modifications by mimicking the human correction process. Renfie_cos as a contrastive prompt to the Renfie, the work insert the word “bad” to hint that the previous translation is of low quality, regardless of its actual quality. • MAPS (He et al., 2023), incorporating the knowledge of keywords, topic words, and demonstrations similar to the given source sentence to enhance the translation process, respectively. • Self-Reflect (Shinn et al., 2023), This approach requires the LLM to scrutinize and refine its translation until it deems the current output satisfactory. • MAD (Liang et al., 2023) enhance the capabilities of large language models (LLMs) by encouraging divergent thinking. In this method, multiple agents engage in a debate, while a judge oversees the process to derive a final solution. B Experiment Results B.1 Results on Reference-free metric To further clarify the robustness of our evaluation, we incorporated COMET-KIWI9 (Rei et al., 2022b), a reference-free metric in the COMET series. The experimental results are shown in Table 4. These results demonstrate that our method still outperforms comparison methods in terms of COMET-KIWI scores, thereby further confirming the robustness of our evaluation. 9https://github.com/Unbabel/COMET 700 Methods En-De En-Ja Cs-Uk En-Hr ChatGPT +Rerank 82.1 84.4 83.6 83 +Self-Reflect 82.0 84.4 83.3 83.1 +Dual Reflection 82.4 84.7 84.2 83.8 Table 4: WMT22 evaluation results on COMET-KIWI metric. B.2 Results of Additional Low-Resourced Language Pairs To further analyze the performance of our method in lower resource tasks, we validate the effectiveness of the DUAL-REFLECT method on 5 other lower resource languages in the WMT22 task. The experimental results are shown in Table 5: The experimental results demonstrate that our method improves the translation performance in terms of COMET22 and BLEURT scores for these languages, further indicating the effectiveness of DUAL-REFLECT in lower-resource translation tasks. B.3 Results of WMT23 To further illustrate this point, we conducted additional experiments in WMT23 for the EN-DE , EN-JA , EN-HE, and CS-UK language pairs. The experimental results are shown in Table 6: Through our experiments on WMT23, we found that our method still outperforms multiple comparison methods, further demonstrating its effectiveness and generalizability. B.4 Human Evaluations In this section, we conduct human evaluation to measure translation quality. We assess coherence, fluency, and ambiguity resolution. Four english native speakers were invited to participate, and 50 samples were randomly selected from translations generated by different methods. For the content with Chinese ambiguity in Commonsense MT, we ensured the correctness of the source side understanding by confirming it with classmates whose native language is Chinese. For translation quality, each sentence was rated on a scale from 1 to 5, with 3 indicating a pass, 4 showing substantial consistency with the reference, and 5 being the highest score. The final score is the average of these four ratings. Additionally, in the CommonsenseMT task, the four experts scored each sample for ambiguity resolution against the reference, awarding 1 point for resolved and 0 points for unresolved. B.5 Case Study 701 Methods Sah→Ru Ru→Sah Uk→Cs En→Liv Liv→En COMET BLEURT COMET BLEURT COMET BLEURT COMET BLEURT COMET BLEURT ChatGPT 57.5 36.0 52.8 73.2 88.7 79.0 52.7 41.8 40.6 41.1 +5shot 58.3 36.0 53.1 75.4 89.6 79.1 55.3 42.1 42.7 40.9 +MAD 58.1 37.1 53.5 76.4 89.6 79.3 55.5 42.5 43.2 41.3 +OUR 59.5 37.9 54.5 76.9 90.0 80.1 56.0 43.3 43.6 41.7 Table 5: The main results for the WMT22 additional low-resourced language pairs are displayed. The highest values are highlighted in bold and have p-values less than 0.05. Methods En→De En→Ja En→He Cs→Uk COMET BLEURT COMET BLEURT COMET BLEURT COMET BLEURT ChatGPT 83.5 69.1 87.3 60.2 82.1 69.3 86.7 74.1 +5shot 83.7 69.4 87.8 61.5 82.5 69.8 87.3 74.5 +MAD 83.9 70.3 88.0 63.1 82.9 70.0 87.5 74.9 +OUR 84.3 71.0 88.5 63.6 83.1 71.7 88.1 75.2 Table 6: The main results from WMT23 are shown. The highest values are in bold, with p-values less than 0.05. Investigate the rumors. 打探这些谣言。 The two sentences do not convey the same meaning and style. The first sentence "打探风声" means "to inquire about rumors or news" and has a connotation of seeking information or gossip. The second sentence " 调查这些谣言" means "to investigate these rumors" and has a more formal and objective tone, implying a systematic investigation rather than just gathering information. Suggestion: 1. 打探 - This means "to inquire" or "to investigate" in a more informal or casual manner, often associated with seeking information or gossip. 2. 风声- This literally means "sound of the wind" but figuratively refers to rumors or news. It carries a connotation of informal or unverified information. Inquire about the rumors. 询问关于谣言的事情。 : Inquire about the rumors. Input: 打探风声。 (Inquire about the news.) Figure 5: The DUAL-REFLECT methodology for translating positive examples within Chinese sentences. 702 He has already become a person who has reached the top. 他已经成为一个达到巅峰的人。 The two sentences do not convey the same meaning and style. In the first sentence, "他已经是一个登上青云的人" , the phrase "登上青 云" is a metaphorical expression meaning to achieve great success. It implies that the person has achieved a significant accomplishment or reached a prestigious status. In the second sentence, "他已经成为一个已经达到巅峰的人" (He has already become someone who has already reached the peak), the phrase "已经达到巅峰" means to have already reached the peak or the pinnacle. It indicates that the person has achieved a high level of success or accomplishment, or the geographical topmost point. The first sentence emphasizes the metaphorical concept of attaining high-level success, while the second sentence focuses on reaching the peak. Therefore, these two sentences have different nuances and are not equivalent in meaning and style. Suggestion: 登上青云: The phrase is a metaphorical expression meaning to achieve great success or reach a high position. It implies that the person has achieved a significant accomplishment or reached a prestigious status . He has already become someone who has reached great heights.   他已经成为一个取得巨大成就的人。 : He has already become someone who has reached great heights. Input: 他已经是一个登上青云的人。 (He has became a man who has ascended the positions of senior officials.) Figure 6: The DUAL-REFLECT methodology for translating positive examples within Chinese sentences. The water in this article is excessive.  这篇文章的水过多。 : The water in this article is excessive.  Input:这篇文章的水太多。 (There is too much useless stuff in this article.) Figure 7: The DUAL-REFLECT methodology for translating negative examples within Chinese sentences. 703 The company has hired several decorative vases, specifically for entertaining guests. 公司雇了几个装饰花瓶,专门用于招待客人   Input:公司招聘了几个花瓶,是专门用来招待客人的。 (The company recruited several beautiful persons to entertain guests.) : The company has hired several decorative vases, specifically for entertaining guests. Figure 8: The DUAL-REFLECT methodology for translating negative examples within Chinese sentences. 704
2024_short
64
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 705–729 August 11-16, 2024 ©2024 Association for Computational Linguistics Towards Artwork Explanation in Large-scale Vision Language Models Kazuki Hayashi†, Yusuke Sakai†, Hidetaka Kamigaito†, Katsuhiko Hayashi‡, Taro Watanabe† †Nara Institute of Science and Technology ‡The University of Tokyo {hayashi.kazuki.hl4, sakai.yusuke.sr9, kamigaito.h, taro}@is.naist.jp [email protected] Abstract Large-scale Vision-Language Models (LVLMs) output text from images and instructions, demonstrating advanced capabilities in text generation and comprehension. However, it has not been clarified to what extent LVLMs understand the knowledge necessary for explaining images, the complex relationships between various pieces of knowledge, and how they integrate these understandings into their explanations. To address this issue, we propose a new task: the artwork explanation generation task, along with its evaluation dataset and metric for quantitatively assessing the understanding and utilization of knowledge about artworks. This task is apt for image description based on the premise that LVLMs are expected to have pre-existing knowledge of artworks, which are often subjects of wide recognition and documented information. It consists of two parts: generating explanations from both images and titles of artworks, and generating explanations using only images, thus evaluating the LVLMs’ language-based and vision-based knowledge. Alongside, we release a training dataset for LVLMs to learn explanations that incorporate knowledge about artworks. Our findings indicate that LVLMs not only struggle with integrating language and visual information but also exhibit a more pronounced limitation in acquiring knowledge from images alone 1. 1 Introduction In the field of Vision & Language (V&L), Large Language Models (LLMs) (Touvron et al., 2023; Chiang et al., 2023; Bai et al., 2023a; Jiang et al., 2023) have been combined with visual encoders to create Large Scale Vision Language Models (LVLMs) (Li et al., 2023b; Liu et al., 2024; Bai et al., 2023b; Ye et al., 2023b). These models have achieved success in various V&L benchmarks (Li 1The datasets (ExpArt=Explain Artworks) are available at https://huggingface.co/datasets/naist-nlp/ExpArt LVLM Mona Lisa sfumato Leonardo da Vinci Renaissance Portrait 16th century perspective Van Gogh Realism Employ sfumato, as da Vinci did, using a soft brush to blend colors for a gradient that captures Renaissance depth and realism. Impressionism I want to paint a renaissance style painting, how do I get a gradient? Figure 1: An example of creative assistance using an LVLM, harnessing comprehensive artistic knowledge for guidance. et al., 2023a; Fu et al., 2023; Liu et al., 2023c; Bai et al., 2023c). Despite these advancements, tasks like Visual Question Answering (VQA) (Zhang et al., 2022b; Yue et al., 2023), Image Captioning(Agrawal et al., 2019; Lin et al., 2014), and querying models about artwork-related information (Garcia et al., 2020; Cetinic, 2021; Bai et al., 2021) have primarily focused on assessing models’ abilities to handle isolated pieces of knowledge. These tasks, while valuable, do not fully capture the complexity of synthesizing and explaining interconnected knowledge in real-world scenarios (Kawaharazuka et al., 2024), nor the difficulty of generating coherent text to explain this knowledge. Current evaluations often result in superficial image descriptions, lacking extensive background knowledge and interrelationships between subjects. A pertinent example of this limitation can be observed in the context of creative support for paintings and photographs. As shown in Figure 1, these models must produce explanations that integrate knowledge of the artwork’s theme, historical context, associated works, and artistic movement, highlighting a gap in current capabilities. Since this task goes beyond simply recognizing disparate knowledge, it is crucial for LVLMs to deeply understand 705 Type Template Instruction Output Section Explain the {Section} of this artwork, {Title}. Explain the History of this artwork, Mona Lisa. Of Leonardo da Vinci’s works, the Mona Lisa is the only portrait whose authenticity... Subsection Explain the {Subsection} regarding the {Section} of this artwork, {Title}. Explain the Creation and date regarding the History of this artwork, Mona Lisa. The record of an October 1517 visit by Louis d’Aragon states that the Mona Lisa... Sub subsection Explain the {Sub subsection} details within the {Subsection} aspect of the {Section} in this artwork, {Title}. Explain the Creation details within the Creation and date aspect of the History in this artwork, Mona Lisa. After the French Revolution, the painting was moved to the Louvre, but spent a brief period in the bedroom of Napoleon (d. 1821) in the.... Table 1: Examples of instructions for the proposed task. The blue part indicates the artwork’s title and the red part indicates the names of sections in the original Wikipedia articles that correspond to their explanations. the interrelationships of artwork knowledge to integrate them into explanations comprehensively. To address this gap, we propose a new task and evaluation metrics designed to measure LVLMs’ capability in generating comprehensive explanations about artworks. Our task requires LVLMs to generate explanations in response to given instructions, based on input images and titles of artworks. We have constructed a dataset from about 10,000 English Wikipedia articles of artworks for this task and also release a training dataset to facilitate LVLMs in learning to generate explanations involving artistic knowledge. Furthermore, we have evaluated LVLMs currently achieving the highest performance in various V&L benchmarks. The results show that while the LVLMs retain the artistic knowledge inherited from their base LLMs, they do not adequately correlate this knowledge with the provided visual information. 2 LVLMs LVLMs (Li et al., 2023b; Liu et al., 2024; Bai et al., 2023b; Ye et al., 2023b) integrate a Vision Encoder (Li et al., 2023b) trained through contrastive learning to process visual information with Large Language Models (LLMs) (Li et al., 2023b; Liu et al., 2024; Bai et al., 2023b; Ye et al., 2023b). This integration requires further training to effectively combine vision and language capabilities. As a result, these LVLMs outperform conventional pretrained models, even those with over ten times more parameters (et al, 2022; Driess et al., 2023). However, it is unclear whether the knowledge from the LLM and the Vision Encoder are appropriately aligned by the additional network layers in LVLMs (Chen et al., 2024a). Generating explanations that involve knowledge about art especially requires careful and systematic alignment and utilization of the information from both the Vision Encoder and the LLM. This challenge motivates us to design a new task for LVLMs. 3 Task and Evaluation Metrics 3.1 Task Our task demands LVLMs to generate explanations following instructions with images and titles. Examples of the instructions are shown in Table 1. As demonstrated by these examples, each instruction is categorized into three levels, Section, Subsection, and Subsubsection, determined by the corresponding positions in Wikipedia articles (See §3). The proposed task addresses the following two settings with or without titles: With Title In the context of creative assistance, the title often contains the author’s intent for the artwork, and it is desirable to generate explanations considering this intent. In this setting, both the image and its title are inputs, testing whether LVLMs can generate appropriate explanations based on both language and visual information. Without Title As shown in Figure 1, there are cases where a title does not exist potentially because the artwork is in the process of creation. This setting tests whether LVLMs can generate appropriate explanations using only visual information from images. Additionally, analyzing the performance changes with and without titles allows us to verify the LVLMs’ pure vision-based knowledge. Furthermore, to thoroughly assess the generalization capabilities of LVLMs, we compare two cases: 1) a seen case in which images are observed during finetuning, and 2) an unseen case in which images are not observed during finetuning. 3.2 Evaluation Metrics Since our task is a kind of natural language generation (NLG), we utilize popular metrics in NLG 706 1. Title : 2. Description: 3. Entities: 1. Title : 2. Description: 3. Entities: 1. Title : 2. Description: 3. Entities: 1. Title : 2. Description: 3. Entities: 1. Title : 2. Description: 3. Entities: 1. Title : 2. abstraction: 3. Entities:≈ 1. Title : 2. abstraction: 3. Entities:≈ 1. Title : 2. abstraction: 3. Entities: 1. Title : 2. abstraction: 3. Entities: I. Title : II. Section: III. Entities: summary Ⅳ. Title Rank Mona Lisa 1 Girl with a Pearl Earring 2 The Scream 3 Guernica 4 Venus de Milo 5 Sunflowers 6 David 7 The Last Supper 8 Café Terrace at Night 9 The Starry Night 10 Girl with a Pearl Earring, Dutch, oil painting, Dutch Golden Age, Painter Johannes Vermeer, Mauritshuis, The Hague, tronie, turban, pearl, earring, Sibyl Filtering Abstraction Description Painting Conservation Templates test Templates train train test Seen Unseen STEP3 dev Title Section Ⅰ Section Ⅱ Entities Image Title SectionⅠ Entities Image Title Section Ⅰ Entities Image Title Section Ⅰ Entities Image STEP2 STEP1 STEP4 Image Image Image Figure 2: Workflow diagram illustrating the methodology for dataset creation from Wikipedia articles on artworks, involving selection, filtering, data balancing, and instructional templating for LVLM training and evaluation. Train Dev Test (Seen) Test (Unseen) Images 7,704 963 2,407 963 Instruction 18,613 2,677 2,485 2,597 Table 2: Number of Images and Data in the Created Dataset. for evaluation, i.e., BLEU (Papineni et al., 2002), ROUGE (Lin, 2004), and BERTScore (Zhang* et al., 2020). To further focus on the ability to generate explanations for artworks, we propose the following three evaluation metrics2: Entity Coverage We evaluate how accurately the generated text includes entities (See §4) related to the artwork mentioned in the reference description, using two settings: exact match and partial match (Li et al., 2022a). Entity F1 We evaluate the frequency of occurrence of entities related to the artwork found in the generated and reference explanations by F1. Inspired by ROUGE, we consider the highest frequency of occurrence of any entities within either the generated explanation or the reference as the upper limit of occurrence frequency to calculate precision and recall. Entity Cooccurrence This metric assesses not only the coverage of independent entities but also how their interrelations are contextually combined 2For the formulas of each metric, see Appendix C. to form the overall explanation. Specifically, it considers pairs of entities that co-occur within a sentence and its preceding and following n sentences, evaluating the coverage rate of these pairs to reveal how well the model understands and integrates the relevance of knowledge. By setting the value of n to exceed the number of sentences in the generated explanation, it becomes possible to account for the co-occurrence of entity pairs throughout the entire text. Furthermore, we apply the brevity penalty used in BLEU (Papineni et al., 2002) to verify the accuracy of knowledge at an appropriate length, defined by the reference text for each data instance. This ensures models produce concise, non-redundant explanations. 4 Dataset Creation The process of dataset creation, illustrated in Figure 2, involved the following steps: STEP 1: We collected all the artwork articles from the English Wikipedia that have an infobox (about 10,000), divided them into sections, and created descriptive texts. Additionally, hyperlinked texts within the articles were extracted as entities related to the artwork. Each descriptive text is accompanied by four pieces of information: the title, the hierarchy of sections (i.e., Section, Subsection, Subsubsection), the image, and the aforementioned entities. 707 LVLM Setting Size BLUE ROUGE BertScore Entity Cov. Entity F1 Entity Cooccurrence Avg. Length 1 2 L exact partial n=0 n=1 n=2 n=∞ With Title (Language information + Visual information) mPLUG-Owl2 Unseen 7B 1.16 26.8 5.9 17.1 83.3 13.3 21.1 15.6 1.61 1.38 1.35 1.29 100 LLaVA-NeXT (Vicuna-7B) Unseen 7B 0.81 16.5 3.7 11.0 80.8 9.0 14.1 10.6 0.83 0.74 0.73 0.69 119 LLaVA-NeXT (Vicuna-13B) Unseen 13B 1.18 17.0 4.1 10.8 80.5 11.5 16.4 13.1 1.12 1.04 1.02 0.99 133 LLaVA-NeXT (Yi-34B) Unseen 34B 0.72 13.9 3.3 9.5 80.2 18.5 27.8 16.1 0.26 0.22 0.21 0.19 869 Qwen-VL-Chat Unseen 7B 1.64 28.2 6.8 17.4 83.5 17.8 26.3 20.8 1.90 1.66 1.63 1.57 155 Qwen-VL-Chat (FT) Unseen 7B 3.96 27.2 10.8 21.4 84.2 19.7 27.2 22.0 4.86 4.35 4.23 4.13 153 GPT-4-Vision Unseen 2.40 28.6 7.6 16.3 83.3 28.4 37.1 31.6 3.02 3.00 2.98 3.05 264 Without Title (Visual information) mPLUG-Owl2 Unseen 7B 0.21 23.3 3.58 15.0 82.3 4.0 10.5 4.3 0.26 0.29 0.26 0.24 91 LLaVA-NeXT (Vicuna-7B) Unseen 7B 0.13 16.0 2.21 10.6 80.1 1.8 6.3 1.8 0.07 0.10 0.10 0.11 125 LLaVA-NeXT (Vicuna-13B) Unseen 13B 0.17 16.6 2.35 11.0 80.8 2.1 7.1 2.2 0.07 0.08 0.08 0.07 164 LLaVA-NeXT (Yi-34B) Unseen 34B 0.15 11.5 1.88 8.1 78.7 3.5 10.5 2.8 0.03 0.03 0.02 0.02 903 Qwen-VL-Chat Unseen 7B 0.47 24.8 4.50 15.4 82.5 7.5 14.6 8.4 0.56 0.60 0.58 0.55 128 Qwen-VL-Chat (FT) Unseen 7B 2.07 24.5 7.79 18.6 83.4 12.9 19.6 14.7 2.25 2.03 2.00 1.96 153 GPT-4-Vision Unseen 0.10 23.1 4.43 13.2 81.9 11.6 19.0 12.3 1.18 1.35 1.37 1.34 223 Table 3: Results of LVLMs. Bold fonts indicate the best scores. Avg. Length averages generated token lengths. STEP 2: We filtered out sections that did not contribute directly to the understanding of artwork, articles without images, and texts not specific to individual art pieces to ensure the relevance and quality of the content. STEP 3: To prevent biases that may arise due to the notoriety of the artworks included in the LVLM’s training data, we shuffled the data. First, we ranked the data using six metrics: page views, number of links, number of edits, number of references, number of language versions, and article length. We then evenly split the data into test, development, and training sets at a ratio of 1:1:8 to maintain the average ranking across these sets (Table 2). As described in §3, for the Seen set, we used training images with no overlap in reference text to prevent leakage. For the Unseen set, neither images nor reference texts are from the training set. STEP 4: The sorted data for each set were then formatted into instructions using the templates described in Section 3.1. To diversify the training data, we prepared seven different templates inspired by Longpre et al. (2023) (see Appendix E.3). 5 Evaluation 5.1 Setup We evaluated four models: mPLUG-Owl2 (Ye et al., 2023b), LLaVA-NeXT (Liu et al., 2024), Qwen-VL-Chat (Bai et al., 2023b), and GPT-4 Vision (OpenAI, 2023), along with an instructiontuned version of Qwen-VL-Chat (FT), fine-tuned by our dataset with LoRA (Dettmers et al., 2022a).3 As shown in Table 2, the data is divided based on 3Further details for the evaluation setup and results for other models are described in Appendix D and Appendix A. images. In the Few-shot setting, by utilizing this data division, to prevent answer leakage in Fewshot samples, for test (Seen) evaluations, samples were randomly selected from the test (Unseen) set, and vice versa for test (Unseen) evaluations. 5.2 Results With and Without Title Table 3 shows the results In the "With Title" setting, GPT-4-Vision achieved the highest performance in Entity Coverage and Entity F1, with Qwen-VL-Chat (FT), Qwen-VL-Chat, and LLaVA-NeXT (Yi-34B-Chat) also showing strong performance. Notably, QwenVL-Chat (FT) reached the highest precision in Entity Cooccurrence, showcasing its exceptional ability to accurately contextualize knowledge within generated text. This proves the superiority of our instruction-tuning dataset. Additionally, considering the average reference token length is 174 in the unseen setting, the significantly low performance of LLaVA-Next (Yi-34B-Chat) indicates excessive token lengths may result in redundant text, which is unsuitable for generating concise explanations. In the "Without Title" setting, Qwen-VL-Chat (FT) outperformed GPT-4-Vision across all metrics, indicating that our dataset enables accurate knowledge association and generation from visual information. Comparative analysis of the models’ performance in scenarios with and without titles indicated a consistent drop in performance across the board. This observation clearly shows the challenges of generating text based solely on visual inputs. All models, including advanced ones like GPT-4-Vision, heavily depend on text-based cues. 3Since LLMs do not handle visual information, we conducted the analysis in a setting with titles. 708 LVLM Setting Size BLUE ROUGE BertScore Entity Cov. Entity F1 Entity Cooccurrence Avg. Length 1 2 L exact partial n=0 n=1 n=2 n=∞ With Title (Language information + Visual information) Qwen-VL-Chat Unseen 7B 1.64 28.2 6.8 17.4 83.5 17.8 26.3 20.8 1.90 1.66 1.63 1.57 155 Qwen-VL-Chat One-shot Unseen 7B 1.96 27.6 7.6 18.0 84.0 18.0 26.0 20.9 2.71 2.34 2.30 2.21 98 Qwen-VL-Chat Three-shot Unseen 7B 2.47 27.2 8.5 18.7 84.4 19.3 27.3 22.8 3.65 3.14 3.05 2.97 77 Qwen-VL-Chat (FT) Unseen 7B 3.96 27.2 10.8 21.4 84.2 19.7 27.2 22.0 4.86 4.35 4.23 4.13 153 Qwen-VL-Chat (FT) One-shot Unseen 7B 3.96 26.9 10.6 21.1 84.0 19.7 27.0 22.0 4.75 4.20 4.02 3.97 154 Qwen-VL-Chat (FT) Three-shot Unseen 7B 3.85 26.9 10.6 21.0 84.2 19.5 26.8 22.2 4.71 4.01 3.94 3.86 128 Qwen-VL-Chat Seen 7B 1.69 27.9 6.7 17.3 83.4 16.2 24.5 19.8 1.87 1.57 1.54 1.47 153 Qwen-VL-Chat One-shot Seen 7B 2.02 27.3 7.5 17.8 84.0 17.4 25.3 20.8 2.95 2.49 2.45 2.36 95 Qwen-VL-Chat Three-shot Seen 7B 2.34 26.5 8.22 18.3 84.3 17.9 25.8 21.3 3.43 2.72 2.69 2.61 74 Qwen-VL-Chat (FT) Seen 7B 4.13 27.6 11.4 21.8 84.5 19.8 27.4 23.5 5.47 4.43 4.30 4.19 133 Qwen-VL-Chat (FT) One-shot Seen 7B 4.06 27.4 11.1 21.6 84.4 19.8 27.3 22.7 5.43 4.45 4.40 4.30 134 Qwen-VL-Chat (FT) Three-shot Seen 7B 4.05 27.2 11.1 21.5 84.6 19.5 27.0 22.4 5.22 4.21 4.19 4.10 113 Without Title (Visual information) Qwen-VL-Chat Unseen 7B 0.47 24.8 4.50 15.4 82.5 7.5 14.6 8.4 0.56 0.60 0.58 0.55 128 Qwen-VL-Chat One-shot Unseen 7B 0.65 23.4 4.81 15.3 83.0 8.6 15.4 9.7 1.15 1.10 1.04 1.12 87 Qwen-VL-Chat Three-shot Unseen 7B 0.69 22.2 4.95 15.0 83.3 9.3 15.6 10.4 1.21 1.22 1.17 1.11 70 Qwen-VL-Chat (FT) Unseen 7B 2.07 24.5 7.79 18.6 83.4 12.9 19.6 14.7 2.25 2.03 2.00 1.96 153 Qwen-VL-Chat (FT) One-shot Unseen 7B 1.95 24.1 7.50 18.3 83.3 12.6 19.2 14.3 2.00 1.92 1.86 1.84 152 Qwen-VL-Chat (FT) Three-shot Unseen 7B 2.03 24.3 7.67 18.4 83.6 12.9 19.6 14.6 2.40 2.00 1.94 1.91 131 Qwen-VL-Chat Seen 7B 0.40 24.4 4.32 15.2 82.5 5.6 12.7 6.9 0.40 0.41 0.37 0.35 124 Qwen-VL-Chat One-shot Seen 7B 0.53 22.5 4.45 14.8 83.0 7.2 13.9 8.6 0.72 0.72 0.70 0.66 82 Qwen-VL-Chat Three-shot Seen 7B 0.69 22.2 4.95 15.0 83.3 9.3 15.6 10.4 1.21 1.22 1.17 1.11 68 Qwen-VL-Chat (FT) Seen 7B 2.09 24.9 8.00 18.9 83.8 12.4 19.4 15.0 2.19 1.85 1.82 1.78 127 Qwen-VL-Chat (FT) One-shot Seen 7B 1.99 24.4 7.72 18.5 83.6 11.5 18.7 14.0 1.89 1.55 1.51 1.48 130 Qwen-VL-Chat (FT) Three-shot Seen 7B 2.03 24.3 7.74 18.4 83.8 11.6 18.5 13.9 1.89 1.49 1.45 1.42 117 Table 4: Results of Fine-tuning and Few-shot settings for LVLMs. Bold fonts indicate the best scores. Avg. Length averages generated token lengths (see Figure 4). LLM Entity Cov. Entity F1 Entity Cooccurrence Avg. Length exact partial n=0 n=1 n=2 n=∞ With Title (Language information) Llama2 18.5 27.3 20.8 1.04 0.88 0.82 0.81 366 Vicuna 7B 12.3 18.6 14.1 1.43 1.33 1.32 1.23 129 Vicuna 13B 19.4 28.1 23.0 2.16 1.99 1.89 1.77 209 Yi-34B-Chat 17.9 25.4 13.0 0.93 0.86 0.83 0.81 745 Qwen-Chat 7.6 11.8 8.5 0.52 0.43 0.41 0.40 106 GPT-4 31.7 40.2 32.3 2.54 2.50 2.53 2.59 374 Table 5: Results of LLMs (Unseen4). Notations are the same as Table 3. LLMs vs. LVLMs Table 5 shows the results of explanation generation in the With Title setting without images for text-only LLMs. Notably, Table 5 illustrates that GPT-4 (OpenAI et al., 2023) achieves the highest accuracy across all metrics, demonstrating strong knowledge about artworks, closely followed by Llama2 (Touvron et al., 2023), Vicuna (Chiang et al., 2023) and Yi-34-Chat (01.AI, 2023). Conversely, Qwen-Chat (Bai et al., 2023a) is shown to perform comparatively lower. Additionally, the comparison of Tables 3 and 5 reveals the extent of text-only LLM’s knowledge retention through integrated vision and language learning. It is apparent that the knowledge about artworks is compromised in other LVLMs due to the integrated learning of vision and language. On the other hand, Qwen-VL-Chat achieves a 10% performance boost in titled settings, signaling successful synthesis of vision and language knowledge. Few-shot vs. Fine-tuning The results in Table 4 show that Fine-tuning outperforms both the pure model and Few-shot settings. While Few-shot settings show some improvement with an increasing number of shots, they do not match the performance of Fine-tuning. Considering the average token length of 174 in the reference sentences, the reduced token length in Few-shot settings suggests a focus on generating necessary terms but may result in less comprehensive explanations. In contrast, Fine-tuning allows the model to learn both specific vocabulary and the format for generating coherent explanations, leading to better performance. However, the lack of significant differences between Seen and Unseen settings in Fine-tuning indicates that effective alignment of visual and textual information (the knowledge originally held by the LLM) requires simultaneous learning of images and their descriptions. 6 Conclusion We introduced a new task, artwork explanation generation, and its dataset and metrics to quantitatively evaluate the artistic knowledge comprehension and application. Using LVLMs, we assessed their retention and utilization of artworks knowledge from base LLMs, with or without artwork titles. Our findings indicate that while LVLMs maintain much of the artistic knowledge from their LLM counterparts, they do slightly lose some in practice. Furthermore, the challenges in generating text solely based on visual inputs clearly show a significant dependency on text-based cues. 709 Limitations Our research elucidates the intricacies of integrating visual and language abilities within LVLMs, yet it encounters specific limitations that define the scope of our findings. Data Source A principal limitation is our reliance on the diverse authorship and open editing model of Wikipedia as our data source. Variations in detail, writing style, and information density across entries may lead to inconsistencies in the dataset, potentially skewing model performance and affecting the universality of our conclusions. Additionally, we did not filter out generic entities such as "artwork" to avoid bias. However, more specific entity filtering may improve dataset relevance to artworks. Moreover, relying on Wikipedia limits our dataset to well-known artworks, omitting lesser-known but culturally significant works not featured on the platform, thereby missing a broader spectrum of artistic significance. Human Evaluation While our current study does not include human evaluations, it is crucial to assess whether the models can provide insights beyond Wikipedia and evaluate LVLM explanations from an expert perspective for real-world applications. Another LVLM-based image explanation task, image review generation (Saito et al., 2024) actually conducts human evaluation by hiring nonexpert annotators. Unlike their work, our task requires expert knowledge to judge the quality of generated explanations. Thus, due to the cost perspective, evaluating generated explanations across various genres by experts is a left problem. Integration of Vision and Language Representations Simultaneously, our study identifies a crucial limitation in the process of integrating Vision Encoders with LLMs, particularly highlighting the models’ reliance on textual cues to generate text from visual inputs. Kamigaito et al. (2023) report the same issue when predicting infoboxes, which are kinds of summaries for Wikipedia articles. This observation underscores the difficulty of retaining language knowledge during the integration, a problem we acknowledge without offering concrete solutions. This gap clearly shows the pressing need for future research to not only further investigate these issues but also to develop innovative methodologies that ensure the preservation of language knowledge amidst the integration of visual and language abilities. Insuffcient Artwork Knowledge in LVLMs The limited improvement in entity coverage by LoRA indicates the difficulty of injecting artwork knoweldge into LVLMs. As a solution, we can consider injecting external knowledge into LVLMs. Chen et al. (2024b) introduce using knowledge graphs (KGs) as a solution. However, KGs are commonly sparse and we may need to complete them by KG completion (KGC), a task to complete missing links in KGs. Traditional KGC methods (Nickel et al., 2011; Bordes et al., 2013) are emperically (Ruffinelli et al., 2020; Ali et al., 2021) and theoretically (Kamigaito and Hayashi, 2021, 2022a,b; Feng et al., 2024) investigated in detail, and thus, these are solid whereas the pre-trainedbased KGC models can outperform them (Wang et al., 2022). On the other hand, Sakai et al. (2023) point out the leakage problem of the pre-trainedbased KGC models and the actual performance of them is uncertain. Retrieval-Augmented Generation (RAG) (Lewis et al., 2020) can be another solution if LVLMs can accept lengthy input (Zong et al., 2024). Ethical Considerations In our study, we meticulously curated our dataset derived from English Wikipedia. During the data creation phase, we individually inspected each extracted image, carefully removing those clearly unsuitable for public disclosure, ensuring no inappropriate images were included. Additionally, while English Wikipedia’s editors actively eliminate unnecessarily offensive content to compile an encyclopedia, as outlined on their official pages regarding offensive material5, bias in sources, and the use of biased or opinionated sources6 7, it is acknowledged that English Wikipedia allows the inclusion of biased information sources. Consequently, our dataset might also reflect the inherent biases present in the original English Wikipedia content. Note that in this work, we used an AI assistant tool, ChatGPT, for coding support. Acknowledgments This work was supported by JSPS KAKENHI Grant Numbers JP21K17801, JP23H03458. 5https://en.wikipedia.org/wiki/Wikipedia: Offensive_material 6https://en.wikipedia.org/wiki/Wikipedia: Neutral_point_of_view#Bias_in_sources 7https://en.wikipedia.org/wiki/Wikipedia: Reliable_sources#Biased_or_opinionated_sources 710 References 01.AI. 2023. Yi. https://github.com/01-ai/Yi. Harsh Agrawal, Karan Desai, Yufei Wang, Xinlei Chen, Rishabh Jain, Mark Johnson, Dhruv Batra, Devi Parikh, Stefan Lee, and Peter Anderson. 2019. nocaps: novel object captioning at scale. In 2019 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE. Mehdi Ali, Max Berrendorf, Charles Tapley Hoyt, Laurent Vermue, Sahand Sharifzadeh, Volker Tresp, and Jens Lehmann. 2021. PyKEEN 1.0: A Python Library for Training and Evaluating Knowledge Graph Embeddings. Journal of Machine Learning Research, 22(82):1–6. Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, and Tianhang Zhu. 2023a. Qwen technical report. arXiv preprint arXiv:2309.16609. Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, and Jingren Zhou. 2023b. Qwen-vl: A versatile vision-language model for understanding, localization, text reading, and beyond. arXiv preprint arXiv:2308.12966. Shuai Bai, Shusheng Yang, Jinze Bai, Peng Wang, Xingxuan Zhang, Junyang Lin, Xinggang Wang, Chang Zhou, and Jingren Zhou. 2023c. Touchstone: Evaluating vision-language models by language models. Zechen Bai, Yuta Nakashima, and Noa Garcia. 2021. Explain me the painting: Multi-topic knowledgeable art description generation. Antoine Bordes, Nicolas Usunier, Alberto GarciaDuran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multirelational data. In Advances in Neural Information Processing Systems, volume 26. Curran Associates, Inc. Minwoo Byeon, Beomhee Park, Haecheon Kim, Sungjun Lee, Woonhyuk Baek, and Saehoon Kim. 2022. Coyo-700m: Image-text pair dataset. https: //github.com/kakaobrain/coyo-dataset. Eva Cetinic. 2021. Iconographic image captioning for artworks. In Pattern Recognition. ICPR International Workshops and Challenges: Virtual Event, January 10–15, 2021, Proceedings, Part III, pages 502–516. Springer. Soravit Changpinyo, Piyush Sharma, Nan Ding, and Radu Soricut. 2021. Conceptual 12M: Pushing webscale image-text pre-training to recognize long-tail visual concepts. In CVPR. Liang Chen, Haozhe Zhao, Tianyu Liu, Shuai Bai, Junyang Lin, Chang Zhou, and Baobao Chang. 2024a. An image is worth 1/2 tokens after layer 2: Plug-andplay inference acceleration for large vision-language models. Lin Chen, Jinsong Li, Xiaoyi Dong, Pan Zhang, Conghui He, Jiaqi Wang, Feng Zhao, and Dahua Lin. 2023. Sharegpt4v: Improving large multi-modal models with better captions. Zhuo Chen, Yichi Zhang, Yin Fang, Yuxia Geng, Lingbing Guo, Xiang Chen, Qian Li, Wen Zhang, Jiaoyan Chen, Yushan Zhu, Jiaqi Li, Xiaoze Liu, Jeff Z. Pan, Ningyu Zhang, and Huajun Chen. 2024b. Knowledge graphs meet multi-modal learning: A comprehensive survey. Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. 2023. Vicuna: An opensource chatbot impressing gpt-4 with 90%* chatgpt quality. Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Alex Castro-Ros, Marie Pellat, Kevin Robinson, Dasha Valter, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei. 2022. Scaling instruction-finetuned language models. Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, and Steven Hoi. 2023. Instructblip: Towards general-purpose vision-language models with instruction tuning. Tim Dettmers, Mike Lewis, Younes Belkada, and Luke Zettlemoyer. 2022a. GPT3.int8(): 8-bit matrix multiplication for transformers at scale. In Advances in Neural Information Processing Systems. Tim Dettmers, Mike Lewis, Younes Belkada, and Luke Zettlemoyer. 2022b. Llm.int8(): 8-bit matrix multiplication for transformers at scale. arXiv preprint arXiv:2208.07339. Danny Driess, Fei Xia, Mehdi S. M. Sajjadi, Corey Lynch, Aakanksha Chowdhery, Brian Ichter, Ayzaan Wahid, Jonathan Tompson, Quan Vuong, Tianhe Yu, Wenlong Huang, Yevgen Chebotar, Pierre Sermanet, Daniel Duckworth, Sergey Levine, Vincent Vanhoucke, Karol Hausman, Marc Toussaint, Klaus Greff, Andy Zeng, Igor Mordatch, and Pete Florence. 711 2023. Palm-e: An embodied multimodal language model. In arXiv preprint arXiv:2303.03378. Jean-Baptiste Alayrac et al. 2022. Flamingo: a visual language model for few-shot learning. Xincan Feng, Hidetaka Kamigaito, Katsuhiko Hayashi, and Taro Watanabe. 2024. Unified interpretation of smoothing methods for negative sampling loss functions in knowledge graph embedding. Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, et al. 2023. Mme: A comprehensive evaluation benchmark for multimodal large language models. Noa Garcia, Chentao Ye, Zihua Liu, Qingtao Hu, Mayu Otani, Chenhui Chu, Yuta Nakashima, and Teruko Mitamura. 2020. A dataset and baselines for visual question answering on art. In Proceedings of the European Conference in Computer Vision Workshops. Danna Gurari, Qing Li, Abigale Stangl, Anhong Guo, Chi Lin, Kristen Grauman, Jiebo Luo, and Jeffrey P. Bigham. 2018. Vizwiz grand challenge: Answering visual questions from blind people. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3608–3617. Sasun Hambardzumyan, Abhinav Tuli, Levon Ghukasyan, Fariz Rahman, Hrant Topchyan, David Isayan, Mikayel Harutyunyan, Tatevik Hakobyan, Ivo Stranic, and Davit Buniatyan. 2023. Deep lake: a lakehouse for deep learning. D. A. Hudson and C. D. Manning. 2019. Gqa: A new dataset for real-world visual reasoning and compositional question answering. In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 6693–6702, Los Alamitos, CA, USA. IEEE Computer Society. Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. 2023. Mistral 7b. Kushal Kafle, Scott Cohen, Brian Price, and Christopher Kanan. 2018. Dvqa: Understanding data visualizations via question answering. In CVPR. Hidetaka Kamigaito and Katsuhiko Hayashi. 2021. Unified interpretation of softmax cross-entropy and negative sampling: With case study for knowledge graph embedding. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5517–5531, Online. Association for Computational Linguistics. Hidetaka Kamigaito and Katsuhiko Hayashi. 2022a. Comprehensive analysis of negative sampling in knowledge graph representation learning. In Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 10661–10675. PMLR. Hidetaka Kamigaito and Katsuhiko Hayashi. 2022b. Erratum to: Comprehensive analysis of negative sampling in knowledge graph representation learning. Hidetaka Kamigaito, Katsuhiko Hayashi, and Taro Watanabe. 2023. Table and image generation for investigating knowledge of entities in pre-trained vision and language models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 1904–1917, Toronto, Canada. Association for Computational Linguistics. Kento Kawaharazuka, Tatsuya Matsushima, Andrew Gambardella, Jiaxian Guo, Chris Paxton, and Andy Zeng. 2024. Real-world robot applications of foundation models: A review. Geewook Kim, Teakgyu Hong, Moonbin Yim, JeongYeon Nam, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, and Seunghyun Park. 2022. Ocr-free document understanding transformer. In European Conference on Computer Vision (ECCV). Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A. Shamma, Michael S. Bernstein, and Li Fei-Fei. 2017. Visual genome: Connecting language and vision using crowdsourced dense image annotations. International Journal of Computer Vision, 123:32–73. Patrick S. H. Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. 2020. Retrieval-augmented generation for knowledge-intensive NLP tasks. CoRR, abs/2005.11401. Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, and Ying Shan. 2023a. Seed-bench: Benchmarking multimodal llms with generative comprehension. Haonan Li, Martin Tomko, Maria Vasardani, and Timothy Baldwin. 2022a. MultiSpanQA: A dataset for multi-span question answering. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1250–1260, Seattle, United States. Association for Computational Linguistics. Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. 2023b. Blip-2: Bootstrapping language-image pretraining with frozen image encoders and large language models. 712 Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. 2022b. Blip: Bootstrapping language-image pretraining for unified vision-language understanding and generation. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81, Barcelona, Spain. Association for Computational Linguistics. Tsung-Yi Lin, Michael Maire, Serge J. Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In European Conference on Computer Vision. F. Liu, T. Xiang, T. M. Hospedales, W. Yang, and C. Sun. 2018. ivqa: Inverse visual question answering. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 8611–8619, Los Alamitos, CA, USA. IEEE Computer Society. Haotian Liu, Chunyuan Li, Yuheng Li, and Yong Jae Lee. 2023a. Improved baselines with visual instruction tuning. Haotian Liu, Chunyuan Li, Yuheng Li, Bo Li, Yuanhan Zhang, Sheng Shen, and Yong Jae Lee. 2024. Llavanext: Improved reasoning, ocr, and world knowledge. Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. 2023b. Visual instruction tuning. Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, et al. 2023c. Mmbench: Is your multi-modal model an all-around player? Shayne Longpre, Le Hou, Tu Vu, Albert Webson, Hyung Won Chung, Yi Tay, Denny Zhou, Quoc V. Le, Barret Zoph, Jason Wei, and Adam Roberts. 2023. The flan collection: Designing data and methods for effective instruction tuning. In Proceedings of the 40th International Conference on Machine Learning, ICML’23. JMLR.org. Pan Lu, Swaroop Mishra, Tony Xia, Liang Qiu, KaiWei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter Clark, and Ashwin Kalyan. 2022. Learn to explain: Multimodal reasoning via thought chains for science question answering. In The 36th Conference on Neural Information Processing Systems (NeurIPS). Pan Lu, Liang Qiu, Jiaqi Chen, Tony Xia, Yizhou Zhao, Wei Zhang, Zhou Yu, Xiaodan Liang, and Song-Chun Zhu. 2021. Iconqa: A new benchmark for abstract diagram understanding and visual language reasoning. In The 35th Conference on Neural Information Processing Systems (NeurIPS) Track on Datasets and Benchmarks. K. Marino, M. Rastegari, A. Farhadi, and R. Mottaghi. 2019. Ok-vqa: A visual question answering benchmark requiring external knowledge. In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 3190–3199, Los Alamitos, CA, USA. IEEE Computer Society. Ahmed Masry, Xuan Long Do, Jia Qing Tan, Shafiq Joty, and Enamul Hoque. 2022. ChartQA: A benchmark for question answering about charts with visual and logical reasoning. In Findings of the Association for Computational Linguistics: ACL 2022, pages 2263– 2279, Dublin, Ireland. Association for Computational Linguistics. Anand Mishra, Shashank Shekhar, Ajeet Kumar Singh, and Anirban Chakraborty. 2019. Ocr-vqa: Visual question answering by reading text in images. In 2019 International Conference on Document Analysis and Recognition (ICDAR), pages 947–952. Subhabrata Mukherjee, Arindam Mitra, Ganesh Jawahar, Sahaj Agarwal, Hamid Palangi, and Ahmed Awadallah. 2023. Orca: Progressive learning from complex explanation traces of gpt-4. Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. 2011. A three-way model for collective learning on multi-relational data. In Proceedings of the 28th International Conference on International Conference on Machine Learning, ICML’11, page 809–816, Madison, WI, USA. Omnipress. OpenAI, :, Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, Red Avila, Igor Babuschkin, Suchir Balaji, Valerie Balcom, Paul Baltescu, Haiming Bao, Mo Bavarian, Jeff Belgum, Irwan Bello, Jake Berdine, Gabriel Bernadett-Shapiro, Christopher Berner, Lenny Bogdonoff, Oleg Boiko, Madelaine Boyd, Anna-Luisa Brakman, Greg Brockman, Tim Brooks, Miles Brundage, Kevin Button, Trevor Cai, Rosie Campbell, Andrew Cann, Brittany Carey, Chelsea Carlson, Rory Carmichael, Brooke Chan, Che Chang, Fotis Chantzis, Derek Chen, Sully Chen, Ruby Chen, Jason Chen, Mark Chen, Ben Chess, Chester Cho, Casey Chu, Hyung Won Chung, Dave Cummings, Jeremiah Currier, Yunxing Dai, Cory Decareaux, Thomas Degry, Noah Deutsch, Damien Deville, Arka Dhar, David Dohan, Steve Dowling, Sheila Dunning, Adrien Ecoffet, Atty Eleti, Tyna Eloundou, David Farhi, Liam Fedus, Niko Felix, Simón Posada Fishman, Juston Forte, Isabella Fulford, Leo Gao, Elie Georges, Christian Gibson, Vik Goel, Tarun Gogineni, Gabriel Goh, Rapha Gontijo-Lopes, Jonathan Gordon, Morgan Grafstein, Scott Gray, Ryan Greene, Joshua Gross, Shixiang Shane Gu, Yufei Guo, Chris Hallacy, Jesse Han, Jeff Harris, Yuchen He, Mike Heaton, Johannes Heidecke, Chris Hesse, Alan Hickey, Wade Hickey, Peter Hoeschele, Brandon Houghton, Kenny Hsu, Shengli Hu, Xin Hu, Joost Huizinga, Shantanu Jain, Shawn Jain, Joanne Jang, Angela Jiang, Roger Jiang, Haozhun Jin, Denny Jin, Shino Jomoto, Billie Jonn, Heewoo Jun, Tomer Kaftan, Łukasz Kaiser, Ali Kamali, Ingmar Kanitscheider, Nitish Shirish Keskar, Tabarak Khan, Logan Kilpatrick, Jong Wook 713 Kim, Christina Kim, Yongjik Kim, Hendrik Kirchner, Jamie Kiros, Matt Knight, Daniel Kokotajlo, Łukasz Kondraciuk, Andrew Kondrich, Aris Konstantinidis, Kyle Kosic, Gretchen Krueger, Vishal Kuo, Michael Lampe, Ikai Lan, Teddy Lee, Jan Leike, Jade Leung, Daniel Levy, Chak Ming Li, Rachel Lim, Molly Lin, Stephanie Lin, Mateusz Litwin, Theresa Lopez, Ryan Lowe, Patricia Lue, Anna Makanju, Kim Malfacini, Sam Manning, Todor Markov, Yaniv Markovski, Bianca Martin, Katie Mayer, Andrew Mayne, Bob McGrew, Scott Mayer McKinney, Christine McLeavey, Paul McMillan, Jake McNeil, David Medina, Aalok Mehta, Jacob Menick, Luke Metz, Andrey Mishchenko, Pamela Mishkin, Vinnie Monaco, Evan Morikawa, Daniel Mossing, Tong Mu, Mira Murati, Oleg Murk, David Mély, Ashvin Nair, Reiichiro Nakano, Rajeev Nayak, Arvind Neelakantan, Richard Ngo, Hyeonwoo Noh, Long Ouyang, Cullen O’Keefe, Jakub Pachocki, Alex Paino, Joe Palermo, Ashley Pantuliano, Giambattista Parascandolo, Joel Parish, Emy Parparita, Alex Passos, Mikhail Pavlov, Andrew Peng, Adam Perelman, Filipe de Avila Belbute Peres, Michael Petrov, Henrique Ponde de Oliveira Pinto, Michael, Pokorny, Michelle Pokrass, Vitchyr Pong, Tolly Powell, Alethea Power, Boris Power, Elizabeth Proehl, Raul Puri, Alec Radford, Jack Rae, Aditya Ramesh, Cameron Raymond, Francis Real, Kendra Rimbach, Carl Ross, Bob Rotsted, Henri Roussez, Nick Ryder, Mario Saltarelli, Ted Sanders, Shibani Santurkar, Girish Sastry, Heather Schmidt, David Schnurr, John Schulman, Daniel Selsam, Kyla Sheppard, Toki Sherbakov, Jessica Shieh, Sarah Shoker, Pranav Shyam, Szymon Sidor, Eric Sigler, Maddie Simens, Jordan Sitkin, Katarina Slama, Ian Sohl, Benjamin Sokolowsky, Yang Song, Natalie Staudacher, Felipe Petroski Such, Natalie Summers, Ilya Sutskever, Jie Tang, Nikolas Tezak, Madeleine Thompson, Phil Tillet, Amin Tootoonchian, Elizabeth Tseng, Preston Tuggle, Nick Turley, Jerry Tworek, Juan Felipe Cerón Uribe, Andrea Vallone, Arun Vijayvergiya, Chelsea Voss, Carroll Wainwright, Justin Jay Wang, Alvin Wang, Ben Wang, Jonathan Ward, Jason Wei, CJ Weinmann, Akila Welihinda, Peter Welinder, Jiayi Weng, Lilian Weng, Matt Wiethoff, Dave Willner, Clemens Winter, Samuel Wolrich, Hannah Wong, Lauren Workman, Sherwin Wu, Jeff Wu, Michael Wu, Kai Xiao, Tao Xu, Sarah Yoo, Kevin Yu, Qiming Yuan, Wojciech Zaremba, Rowan Zellers, Chong Zhang, Marvin Zhang, Shengjia Zhao, Tianhao Zheng, Juntang Zhuang, William Zhuk, and Barret Zoph. 2023. Gpt-4 technical report. OpenAI. 2023. Gpt-4v(ision) system card. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, and Furu Wei. 2023. Kosmos-2: Grounding multimodal large language models to the world. ArXiv, abs/2306.14824. Daniel Ruffinelli, Samuel Broscheit, and Rainer Gemulla. 2020. You CAN teach an old dog new tricks! on training knowledge graph embeddings. In International Conference on Learning Representations. Shigeki Saito, Kazuki Hayashi, Yusuke Ide, Yusuke Sakai, Kazuma Onishi, Toma Suzuki, Seiji Gobara, Hidetaka Kamigaito, Katsuhiko Hayashi, and Taro Watanabe. 2024. Evaluating image review ability of vision language models. Yusuke Sakai, Hidetaka Kamigaito, Katsuhiko Hayashi, and Taro Watanabe. 2023. Does pre-trained language model actually infer unseen links in knowledge graph completion? CoRR, abs/2311.09109. Alex Fang Jonathan Hayase Georgios Smyrnis Thao Nguyen Ryan Marten Mitchell Wortsman Dhruba Ghosh Jieyu Zhang Eyal Orgad Rahim Entezari Giannis Daras Sarah Pratt Vivek Ramanujan Yonatan Bitton Kalyani Marathe Stephen Mussmann Richard Vencu Mehdi Cherti Ranjay Krishna Pang Wei Koh Olga Saukh Alexander Ratner Shuran Song Hannaneh Hajishirzi Ali Farhadi Romain Beaumont Sewoong Oh Alex Dimakis Jenia Jitsev Yair Carmon Vaishaal Shankar Ludwig Schmidt Samir Yitzhak Gadre, Gabriel Ilharco. 2023. Datacomp: In search of the next generation of multimodal datasets. arXiv preprint arXiv:2304.14108. Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, Patrick Schramowski, Srivatsa Kundurthy, Katherine Crowson, Ludwig Schmidt, Robert Kaczmarczyk, and Jenia Jitsev. 2022. Laion5b: An open large-scale dataset for training next generation image-text models. Dustin Schwenk, Apoorv Khandelwal, Christopher Clark, Kenneth Marino, and Roozbeh Mottaghi. 2022. A-OKVQA: A Benchmark for Visual Question Answering Using World Knowledge, pages 146–162. Oleksii Sidorov, Ronghang Hu, Marcus Rohrbach, and Amanpreet Singh. 2020. Textcaps: a dataset for image captioning with reading comprehension. Amanpreet Singh, Vivek Natarajan, Meet Shah, Yu Jiang, Xinlei Chen, Dhruv Batra, Devi Parikh, and Marcus Rohrbach. 2019. Towards vqa models that can read. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 8309–8318. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, 714 Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023. Llama 2: Open foundation and finetuned chat models. Liang Wang, Wei Zhao, Zhuoyu Wei, and Jingming Liu. 2022. SimKGC: Simple contrastive knowledge graph completion with pre-trained language models. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4281–4294, Dublin, Ireland. Association for Computational Linguistics. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. Dejing Xu, Zhou Zhao, Jun Xiao, Fei Wu, Hanwang Zhang, Xiangnan He, and Yueting Zhuang. Video question answering via gradually refined attention over appearance and motion. In ACM Multimedia. Tomas Yago, Le Hou, Chen-Ping Yu, Minh Hoai, and Dimitris Samaras. 2016. Large-scale training of shadow detectors with noisily-annotated shadow examples. volume 9910, pages 816–832. Qinghao Ye, Haiyang Xu, Guohai Xu, Jiabo Ye, Ming Yan, Yiyang Zhou, Junyang Wang, Anwen Hu, Pengcheng Shi, Yaya Shi, Chaoya Jiang, Chenliang Li, Yuanhong Xu, Hehong Chen, Junfeng Tian, Qi Qian, Ji Zhang, and Fei Huang. 2023a. mplugowl: Modularization empowers large language models with multimodality. Qinghao Ye, Haiyang Xu, Jiabo Ye, Ming Yan, Anwen Hu, Haowei Liu, Qi Qian, Ji Zhang, Fei Huang, and Jingren Zhou. 2023b. mplug-owl2: Revolutionizing multi-modal large language model with modality collaboration. Licheng Yu, Patric Poirson, Shan Yang, Alexander Berg, and Tamara Berg. 2016. Modeling context in referring expressions. Xiang Yue, Yuansheng Ni, Kai Zhang, Tianyu Zheng, Ruoqi Liu, Ge Zhang, Samuel Stevens, Dongfu Jiang, Weiming Ren, Yuxuan Sun, Cong Wei, Botao Yu, Ruibin Yuan, Renliang Sun, Ming Yin, Boyuan Zheng, Zhenzhu Yang, Yibo Liu, Wenhao Huang, Huan Sun, Yu Su, and Wenhu Chen. 2023. Mmmu: A massive multi-discipline multimodal understanding and reasoning benchmark for expert agi. Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, Todor Mihaylov, Myle Ott, Sam Shleifer, Kurt Shuster, Daniel Simig, Punit Singh Koura, Anjali Sridhar, Tianlu Wang, and Luke Zettlemoyer. 2022a. Opt: Open pre-trained transformer language models. Tianyi Zhang*, Varsha Kishore*, Felix Wu*, Kilian Q. Weinberger, and Yoav Artzi. 2020. Bertscore: Evaluating text generation with bert. In International Conference on Learning Representations. Yao Zhang, Haokun Chen, Ahmed Frikha, Yezi Yang, Denis Krompass, Gengyuan Zhang, Jindong Gu, and Volker Tresp. 2022b. Cl-crossvqa: A continual learning benchmark for cross-domain visual question answering. Yongshuo Zong, Ondrej Bohdal, and Timothy Hospedales. 2024. Vl-icl bench: The devil in the details of benchmarking multimodal in-context learning. arXiv preprint arXiv:2403.13164. 715 A Supplemental Results A.1 Detailed Evaluation of LVLMs in ’Seen’ Data Settings Table 8 presents the results of Language-Vision Learning Models (LVLMs) including ’seen’ settings, with bold type highlighting the highest score for each metric within each group. In this study, we assessed the generalizability of data and the precision of models fine-tuned on ’seen’ and ’unseen’ data during their training phase to ascertain if the fine-tuning process enhanced the models’ accuracy for images encountered during training. Despite the images being part of the training dataset, with sections meticulously segregated to prevent data leakage, our validation revealed no significant differences in accuracy between ’seen’ and ’unseen’ settings. This finding confirms the general applicability of the data and suggests that simply viewing images, without integrating them with relevant contextual knowledge, does not inherently contribute to accuracy improvement. This highlights the importance of a holistic learning approach where images are paired with pertinent information to truly boost the performance of the models. Furthermore it is generally impractical to create datasets that combine images corresponding to the vast amounts of text data seen during the training of LLMs and to acquire these through additional integrated learning. Additionally, during the integrated learning process from LLM to LVLM, the focus is on learning pairs of individual images and their descriptions. To develop the ability to individually recognize knowledge objects and explain them based on that recognition, as well as to understand the relationships between objects and generate comprehensive explanations, it is considered necessary to use enhancement methods such as RAG and new integrated learning techniques for LVLMs. A.2 Extended Analysis of Additional LVLMs In our research, we expanded our experimental investigation beyond the models outlined in the primary section to include Blip2 (Li et al., 2023b), mPLUG_Owl (Ye et al., 2023a), LLaVA-NeXT (Mistral) (Liu et al., 2024), LLaVA-1.5 (Liu et al., 2023a,b), InstructBlip (Dai et al., 2023), and Yi-6B (01.AI, 2023), integrating image and language in a manner similar to the initially described models. Utilizing the same experimental framework as the initial tests, we conducted an thorough assessment. The results, as outlined in Table 9, revealed that these additional models did not exceed the accuracy levels of those featured in the main analysis (refer to Section 5). Additionally, a comparative examination of configurations with and without titles showed a uniform decline in efficacy, emphasizing the difficulty of deriving knowledge and translating it into explanatory text generation based purely on image data. A.3 Detailed Performance Metrics for Base LLMs with Title Context Table 10 presents the results of an evaluation involving the base LLM models of the Language-Vision Learning Models (LVLMs) discussed in Tables 3 and 9. This evaluation additionally included tests on base models such as FLAN-T5-XL(Chung et al., 2022), FLAN-T5-XXL, OPT(Zhang et al., 2022a), LLaMA(Touvron et al., 2023) Mistral(Jiang et al., 2023), and Yi-6B, which were not featured in the main analysis. Since Language Models (LMs) are incapable of processing image information, the evaluation was confined to the ’With Title’ setting that incorporates textual information. Within this context, GPT-4 showcased superior performance across all tested configurations, with Mistral, Vicuna-13B, and LlaMA2 also demonstrating strong results. Consistent with the data presented in Table 3, the base model for LLaVA-NeXT (Yi-34B) yielded output sequences with excessively token lengths compared to its counterparts, mirroring the behavior of its LVLM version. This tendency for producing longer output is illustrated when compared with other models (as depicted in Figure 3 ). Furthermore, when examining the accuracy of the LVLMs tested in Table 9 alongside the base models in relation to our task proposal, there is a discernible decline in precision across nearly all models. Qwen is the exception, which highlights the nuanced challenges in effectively merging image and textual data. This complexity stands as a pivotal challenge for the evolution of sophisticated LVLMs. B Title generation In our task, the titles of artworks are a crucial element of knowledge related to the artworks. To maintain the integrity of the analysis between the settings with and without titles setting, we intentionally omitted titles from entity recognition. However, we recognized the need to understand the performance of models in generating titles of 716 artworks based solely on visual information. Therefore, We conducted an additional experiment in which we presented the models with the prompt "Please answer the title of this artwork" along with 963 images from the "Unseen" test set and evaluated the accuracy of title generation under two settings: Exact and Partial. Tables 11, 12 and 13 display the accuracy results of the main models and those from additional experiments, respectively. The results showed that GPT-4-Vision achieved the highest performance with an exact match setting at 8.97%, followed by Qwen-VL-Chat (FT) and Qwen-VL-Chat with good performances. Other models scored 2% or less, highlighting the difficulty of generating titles. Additionally, none of the LLaVA-NeXT models were able to correctly generate a single title. Furthermore, Table 14 shows the actual artwork titles generated by the top five models with the best accuracy in the exact match setting. The "Rank" in the table is used to distribute the dataset evenly at the time of its creation (refer to Section 3), between famous and less famous paintings, to prevent bias. From the table, we can infer that a higher proportion of famous artworks with higher ranks were generated, indicating that the models have a better grasp of more famous artworks. C Evaluation Metrics Formulation This section elaborates on the evaluation metrics proposed in Section 3.2 using mathematical expressions. An explanation consisting of n sentences generated by the model is denoted as G = {g1, · · · , gn}, and a reference explanation consisting of m sentences is denoted as R = {r1, · · · , rm}. The function Entity(·) is defined to extract entities contained in the input text. The notation |G| represents the total number of tokens in the generated explanation, and |R| represents the total number of tokens in the reference explanation. Entity Coverage (EC) is calculated as follows: EC(G, R) = Cov(G, R) (1) Here, Cov(G, R) is a function returning the proportion of entities in R that are covered by G. For partial matches, the Lowest Common Subsequence (LCS) is employed to calculate the longest matching length ratio in the generated explanation relative to the length of the reference entity. Entity F1 (EF1) is computed as follows: EF1 = 2 × P × R P + R (2) P = P ei∈Entity(G) Countclip(ei, G, R) P ej∈Entity(G) #(ej, G) (3) R = P ei∈Entity(R) Countclip(ei, G, R) P ej∈Entity(R) #(ej, R) , (4) where #(ej, G), #(ej, R) are functions that count the occurrences of entity ej in G and R respectively, and Countclip(ei, G, R) returns the lesser frequency of occurrence of ei in either G or R. Entity Cooccurrence (ECooc) is calculated using BP from equation (6) as follows: ECooc(G, R) =BP(G, R) × Cov(Co(G), Co(R)), (5) where BP(G, R) is given by: BP(G, R) = exp(max(0.0, |G| |R| −1)) (6) and function Co(·) returns pairs of co-occurring entities within a context window comprising a sentence and its adjacent n sentences. Sentence segmentation was performed using the nltk sentence splitter for this purpose.8 D Details of experimental setting D.1 LVLM details Model Base Model HuggingFace Name/OpenAI API BLIP2 (OPT) OPT Salesforce/blip2-opt-6.7b BLIP2 (FLAN-T5-XL) FLAN-T5-XL Salesforce/blip2-flan-t5-xl BLIP2 (FLAN-T5-XXL) FLAN-T5-XXL Salesforce/blip2-flan-t5-xxl InstructBLIP (FLAN-T5-XL) FLAN-T5-XL Salesforce/instructblip-flan-t5-xl InstructBLIP (FLAN-T5-XXL) FLAN-T5-XXL Salesforce/instructblip-flan-t5-xxl InstructBLIP (Vicuna-7B) Vicuna-7B Salesforce/instructblip-vicuna-7b InstructBLIP (Vicuna-13B) Vicuna-13B Salesforce/instructblip-vicuna-13b Yi-VL-6B Yi-6B-Chat 01-ai/Yi-VL-6B mPLUG-Owl LLaMA MAGAer13/mplug-owl-llama-7b mPLUG-Owl2 LLaMA2-7B MAGAer13/mplug-owl2-llama2-7b LLaVA-1.5 Vicuna-13B liuhaotian/llava-v1.5-13b LLaVA-NeXT (Vicuna-7B) Vicuna-7B liuhaotian/llava-v1.6-vicuna-7b LLaVA-NeXT (Vicuna-13B) Vicuna-13B liuhaotian/llava-v1.6-vicuna-13b LLaVA-Next (Mistral) Mistral liuhaotian/llava-v1.6-mistral-7b LLaVA-NeXT (Yi-34B) Yi-34B liuhaotian/llava-v1.6-34b Qwen-VL-Chat Qwen Qwen/Qwen-VL-Chat GPT-4-Vision gpt-4-1106-vision-preview 8Sentence segmentation was performed using the NLTK sentence splitter. 717 D.2 LLM details Model HuggingFace Name FLAN-T5-XL google/flan-t5-xl FLAN-T5-XXL google/flan-t5-xxl OPT facebook/opt-6.7b LLaMA openlm-research/open_llama_7b LLaMA2 meta-llama/Llama-2-7b Mistral mistralai/Mistral-7B-Instruct-v0.2 Vicuna-7B lmsys/vicuna-7b-v1.5 Vicuna-13B lmsys/vicuna-13b-v1.5 Qwen-Chat Qwen/Qwen-7B-Chat Yi-6B 01-ai/Yi-6B Yi-34B 01-ai/Yi-34B GPT-4 gpt-4-1106-preview D.3 Fine tunning and Inference setting Hyper Parameter Value torch_dtype bfloat16 seed 42 max length 2048 warmup ratio 0.01 learning rate 1e-5 batch size 4 epoch 1 lora r 64 lora alpha 16 lora dropout 0.05 lora target modules c_attn, attn.c_proj, w1, w2 Table 6: The hyper-parameters used in the experiment, and others, were set to default settings. The implementation used Transformers (Wolf et al., 2020) and bitsandbytes (Dettmers et al., 2022b). In this study, to ensure a fair comparison of performance across multiple models, all experiments were conducted on a single NVIDIA RTX 6000 Ada GPU, with 8-bit quantization utilized for model generation. However, due to resource constraints, LLaVA-NeXT (Yi-34B-Chat) model was loaded and inferred in 4-bit mode. To standardize the length of tokens generated across all models, the maximum token length was set to 1024. The same settings were applied to each model for performance comparison purposes. D.4 Training Datasets Table 16 lists the datasets employed to train the models addressed in this study. E Details of our created dataset E.1 Dataset section distribution Table 7 provides a comprehensive breakdown of various types of sections within the dataset, along with their frequency counts. In designing the test set for the "seen" setting, we meticulously considered the distribution of these sections. Through an analysis of the frequency of each section type, we managed to evenly split the data. This strategic approach ensured that the test set was constructed with a balanced representation of each section type, aiming for a more equitable and thorough evaluation process. Due to this methodology, the division of the test set into "seen" and "unseen" portions was based on the distribution of section types, rather than the number of images. Consequently, the number of images in the "seen" and "unseen" parts of the test set may not be equal (refer to Table 2). This was a deliberate choice to prioritize a balanced representation of section types over an equal count of images, enhancing the relevance and fairness of the evaluation process. E.2 Omitted sections The following sections have been omitted from this document: • References • See also • External links • Sources • Further reading • Bibliography • Gallery • Footnotes • Notes • References Sources • Bibliography (In Spanish) • Bibliography (In Italian) • Bibliography (In German) • Bibliography (In French) • Images • Links • List • Notes and references • List by location 718 These sections were deemed unsuitable for the task of generating descriptions of artwork in this study and were therefore removed. E.3 Train Templates As shown in Table 15, to ensure diversity in training, we utilized seven templates to construct the instruction-based training set. We initially created 49 templates by combining seven base sentences with seven verbs such as explore, explain, and discuss. During experimental evaluations, the models were tested with these 49 templates. We adopted the top seven templates that resulted in the highest accuracy and best adherence to instructions by the models. E.4 Train Dataset Example As shown in Figure 5 and 6, we adopted the format for fine-tuning Qwen (Bai et al., 2023a) and modified the template presented in E.3 into the form of figures. This format was used for model training and dataset publication. E.5 Entity Distribution Figures 7 and 8 present the entity distribution within our datasets. The minimal difference in data distribution between seen and unseen cases suggests that the partitioning method described in Step 3 of Section 4 is effective. F License In our study we created a dataset from Wikipedia articles of artworks. The each image is available under the Creative Commons License (CC) or other licenses. Specific license information for each image can be found on the Wikipedia page or the image description page for that image. The images in this study are used under the terms of these licenses, and links to the images are provided in the datasets we publish so that users can download the images directly. The images themselves are not directly published. Therefore, our data does not infringe upon the licenses. Type Frequency Abstract 9632 Description 2747 History 1869 Background 666 Provenance 517 Reception 346 Description History 341 Analysis 337 Painting 218 Artist 189 Historical Information 187 Composition 168 Subject 138 Legacy 127 Exhibitions 115 Interpretation 110 Condition 97 In Popular Culture 94 Information 84 Design 83 Style 78 Influence 68 Creation 65 Description Style 63 Related Works 63 Acquisition 60 Context 59 Versions 51 Other Versions 51 Literature 50 Symbolism 50 The Painting 50 Attribution 50 Details 46 Notes References 45 Exhibition History 41 Location 40 Interpretations 40 Critical Reception 39 Historical Context 39 Iconography 38 Subject Matter 37 Influences 37 Exhibition 37 Commission 36 Overview 34 Analysis Description 34 Citations 33 Painting Materials 32 Controversy 32 Restoration 32 Table 7: Frequency count of data types in the dataset. 719 LVLM Setting Size BLUE ROUGE BertScore Entity Cov. Entity F1 Entity Cooccurrence Avg. Length 1 2 L exact partial n=0 n=1 n=2 n=∞ With Title (Language information + Visual information) mPLUG-Owl2 Unseen 7B 1.16 26.8 5.9 17.1 83.3 13.3 21.1 15.6 1.61 1.38 1.35 1.29 100 LLaVA-NeXT (Vicuna-7B) Unseen 7B 0.81 16.5 3.7 11.0 80.8 9.0 14.1 10.6 0.83 0.74 0.73 0.69 119 LLaVA-NeXT (Vicuna-13B) Unseen 13B 1.18 17.0 4.1 10.8 80.5 11.5 16.4 13.1 1.12 1.04 1.02 0.99 133 LLaVA-NeXT (Yi-34B) Unseen 34B 0.72 13.9 3.3 9.5 80.2 18.5 27.8 16.1 0.26 0.22 0.21 0.19 869 Qwen-VL-Chat Unseen 7B 1.64 28.2 6.8 17.4 83.5 17.8 26.3 20.8 1.90 1.66 1.63 1.57 155 Qwen-VL-Chat (FT) Unseen 7B 3.96 27.2 10.8 21.4 84.2 19.7 27.2 22.0 4.86 4.35 4.23 4.13 153 GPT-4-Vision Unseen 2.40 28.6 7.6 16.3 83.3 28.4 37.1 31.6 3.02 3.00 2.98 3.05 264 mPLUG-Owl2 Seen 7B 1.14 26.6 5.9 17.0 83.3 12.5 20.3 15.1 1.54 1.29 1.24 1.17 94 LLaVA-NeXT (Vicuna-7B) Seen 7B 0.78 16.5 3.5 10.6 80.7 7.9 13.0 9.4 0.74 0.66 0.63 0.59 114 LLaVA-NeXT (Vicuna-13B) Seen 13B 1.14 17.0 4.0 10.8 80.5 10.3 15.5 12.4 1.32 1.08 1.01 0.96 127 LLaVA-NeXT (Yi-34B) Seen 34B 0.73 13.7 3.2 9.4 80.1 17.4 26.7 15.4 0.26 0.24 0.22 0.21 872 Qwen-VL-Chat Seen 7B 1.69 27.9 6.7 17.3 83.4 16.2 24.5 19.8 1.87 1.57 1.54 1.47 153 Qwen-VL-Chat (FT) Seen 7B 4.13 27.6 11.4 21.8 84.5 19.8 27.4 23.5 5.47 4.43 4.30 4.19 133 GPT-4-Vision Seen 2.32 28.3 7.4 16.2 83.2 26.4 34.9 29.7 2.82 2.71 2.67 2.63 254 Without Title (Visual information) mPLUG-Owl2 Unseen 7B 0.21 23.3 3.58 15.0 82.3 4.0 10.5 4.3 0.26 0.29 0.26 0.24 91 LLaVA-NeXT (Vicuna-7B) Unseen 7B 0.13 16.0 2.21 10.6 80.1 1.8 6.3 1.8 0.07 0.10 0.10 0.11 125 LLaVA-NeXT (Vicuna-13B) Unseen 13B 0.17 16.6 2.35 11.0 80.8 2.1 7.1 2.2 0.07 0.08 0.08 0.07 164 LLaVA-NeXT (Yi-34B) Unseen 34B 0.15 11.5 1.88 8.1 78.7 3.5 10.5 2.8 0.03 0.03 0.02 0.02 903 Qwen-VL-Chat Unseen 7B 0.47 24.8 4.50 15.4 82.5 7.5 14.6 8.4 0.56 0.60 0.58 0.55 128 Qwen-VL-Chat (FT) Unseen 7B 2.07 24.5 7.79 18.6 83.4 12.9 19.6 14.7 2.25 2.03 2.00 1.96 153 GPT-4-Vision Unseen 0.10 23.1 4.43 13.2 81.9 11.6 19.0 12.3 1.18 1.35 1.37 1.34 223 mPLUG-Owl2 Seen 7B 0.14 22.6 3.37 14.6 82.2 2.9 9.2 3.2 0.19 0.14 0.13 0.12 86 LLaVA-NeXT (Vicuna-7B) Seen 7B 0.11 15.4 1.95 10.2 80.0 1.0 5.6 1.2 0.05 0.04 0.06 0.06 123 LLaVA-NeXT (Vicuna-13B) Seen 13B 0.11 16.0 2.10 10.7 80.7 1.2 6.0 1.4 0.03 0.03 0.03 0.03 154 LLaVA-NeXT (Yi-34B) Seen 34B 0.10 11.1 1.71 7.9 78.6 2.1 9.2 1.9 0.01 0.01 0.01 0.01 909 Qwen-VL-Chat Seen 7B 0.40 24.4 4.32 15.2 82.5 5.6 12.7 6.9 0.40 0.41 0.37 0.35 124 Qwen-VL-Chat (FT) Seen 7B 2.09 24.9 8.00 18.9 83.8 12.4 19.4 15.0 2.19 1.85 1.82 1.78 127 GPT-4-Vision Seen 0.74 22.4 4.14 12.8 81.8 9.3 16.7 10.5 0.91 0.91 0.86 0.84 212 Table 8: Results of LVLMs including ’seen’ settings. Notations are the same as Table 3. 720 LVLM Setting Size BLUE ROUGE BertScore Entity Cov. Entity F1 Entity Cooccurrence Avg. Length 1 2 L exact partial n=0 n=1 n=2 n=∞ With Title (Language information + Visual information) BLIP2 (OPT) Unseen 6.7B 0.00 0.1 0.0 0.1 76.4 0.0 0.0 0.0 0.00 0.00 0.00 0.00 0.01 BLIP2 (FLAN-T5-XL) Unseen 3B 0.00 9.7 2.8 8.3 80.6 5.2 8.5 1.4 0.05 0.03 0.03 0.03 20 BLIP2 (FLAN-T5-XXL) Unseen 11B 0.01 2.8 0.5 2.6 76.5 0.7 2.4 0.5 0.01 0.00 0.00 0.00 21 mPLUG-Owl Unseen 7B 0.17 15.0 2.4 10.1 81.8 4.3 8.6 4.7 0.35 0.38 0.40 0.37 12 LLaVA-1.5 Unseen 13B 1.61 20.8 5.2 13.2 81.5 13.4 19.4 15.8 1.56 1.34 1.33 1.26 139 LLaVA-NeXT (Mistral) Unseen 7B 1.32 24.1 5.7 15.9 82.4 12.3 19.6 14.9 1.44 1.18 1.15 1.06 140 InstructBLIP (FLAN-T5-XL) Unseen 3B 0.70 16.9 5.2 13.0 83.2 8.5 13.8 6.6 0.80 0.62 0.59 0.56 28 InstructBLIP (FLAN-T5-XXL) Unseen 11B 1.00 16.4 4.6 12.0 81.7 8.6 13.8 9.3 1.00 0.75 0.73 0.71 54 InstructBLIP (Vicuna-7B) Unseen 7B 1.44 23.5 6.2 15.7 83.3 12.6 19.2 14.2 1.79 1.50 1.44 1.38 58 InstructBLIP (Vicuna-13B) Unseen 13B 1.11 25.9 6.2 17.2 83.6 11.8 18.8 13.7 1.42 1.19 1.16 1.09 50 Yi-VL-6B Unseen 6B 1.07 26.2 5.7 16.6 82.9 12.9 20.8 15.1 1.37 1.24 1.27 1.21 147 Qwen-VL-Chat Unseen 7B 1.64 28.2 6.8 17.4 83.5 17.8 26.3 20.8 1.90 1.66 1.63 1.57 155 Qwen-VL-Chat (FT) Unseen 7B 3.96 27.2 10.8 21.4 84.2 19.7 27.2 22.0 4.86 4.35 4.23 4.13 153 GPT-4-Vision Unseen 2.40 28.6 7.6 16.3 83.3 28.4 37.1 31.6 3.02 3.00 2.98 3.05 264 BLIP2 (OPT) Seen 6.7B 0.00 2.0 0.0 1.2 77.5 0.0 1.8 0.0 0.00 0.00 0.00 0.00 0.01 BLIP2 (FLAN-T5-XL) Seen 3B 0.01 9.9 3.0 8.5 80.7 5.2 8.3 1.7 0.07 0.03 0.03 0.03 17 BLIP2 (FLAN-T5-XXL) Seen 11B 0.01 2.9 0.5 2.7 76.5 0.9 2.6 0.6 0.04 0.03 0.03 0.03 21 mPLUG-Owl Seen 7B 0.14 15.4 2.4 10.3 81.9 4.5 9.3 4.8 0.37 0.29 0.28 0.26 13 LLaVA-1.5 Seen 13B 1.69 20.7 5.3 13.1 81.5 12.5 18.4 15.0 1.85 1.37 1.34 1.30 128 LLaVA-NeXT (Mistral) Seen 7B 1.41 24.1 5.6 16.0 82.3 11.6 19.1 14.4 1.49 1.16 1.06 1.01 145 InstructBLIP (FLAN-T5-XL) Seen 3B 0.78 16.9 5.2 13.0 83.2 8.5 14.0 7.1 0.92 0.69 0.66 0.63 29 InstructBLIP (FLAN-T5-XXL) Seen 11B 0.10 16.6 4.7 12.2 81.8 8.7 14.1 9.3 1.11 0.90 0.87 0.84 54 InstructBLIP (Vicuna-7B) Seen 7B 1.53 23.9 6.3 15.8 83.3 12.4 19.5 14.3 1.77 1.47 1.42 1.37 62 InstructBLIP (Vicuna-13B) Seen 13B 1.11 25.5 6.1 16.9 83.5 10.2 17.3 12.5 1.26 1.08 1.01 0.97 51 Yi-VL-6B Seen 6B 1.00 25.8 5.5 16.3 82.7 11.5 19.9 13.6 1.00 0.80 0.78 0.75 149 Qwen-VL-Chat Seen 7B 1.69 27.9 6.7 17.3 83.4 16.2 24.5 19.8 1.87 1.57 1.54 1.47 153 Qwen-VL-Chat (FT) Seen 7B 4.13 27.6 11.4 21.8 84.5 19.8 27.4 23.5 5.47 4.43 4.30 4.19 133 GPT-4-Vision Seen 2.32 28.3 7.4 16.2 83.2 26.4 34.9 29.7 2.82 2.71 2.67 2.63 254 Without Title (Visual information) BLIP2 (OPT) Unseen 6.7B 0.00 4.1 0.00 4.1 79.8 0.0 0.0 0.0 0.00 0.00 0.00 0.00 0.01 BLIP2 (FLAN-T5-XL) Unseen 3B 0.01 8.9 1.47 7.5 81.2 2.1 5.0 1.1 0.01 0.00 0.00 0.00 15 BLIP2 (FLAN-T5-XXL) Unseen 11B 0.00 2.5 0.16 2.4 75.8 0.6 1.7 0.2 0.00 0.00 0.00 0.00 18 mPLUG-Owl Unseen 7B 0.14 18.1 2.59 11.9 82.1 2.2 7.2 2.4 0.13 0.10 0.08 0.08 21 LLaVA-1.5 Unseen 13B 0.21 17.8 2.70 11.7 81.4 2.7 7.9 2.6 0.11 0.15 0.15 0.15 158 LLaVA-NeXT (Mistral) Unseen 7B 0.16 21.1 2.77 14.1 81.3 2.3 8.0 2.3 0.08 0.11 0.12 0.12 132 InstructBLIP (FLAN-T5-XL) Unseen 3B 0.08 13.0 2.17 10.0 82.4 2.7 6.6 2.3 0.13 0.07 0.08 0.07 28 InstructBLIP (FLAN-T5-XXL) Unseen 11B 0.16 12.5 2.11 9.3 81.1 3.0 6.9 2.7 0.16 0.13 0.11 0.11 41 InstructBLIP (Vicuna-7B) Unseen 7B 0.49 22.9 4.47 15.2 82.9 6.4 12.9 7.1 0.55 0.58 0.56 0.49 83 InstructBLIP (Vicuna-13B) Unseen 13B 0.39 23.5 4.31 15.8 82.8 4.8 11.5 5.2 0.37 0.33 0.31 0.28 85 Yi-VL-6B Unseen 6B 0.37 23.4 4.08 15.1 82.0 5.4 12.2 5.7 0.35 0.36 0.35 0.34 158 Qwen-VL-Chat Unseen 7B 0.47 24.8 4.50 15.4 82.5 7.5 14.6 8.4 0.56 0.60 0.58 0.55 128 Qwen-VL-Chat (FT) Unseen 7B 2.07 24.5 7.79 18.6 83.4 12.9 19.6 14.7 2.25 2.03 2.00 1.96 153 GPT-4-Vision Unseen 0.10 23.1 4.43 13.2 81.9 11.6 19.0 12.3 1.18 1.35 1.37 1.34 223 BLIP2 (OPT) Seen 6.7B 0.00 2.3 0.00 2.3 78.4 0.0 2.1 0.0 0.00 0.00 0.00 0.00 0.03 BLIP2 (FLAN-T5-XL) Seen 3B 0.00 9.0 1.50 7.6 81.4 1.7 4.5 1.0 0.01 0.01 0.01 0.01 13 BLIP2 (FLAN-T5-XXL) Seen 11B 0.00 2.6 0.16 2.5 75.7 0.4 1.6 0.2 0.00 0.00 0.00 0.00 18 mPLUG-Owl Seen 7B 0.08 18.4 2.64 12.1 82.1 1.9 6.9 2.5 0.08 0.05 0.04 0.04 23 LLaVA-1.5 Seen 13B 0.13 17.7 2.55 11.6 81.3 1.3 6.4 1.4 0.07 0.05 0.05 0.04 154 LLaVA-NeXT (Mistral) Seen 7B 0.08 20.7 2.50 13.9 81.3 1.3 7.0 1.4 0.04 0.04 0.04 0.03 125 InstructBLIP (FLAN-T5-XL) Seen 3B 0.05 12.5 1.99 9.6 82.4 1.9 5.9 1.9 0.04 0.06 0.06 0.06 26 InstructBLIP (FLAN-T5-XXL) Seen 11B 0.10 12.3 1.95 9.1 81.1 2.3 6.3 2.2 0.08 0.08 0.07 0.07 37 InstructBLIP (Vicuna-7B) Seen 7B 0.43 22.7 4.31 15.1 83.0 4.9 11.4 5.8 0.36 0.30 0.29 0.27 82 InstructBLIP (Vicuna-13B) Seen 13B 0.37 23.3 4.27 15.7 82.7 3.3 10.0 4.0 0.17 0.16 0.16 0.15 85 Yi-VL-6B Seen 6B 0.33 23.0 3.86 14.8 81.9 4.1 11.2 4.7 0.19 0.16 0.15 0.14 162 Qwen-VL-Chat Seen 7B 0.40 24.4 4.32 15.2 82.5 5.6 12.7 6.9 0.40 0.41 0.37 0.35 124 Qwen-VL-Chat (FT) Seen 7B 2.09 24.9 8.00 18.9 83.8 12.4 19.4 15.0 2.19 1.85 1.82 1.78 127 GPT-4-Vision Seen 0.74 22.4 4.14 12.8 81.8 9.3 16.7 10.5 0.91 0.91 0.86 0.84 212 Table 9: Comprehensive Results of Secondary (LVLMs). This includes models not highlighted in the main findings, with the gray lines representing the three models that achieved the best performance in the main evaluation. Bold type signifies the highest scores for each metric within their respective groups. 721 Figure 3: Average token lengths for 18 evaluated LVLMs on an unseen set, where yellow represents the ’With Title’ setting, blue indicates the ’Without Title’ setting, and red signifies the average token length for the base language model of the LVLM with titles. The length of the unseen reference sentence is 174 tokens. Figure 4: Average token lengths for Qwen’s Few-shot and Fine-tuning settings on an unseen set, where yellow represents the ’With Title’ setting, blue indicates the ’Without Title’ setting, and red signifies the average token length for the base language model of the LVLM with titles. The length of the unseen reference sentence is 174 tokens. 722 LVLM Setting Size BLUE ROUGE BertScore Entity Cov. Entity F1 Entity Cooccurrence Avg. Length 1 2 L exact partial n=0 n=1 n=2 n=∞ With Title (Language information + Visual information) FLAN-T5-XL Unseen 3B 0.66 15.4 6.23 13.1 83.6 10.2 15.4 10.6 1.36 0.88 0.84 0.83 20 FLAN-T5-XXL Unseen 11B 0.00 2.0 0.09 1.8 76.2 3.3 2.2 0.3 0.00 0.00 0.00 0.00 63 OPT Unseen 6.7B 0.34 8.3 1.60 7.3 76.8 12.0 18.9 8.4 0.15 0.12 0.12 0.11 872 LlaMA Unseen 7B 0.48 9.4 1.99 8.1 77.7 16.4 23.7 11.3 0.15 0.14 0.13 0.11 876 LlaMA2 Unseen 7B 1.81 24.0 5.92 14.9 82.4 18.5 27.3 20.8 1.04 0.88 0.82 0.81 366 Mistral Unseen 7B 1.82 25.1 6.41 15.2 82.7 21.8 31.2 23.4 1.33 1.30 1.27 1.25 345 Vicuna-7B Unseen 7B 1.14 20.9 4.87 13.1 82.7 12.3 18.6 14.1 1.43 1.33 1.32 1.23 129 Vicuna-13B Unseen 13B 2.35 28.4 7.34 17.7 83.4 19.4 28.1 23.0 2.16 1.99 1.89 1.77 210 Qwen-Chat Unseen 7B 0.60 12.0 2.50 7.4 79.5 7.6 11.8 8.5 0.52 0.43 0.41 0.40 106 Yi-6B-Chat Unseen 6B 0.93 14.0 3.55 10.9 79.3 14.2 21.4 11.9 0.55 0.50 0.48 0.46 717 Yi-34B-Chat Unseen 34B 1.00 13.1 3.50 10.4 79.1 17.9 25.4 12.9 0.93 0.86 0.83 0.81 745 GPT-4 Unseen 2.20 26.2 7.00 14.9 82.5 31.7 40.2 32.3 2.54 2.50 2.53 2.59 374 FLAN-T5-XL Seen 3B 0.67 15.1 6.30 12.9 83.4 9.0 14.5 9.5 1.34 0.95 0.85 0.81 22 FLAN-T5-XXL Seen 11B 0.01 8.9 1.48 7.5 81.2 2.1 5.0 1.1 0.01 0.00 0.00 0.00 66 OPT Seen 6.7B 0.35 8.3 1.63 7.2 76.8 11.4 18.4 9.0 0.08 0.06 0.05 0.05 877 LlaMA Seen 7B 0.51 9.3 2.01 8.0 77.8 15.7 23.1 11.0 0.17 0.13 0.12 0.10 877 LlaMA2 Seen 7B 1.87 24.3 6.03 15.1 82.5 19.0 28.1 21.4 1.10 0.92 0.85 0.84 357 Mistral Seen 7B 1.91 25.1 6.40 15.2 82.6 20.3 29.5 22.5 1.33 1.11 1.03 0.98 334 Vicuna-7B Seen 7B 0.98 19.6 4.42 12.3 82.6 10.0 15.9 11.8 1.03 0.92 0.86 0.83 111 Vicuna-13B Seen 13B 1.91 25.1 6.37 15.2 82.6 20.3 29.5 22.5 1.33 1.11 1.03 0.98 334 Qwen-Chat Seen 7B 0.62 11.9 2.47 7.3 79.4 7.4 11.7 8.3 0.64 0.52 0.51 0.48 104 Yi-6B-Chat Seen 6B 0.99 14.6 3.74 11.2 79.6 13.9 21.3 12.6 0.64 0.60 0.57 0.55 698 Yi-34B-Chat Seen 34B 1.00 12.9 3.41 10.3 79.0 17.6 24.8 12.7 0.92 0.85 0.81 0.79 750 GPT-4 Seen 2.20 26.0 6.90 14.8 82.5 29.7 38.3 31.0 2.50 2.30 2.32 2.31 369 Table 10: Comprehensive Performance of Base Language Models with Title Integration. This table showcases the performance of primary models, both featured and not featured in the main analysis, across ’seen’ and ’unseen’ settings, evaluated using additional metrics such as BLEU, BERTscore, and ROUGE. mPlug_owl2 LlaVA-NeXT (Vicuna13B) LlaVA-NeXT (Vicuna7B) LLaVA-NeXT (Yi34B) Qwen-VL-Chat Qwen-VL-Chat (FT) GPT-4-Vision Exact match 1.6% 0.0% 0.0% 0.0% 4.0% 5.7% 8.97% Partial match 54.2% 39.9% 27.5% 66.3% 53.6% 66.7% 64.0% Table 11: LVLM Primary Group Analysis of Title Generation Accuracy from Image Information. Setting BLIP2 (OPT) BLIP2 (FLAN-T5-XL) BLIP2 (FLAN-T5-XXL) mPLUG_Owl LLaVA-1.5 InstructBLIP (FLAN-T5-XL) Exact match 0.0% 1.04% 1.25% 1.97% 0.0% 0.93% Partial match 0.10% 49.6% 49.1% 37.0% 40.3% 44.0% Table 12: LVLM Complementary Group Analysis of Title Generation Accuracy Using Only Image Information (Part 1). Setting InstructBLIP (FLAN-T5-XXL) InstructBLIP (Vicuna-7B) Instruct Blip (Vicuna-13B) LLaVA-NeXT (mistral) Yi-VL-6B Exact match 1.04% 1.14% 1.14% 0.10% 1.36% Partial match 50.1% 50.5% 58.1% 47.7% 50.6% Table 13: LVLM Complementary Group Analysis of Title Generation Accuracy Using Only Image Information (Part 2). 723 Title Rank mPLUG-Owl mPLUG-Owl2 Qwen-VL-Chat Qwen-VL-Chat(FT) GPT-4-Vision Mona Lisa 1 " " " " " The Great Wave off Kanagawa 2 " " " " Vitruvian Man 3 " " " " " Winged Victory of Samothrace 4 " " " Girl with a Pearl Earring 5 " " " " " The Wedding at Cana 6 " " " " The Anatomy Lesson of Dr. Nicolaes Tulp 7 " " " Apollo Belvedere 9 " " " Homeless Jesus 11 " " " Raphael Rooms 12 " Almond Blossoms 13 " " " The Death of General Wolfe 14 " " " " The Persistence of Memory 15 " " " " " Doni Tondo 19 " The Turkish Bath 20 " " Look Mickey 26 " " " " " The Seven Deadly Sins and the Four Last Things 27 " " " " The Conspiracy of Claudius Civilis 28 " La Belle Ferronnière 31 " The Gross Clinic 32 " " The Wedding Dance 33 " " " Sacred and Profane Love 35 " The Sea of Ice 37 " " The Geographer 41 " " Equestrian Portrait of Charles V 45 " The Monk by the Sea 49 " My Bed 51 " " " I Saw the Figure 5 in Gold 55 " Peace Monument 57 " Littlefield Fountain 58 " " Music in the Tuileries 59 " The Cornfield 60 " " Lovejoy Columns 62 " " " The Allegory of Good and Bad Government 64 " Sibelius Monument 72 " " " Headington Shark 73 " The Great Masturbator 75 " Self-Portrait with Thorn Necklace and Hummingbird 81 " Snow Storm: Steam-Boat off a Harbour’s Mouth 83 " Bathers at Asnières 84 " " The Bacchanal of the Andrians 91 " " The Painter’s Studio 95 " Carnation, Lily, Lily, Rose 97 " " Lady Writing a Letter with her Maid 99 " " Two Sisters (On the Terrace) 104 " " " Lion of Belfort 112 " Metamorphosis of Narcissus 114 " Lady Seated at a Virginal 115 " Puerta de Alcalá 116 " " The Three Crosses 118 " Statue of Paddington Bear 119 " Our English Coasts 139 " Hahn/Cock 140 " The Wounded Deer 144 " " The Disrobing of Christ 148 " " Lion of Venice 149 " " " Cross in the Mountains 153 " Man Writing a Letter 164 " " Dying Slave 165 " Nymphs and Satyr 168 " Tomb of Pope Alexander VII 172 " Greece on the Ruins of Missolonghi 178 " The Basket of Apples 186 " " James Scott Memorial Fountain 189 " The Death of General Mercer at the Battle of Princeton, January 3, 1777 193 " Madonna of the Rabbit 200 " " Pyramid of Skulls 209 " Ascending and Descending 220 " The Madonna of Port Lligat 221 " " Le Pont de l’Europe 231 " Continued on next page 724 Table 14 – continued from previous page Title Rank mPLUG-Owl mPLUG-Owl2 Qwen-VL-Chat Qwen-VL-Chat(FT) GPT-4-Vision Bratatat! 240 " Marie Antoinette with a Rose 247 " " " The Beguiling of Merlin 256 " " Blob Tree 258 " " " " " Morning in a Pine Forest 266 " " Swann Memorial Fountain 271 " Equestrian Portrait of Philip IV 272 " Golden Guitar 274 " " " " The Blind Girl 275 " The Lament for Icarus 278 " Love’s Messenger 289 " Arrangement in Grey and Black, No. 2: Portrait of Thomas Carlyle 304 " The Return of the Herd 320 " Statue of Henry W. Grady 327 " Young Ladies of the Village 333 " Why Born Enslaved! 355 " Apollo Pavilion 358 " Looking Into My Dreams, Awilda 371 " Australian Farmer 378 " " " " " Bust of Giuseppe Mazzini 379 " Wind from the Sea 399 " " Art is a Business 415 " " Statue of George M. Cohan 417 " " The Union of Earth and Water 434 " Frederick the Great Playing the Flute at Sanssouci 440 " Procession in St. Mark’s Square 441 " Larry La Trobe 443 " From this moment despair ends and tactics begin 460 " " Winter Landscape with Skaters 479 " Bust of William H. English 489 " " Statue of Roscoe Conkling 507 " Still Life and Street 531 " Statue of William Blackstone 536 " Statue of Chick Hearn 558 " Happy Rock 587 " " " " " The Revells of Christendome 608 " Bust of Cardinal Richelieu 629 " Stag Hunt 634 " The Drover’s Wife 679 " My Egypt 684 " The Viaduct at L’Estaque 731 " The Repast of the Lion 733 " Puget Sound on the Pacific Coast 761 " Diana and Cupid 768 " " Portrait of Cardinal Richelieu 778 " Statue of Toribio Losoya 873 " Statue of Valentín Gómez Farías 877 " Table 14: List of titles that were actually output by the model with exact settings. 725 Type Template Template 1 Section Focus on {title} and explore the {section}. Subsection In the context of {title}, explore the {subsection} of the {section}. Sub subsection Focusing on the {section} of {title}, explore the {subsubsection} about the {subsection}. Template 2 Section Focus on {title} and explain the {section}. Subsection In the context of {title}, explain the {subsection} of the {section}. Sub subsection Focusing on the {section} of {title}, explain the {subsubsection} about the {subsection}. Template 3 Section Explore the {section} of this artwork, {title}. Subsection Explore the {subsection} about the {section} of this artwork, {title}. Sub subsection Explore the {subsubsection} about the {subsection} of the {section} in this artwork, {title}. Template 4 Section Focus on {title} and discuss the {section}. Subsection In the context of {title}, discuss the {subsection} of the {section}. Sub subsection Focusing on the {section} of {title}, discuss the {subsubsection} about the {subsection}. Template 5 Section How does {title} elucidate its {section}? Subsection In {title}, how is the {subsection} of the {section} elucidated? Sub subsection Regarding {title}, how does the {section}’s {subsection} incorporate the {subsubsection}? Template 6 Section Focus on {title} and analyze the {section}. Subsection In the context of {title}, analyze the {subsection} of the {section}. Sub subsection Focusing on the {section} of {title}, analyze the {subsubsection} about the {subsection}. Template 7 Section In {title}, how is the {section} discussed? Subsection Describe the characteristics of the {subsection} in {title}’s {section}. Sub subsection When looking at the {section} of {title}, how do you discuss its {subsection}’s {subsubsection}? Table 15: Prompt Templates. 1 { 2 "id": "0001_T", 3 "title": "Mona Lisa", 4 "conversations": [ 5 { 6 "from": "user", 7 "value": "<img >/ images/Mona Lisa.jpg </img >\ nFocus on Mona Lisa and explore the history." 8 }, 9 { 10 "from": "assistant", 11 "value": "Of Leonardo da V i n c i s works, the Mona Lisa is the only portrait whose authenticity ...." 12 } 13 ] 14 } Figure 5: Train set format with title. 726 1 { 2 "id": "0001_NT", 3 "conversations": [ 4 { 5 "from": "user", 6 "value": "<img >/ images/Mona Lisa.jpg </img >\ nFocus on this artwork and explore the history." 7 }, 8 { 9 "from": "assistant", 10 "value": "Of Leonardo da V i n c i s works, the Mona Lisa is the only portrait whose authenticity ...." 11 } 12 ] 13 } Figure 6: Train set format without title. Avg. entities per record: 8.018913480885312 Max entities in a record: 98 Min entities in a record": 1 Most frequent entities": 4 Freq. of most frequent entities: 498 Median entities: 6.0 Std. dev. of entities: 6.80 % of records with ≤10 entities: 76.14% Figure 7: Entity distribution within each dataset under the ’with title’ setting. 727 Avg. entities per record: 7. 75 Max entities in a record: 57 Min entities in a record: 1 Most frequent entities: 4 Freq. of most frequent entities: 580 Median entities: 6 % of records with ≤10 entities: 76.10% Std. dev. of entities: 6.79 Figure 8: Entity distribution within each dataset under the ’without title’ setting. Data Type Data Name mPlug-owl Qwen-VL-Chat LLava-v-1.5 InstructBLIP Text ShareGPT (Chen et al., 2023) " " SlimOrca (Mukherjee et al., 2023) " In-house Data " Dialogue LLaVA (Liu et al., 2023b) " " Caption COCO (Lin et al., 2014) " " " TextCaps (Sidorov et al., 2020) " " " SBU (Yago et al., 2016) " Coyo (Byeon et al., 2022) " DataComp (Samir Yitzhak Gadre, 2023) " CC12M & 3M (Changpinyo et al., 2021) " LAION-en (Schuhmann et al., 2022) & zh " VQA VQAv2 " " " " GQA (Hudson and Manning, 2019) " " " " OKVQA (Marino et al., 2019) " " " OCRVQA (Mishra et al., 2019) " " " " A-OKVQA (Schwenk et al., 2022) " " " DVQA (Kafle et al., 2018) " TextVQA (Singh et al., 2019) " " " ChartQA (Masry et al., 2022) " A12D " Grounding² GRIT (Peng et al., 2023) " Ref Grounding GRIT " VisualGenome (Krishna et al., 2017) " " RefCOCO (Yu et al., 2016) " " RefCOCO+ (Yu et al., 2016) " " RefCOCOg " " OCR SynthDoG-en (Kim et al., 2022) & zh " Common Crawl pdf & HTML " Image Captioning Web CapFilt (Li et al., 2022b) " NoCaps " Flickr30K (Hambardzumyan et al., 2023) " Visual Spatial Reasoning IconQA (Lu et al., 2021) " Visual Dialog Visual Dialog " Video Question Answering MSVD-QA (Xu et al.) " MSRVTT-QA " iVQA (Liu et al., 2018) " Image Classification VizWiz (Gurari et al., 2018) " Knowledge-Grounded Image QA ScienceQA (Lu et al., 2022) " 728 Data Type Data Name mPLUG-Owl2 Qwen-VL-Chat LLava-v-1.5 InstructBLIP Table 16: Details of training datasets. 729
2024_short
65
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 730–742 August 11-16, 2024 ©2024 Association for Computational Linguistics On the Hallucination in Simultaneous Machine Translation Meizhi Zhong1, Kehai Chen1 *, Zhengshan Xue2, Lemao Liu, Mingming Yang, Min Zhang1 1Institute of Computing and Intelligence, Harbin Institute of Technology, Shenzhen, China 2College of Intelligence and Computing, Tianjin University, Tianjin, China [email protected], [email protected], [email protected], [email protected], [email protected], [email protected] Abstract It is widely known that hallucination is a critical issue in Simultaneous Machine Translation (SiMT) due to the absence of source-side information. While many efforts have been made to enhance performance for SiMT, few of them attempt to understand and analyze hallucination in SiMT. Therefore, we conduct a comprehensive analysis of hallucination in SiMT from two perspectives: understanding the distribution of hallucination words and the target-side context usage of them. Intensive experiments demonstrate some valuable findings and particularly show that it is possible to alleviate hallucination by decreasing the over usage of target-side information for SiMT. 1 1 Introduction In neural machine translation, hallucination occurrences are not common due to its small quantity (Lee et al., 2018; Yan et al., 2022; Raunak et al., 2021a; Guerreiro et al., 2023). But in simultaneous machine translation (SiMT), it has been found that hallucination is extremely severe, especially as latency increases indicating that hallucination is a critical issue in SiMT. Currently, most prior works concentrate on how to enhance model performance for SiMT (Ma et al., 2019, 2020; Zheng et al., 2020; Zhang and Feng, 2022a,b; Guo et al., 2022; Zhang and Feng, 2022c), however, only a few of them measure the hallucination phenomenon (Chen et al., 2021; Deng et al., 2022; Liu et al., 2023). To our best knowledge, there are no researches which systematically analyze hallucination in SiMT. Therefore, we conduct a comprehensive analysis of hallucinations in SiMT. Initially, we seek to empirically analyze these hallucination words from *Corresponding authors 1Code is available at https://github.com/zhongmz/ SiMT-Hallucination the perspective of their distribution. We collect all hallucination words together and understand their frequency distribution, and we find that these words are randomly distributed with a high entropy: their entropy is almost as high as that for all target words. In addition, to delve into the contextual aspects of hallucination (Xiao and Wang, 2021), we consider their predictive distribution. We discover that their uncertainty is significantly higher than that of nonhallucination words. Furthermore, we find that the SiMT model does not fit the training data well for hallucination words due to the essence of SiMT (i.e., the limited source context), which explains why making correct predictions for hallucination words is difficult. Intuitively, since a SiMT model is defined on top of a limited source context, this may indirectly cause the model to focus more on the target context and lead to the emergence of hallucination words. To verify this intuition, we propose to analyze the usage of the target context for hallucination words for SiMT. Specifically, following Li et al. (2019); Miao et al. (2021); Fernandes et al. (2021); Voita et al. (2021); Yu et al. (2023); Guerreiro et al. (2023), we firstly employ a metric to measure how much targetcontext information is used by SiMT with respect to the source-context information. With the help of this metric, we find that hallucination is indeed significantly more severe when the SiMT model focuses more on target-side information. Drawing upon this, we reduce the over-targetreliance effects by introducing noise into the target-side context. Experimental results show that the proposed method achieves some modest improvements in terms of BLEU and hallucination effect when the latency is relatively small. This discovery gives us some inspiration: more flexible control over the use of target-side information may be a promising approach to alleviate the issue of hallucination. 730 Our key contributions are as follows: • We study hallucination words from frequency and predictive distributions and observe that the frequency distribution of hallucination words is with high entropy and hallucination words are difficult to be memorized by the predictive distribution during training. • We analyze hallucination words according to the usage of (limited) source context. We find that hallucination words make use of more target-context information than sourcecontext information, and it is possible to alleviate hallucination by decreasing the usage of the target context. 2 Experimental Settings Our analysis is based on the most widely used SiMT models and datasets. This section introduces these models and datasets as follows. SiMT Models and Datasets. SiMT models translate by reading partial source sentences. Ma et al. (2019) proposed widely used Wait-k models for SiMT. It involves reading k words initially and then iteratively generating each word until the end of the sentence. We conducted experiments on it. We use two standard benchmarks from IWSLT14 De↔En (Cettolo et al., 2013) and MuSTC Release V2.0 Zh→En (Cattoni et al., 2021) to conduct experiments. Appendix A provides detailed settings. Due to space limitation, we only present the experimental results for the De→En benchmark. The results for Zh→En and En→De are similar, as shown in Appendix D and C. Hallucination Metric. In SiMT, Chen et al. (2021) pioneers the definition of Hallucination Metrics based on word alignment a. A target word ˆyt, is a hallucination if there is no alignment to any source word xj. This is formally represented as: H(t, a) = 1 [{(i, t) ∈a} = ∅] . (1) Conversely, a target word ˆyt, is not a hallucination if there is alignment to any source word xj. The Hallucination Rate (HR) is defined as following: HR(x, ˆy, a) = 1 |ˆy| |ˆy| X t=1 H(t, a). (2) Deng et al. (2022) propose GHall to measure hallucination in Wait-k. Formally, a word is a k 1 3 5 7 9 ∞ HR % 31.28 22.57 18.58 16.41 15.21 11.50 Table 1: HR on valid set of wait-k, where k = ∞means Full-sentence MT. Word 0 60 120 180 240 Word Frequency Hallucination Overall Figure 1: Word frequency of Hallucination and Overall on valid hypotheses set of wait-1 (x-axis is ordered randomly, with additional k results in Appendix B.1). hallucination if it does not align with the current source: Hwait−k(t, a) = 1[{(s, t) ∈a | s ≥t + k} = ∅]. (3) The definition of HR remains consistent with Chen et al. (2021). We utilize GHall metrics to conduct experiments. We use Awesome-align (Dou and Neubig, 2021) as the word aligner a. 3 Understanding Hallucination Words from Distribution Hallucination is severe in SiMT. We measure HR of Wait-k models, illustrated in Table 11. We obtain that Wait-k models suffer more from hallucinations than Full-sentence MT. Furthermore, with k decreasing, hallucinations increase clearly. This shows that hallucination is an important issue and it is worth the in-depth study. 3.1 Understanding Hallucination from Frequency Distribution Hallucination words are with high distribution entropy. To investigate hallucination words in Wait-k, we compare frequency distributions of hallucination and overall words. Figure 1 and Table 2 illustrate that their distributions are remarkably similar and both exhibit high entropy. It suggests that understanding hallucination from high distribution entropy is challenging. 731 k 1 3 5 7 9 Hallucination 7.82 8.22 8.19 8.10 8.07 Overall 8.70 8.97 9.00 9.01 9.02 Table 2: Word frequency distribution entropy of Hallucination and Overall on the valid set of wait-k. Wait-k Valid set Training subset Uncertainty Confidence Uncertainty Confidence H NH H NH H NH H NH k=1 3.53 2.35 0.40 0.61 3.47 2.13 0.41 0.65 k=3 3.00 2.04 0.48 0.66 2.98 1.90 0.49 0.69 k=5 2.81 1.97 0.52 0.67 2.76 1.90 0.52 0.69 k=7 2.55 1.89 0.55 0.69 2.48 1.81 0.57 0.70 k=9 2.48 1.92 0.57 0.68 2.42 1.96 0.58 0.69 Table 3: The Uncertainty and Confidence of Hallucination (H) and Non-Hallucination (NH) on the valid set and training subset of wait-k models. 3.2 Understanding Hallucination from Predictive Distribution We investigate Confidence and Uncertainty of the predictive distribution. We define the Confidence of a word as its probability and the Uncertainty of a word as the entropy of its predictive distribution. Hallucination words are difficult to translate. To explore the difficulty of translating hallucination and non-hallucination words, we calculate the average confidence and uncertainty on the valid set. The results in the left of Table 3 reveal that during decoding hallucination words, the models exhibit higher uncertainty. Additionally, the confidence is lower. It suggests that models encounter challenges in accurately translating hallucination words. Hallucination words are difficult to memorize. To investigate the reasons behind the difficulty in translating hallucination words, we measure confidence and uncertainty for hallucination and non-hallucination words on the training data. We sample examples from the training data as a training subset with the same size as the valid set. The results in the right of Table 3 illustrate that even in previously encountered contexts, models remain uncertain when dealing with hallucination words. These findings suggest that models do not fit well with hallucination words during training, leading to a limited ability to generalize to similar contexts on the valid set. Consequently, the difficulty in translating hallucination words can be attributed to challenges in memorization during the training. Additionally, we observe that as k increases, the uncertainty decreases significantly. It can be attributed to the model encountering source-side context more, enabling a improved memorization. 4 Analysis of Target Context Usage for Hallucination Words To verify the hypothesis that using more on target-side context leads to the emergence of hallucination, we propose to analyze the usage of target-side context. Measure on Target-side Context Usage. To explicitly measure Target Context Usage, we adapt an interpretive approach that evaluates the relevance of both target and source words. It involves deactivating connections between the corresponding words and the network. We compute the relevance between the words in the source or target and the next word to be generated and determine the maximum absolute relevance as source or target relevance. It allows us to calculate the Target-Side Relevance to Source-Side Relevance ’s Ratio (TSSR). To begin with, we assess the relevance of targetside words and source-side words to the next word to be generated. This evaluation is conducted by selectively deactivating the connection between xj or yj and the encoder or decoder network in a deterministic manner, following the approach described in Li et al. (2019). More formally, the relevance R(yi, xj) or R(yi, yj) in Wait-k is directly determined through the dropout effect on xj or yj, as outlined below: R (yi, xj) = P (yi | y<i, x≤i+k−1) −P yi | y<i, x≤i+k−1,(j,0)  . (4) R (yi, yj) = P (yi | y<i, x≤i+k−1) −P yi | y<i,(j,0), x≤i+k−1  . (5) The relevance of the source-side and target-side is determined by selecting the maximum absolute value of the word’s relevance on the current sourceside and the current target-side. Formally, this can be expressed as: R (yi)source−side = max{|R (yi, xj)|}. (6) R (yi)target−side = max{|R (yi, yj)|}. (7) 732 0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 1.6 1.8 INF TSSR Interval 0.00 0.15 0.30 0.45 Hallucination Rate Wait-1 Wait-3 Wait-5 Wait-7 Wait-9 Figure 2: HR on the valid set in different TSSR intervals of wait-k models. Finally, the ratio of target-side relevance to source-side relevance(TSSR) is calculated. A larger TSSR indicates a higher usage of target-side context in generating the next word yi. TSSR (yi) = R (yi)target−side R (yi)source−side . (8) Our final algorithm, referred to as Algorithm 1, is presented. Algorithm 1 Compute TSSR Input: model, hypotheses sentence, source sentence, k Output: TSSR for i in hypotheses sentence length do if j < i then Compute the relevance of next word yi and yj according to 5 end if end for for i in source sentence length do if j ≤i + k −1 then Compute the relevance of next word yi and xj according to 4 end if end for Compute Target-Side Relevance according to 7 Compute Source-Side Relevance according to 6 Compute TSSR according to 8 TSSR is categorized into 10 intervals from 0 to INF, indicating the degree of Target Context Usage. 4.1 The Relationship between Hallucination and Target-side Context Usage Using more target context leads to more severe hallucination. Initially, we analyze the relationship between a word’s usage of the 0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 1.6 1.8 INF TSSR Interval 0.00 0.08 0.16 0.24 0.32 Word Freq Rate Hallucination Non-Hallucination (a) De-En 0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 1.6 1.8 INF TSSR Interval 0.0 0.1 0.2 0.3 0.4 Word Freq Rate Hallucination Non-Hallucination (b) Zh-En (human alignment annotation) Figure 3: Word Frequency Rate of Hallucination and Non-Hallucination in different TSSR intervals for wait1 model. target-side context and its likelihood of being a hallucination. Building upon this, we explore the HR across different TSSR intervals, as depicted in Figure 2. Our findings demonstrate that in high TSSR intervals, HR is higher compared to low TSSR intervals. It indicates that a word using more target context is more likely to be a hallucination. Further analysis revealed that when comparing different Wait- values, there is a more pronounced increase in HR from low TSSR intervals to high TSSR intervals as k decreases, as depicted in Figure 2. This means that there maybe an increased likelihood of hallucinations occurring in words that are utilized with limited source-side context Hallucination words use more target context than Non-Hallucination words. The aforementioned analysis motivates us to investigate whether hallucination words indeed exhibit a higher usage of target-side context than nonhallucination words. To explore this, we analyze the TSSR distributions of hallucination and nonhallucination word frequencies. Figure 3(a) reveals that hallucination words are concentrated on high TSSR intervals. This means the model tends to use more target-side context for the generation of a hallucination word. Furthermore, we observed that the word frequency rate of non-hallucination words is higher in the 0.8 ~1.2 TSSR range, also illustrated in Figure 3(a). Therefore, we propose that the model utilizes source-side context and 733 0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 1.6 1.8 INF TSSR Interval 0.05 0.00 0.05 0.10 Wait-1 Wait-3 Wait-5 Wait-7 Wait-9 Figure 4: Word Frequency Rate Change (∆) in different TSSR intervals with scheduled sampling training compared to the Baselines. k=1 k=3 k=5 k=7 k=9 Baselines BLEU ↑19.69 26.76 29.61 31.10 32.03 HR % ↓31.28 22.57 18.58 16.41 15.21 Scheduled- BLEU ↑20.53 27.32 30.23 31.73 32.34 Sampling HR % ↓30.85 21.62 17.84 15.16 13.84 Table 4: BLEU scores and HR of wait-k models. target-side context similarly during the generation of non-hallucination words. To further validate our claims of above analysis, we sample 100 sentences from the translation results of Zh-En using wait1 decoding for human alignment annotation. We then conduct experiments similar to Figure 3(a). The results as shown in Figure 3(b) are consistent with the conclusions drawn in automatic alignment annotation. 4.2 Increasing Source-side Context Usage via Reducing Target-side Context Usage Observing the association between hallucination and usage of target-side context, we posit that reducing this reliance might be a viable approach to mitigate the hallucination in SiMT. Inspired by (Bengio et al., 2015; Zhang et al., 2019), we adopt the scheduled sampling training to guide the models to pay more attention on the sourceside context by adding noise to the target-side context. Specifically, we randomly replace the ground truth tokens with predicted ones using a decaying probability. The results shown in Figure 4 indicate a decrease in target-context usage and an increase in source-context usage. Scheduled sampling training exhibits improvements in BLEU scores and reductions in HR as presented in Table 4. It successfully reduces hallucination words using more target-side context, but also indirectly increases hallucination words using more sourceside context, as shown in Figure 5. Therefore, a 0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 1.6 1.8 INF TSSR Interval 1500 0 1500 Wait-1 Wait-3 Wait-5 Wait-7 Wait-9 Figure 5: Hallucination Frequency Change (∆) in different TSSR intervals with scheduled sampling training compared to the Baselines. better method to flexibly handle the usage between target-side and source-side context is required. 5 Related Work In NMT, previous works have delved into the phenomenon of hallucinations(Lee et al., 2018; Müller et al., 2020; Wang and Sennrich, 2020; Raunak et al., 2021b; Zhou et al., 2021). Specifically, Voita et al. (2021) assessed the relative contributions of source and target context to predictions. Weng et al. (2020); Miao et al. (2021) argued that an important reason for hallucination is the model’s excessive attention to partial translations in NMT. Furthermore, Guerreiro et al. (2023) conducted a comprehensive study of hallucinations in NMT. Differing from these works focusing on NMT, this paper conducts a comprehensive analysis of hallucination in SiMT. 6 Conclusions This paper conducts the first comprehensive analysis of hallucinations in SiMT from two perspectives: understanding the hallucination words from both frequency and predictive distributions and their effects on the usage of target-context information. Intensive Experiments demonstrate some valuable findings: 1) the frequency distribution of hallucination words is with high entropy and their predictive distribution is with high uncertainty due to the difficulty in memorizing hallucination words during training. 2) hallucination words make use of more target-side context than source-side context, and it is possible to alleviate hallucination by decreasing the usage of target-side context. 734 Limitations We highlight four main limitations of our work. Firstly, instead of focusing on more recent adaptive policy, our analysis focuses on the hallucinations in the Wait-k Policy (Ma et al., 2019), which is the most widely used fixed policy in SiMT to ensure a simple and familiar setup that is easy to reproduce and generalize. Secondly, although we propose a simple methods to control the usage of target information, attempting to mitigate the hallucination in SiMT, we only achieve limited improvement. In the future, we will explore more flexible and robust approaches for controlling target context usage to better mitigate the hallucination and achieve greater performance. A further limitation of our study is that we exclusively analyze hallucinations as defined in Section 2, without considering detached hallucinations. This omission arises from the absence of established and reliable automated evaluation methods for detecting such detached hallucinated words. Moreover, our study is constrained by its reliance on aligner tools, potentially introducing alignment biases. Therefore, when applying our approach to datasets with lower alignment accuracy, careful consideration is warranted regarding the necessity for additional validation and adjustment. Acknowledgements We would like to thank the anonymous reviewers and meta-reviewer for their insightful suggestions. The work was supported by the National Natural Science Foundation of China under Grant 62276077, Guangdong Basic and Applied Basic Research Foundation (2024A1515011205), and Shenzhen College Stability Support Plan under Grants GXWD20220811170358002 and GXWD20220817123150002. References Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. 2015. Scheduled sampling for sequence prediction with recurrent neural networks. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 1171–1179. Roldano Cattoni, Mattia Antonino Di Gangi, Luisa Bentivogli, Matteo Negri, and Marco Turchi. 2021. Must-c: A multilingual corpus for end-to-end speech translation. Computer Speech & Language, 66:101155. Mauro Cettolo, Jan Niehues, Sebastian Stüker, Luisa Bentivogli, and Marcello Federico. 2013. Report on the 10th IWSLT evaluation campaign. In Proceedings of the 10th International Workshop on Spoken Language Translation: Evaluation Campaign, Heidelberg, Germany. Junkun Chen, Renjie Zheng, Atsuhito Kita, Mingbo Ma, and Liang Huang. 2021. Improving simultaneous translation by incorporating pseudo-references with fewer reorderings. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 5857–5864, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Hexuan Deng, Liang Ding, Xuebo Liu, Meishan Zhang, Dacheng Tao, and Min Zhang. 2022. Improving Simultaneous Machine Translation with Monolingual Data. Zi-Yi Dou and Graham Neubig. 2021. Word alignment by fine-tuning embeddings on parallel corpora. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2112–2128, Online. Association for Computational Linguistics. Maha Elbayad, Laurent Besacier, and Jakob Verbeek. 2020. Efficient wait-k models for simultaneous machine translation. In Interspeech 2020, 21st Annual Conference of the International Speech Communication Association, Virtual Event, Shanghai, China, 25-29 October 2020, pages 1461–1465. ISCA. Patrick Fernandes, Kayo Yin, Graham Neubig, and André F. T. Martins. 2021. Measuring and increasing context usage in context-aware machine translation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6467–6478, Online. Association for Computational Linguistics. Nuno M. Guerreiro, Elena Voita, and André Martins. 2023. Looking for a needle in a haystack: A comprehensive study of hallucinations in neural machine translation. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 1059–1075, Dubrovnik, Croatia. Association for Computational Linguistics. Shoutao Guo, Shaolei Zhang, and Yang Feng. 2022. Turning fixed to adaptive: Integrating post-evaluation into simultaneous machine translation. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 2264–2278, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. 735 Katherine Lee, Orhan Firat, Ashish Agarwal, Clara Fannjiang, and David Sussillo. 2018. Hallucinations in neural machine translation. In NIPS 2018 Interpretability and Robustness for Audio, Speech and Language Workshop. Xintong Li, Guanlin Li, Lemao Liu, Max Meng, and Shuming Shi. 2019. On the word alignment from neural machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1293–1303, Florence, Italy. Association for Computational Linguistics. Mengge Liu, Wen Zhang, Xiang Li, Yanzhi Tian, Yuhang Guo, Jian Luan, Bin Wang, and Shuoying Chen. 2023. Cbsimt: Mitigating hallucination in simultaneous machine translation with weighted prefixto-prefix training. ArXiv preprint, abs/2311.03672. Mingbo Ma, Liang Huang, Hao Xiong, Renjie Zheng, Kaibo Liu, Baigong Zheng, Chuanqiang Zhang, Zhongjun He, Hairong Liu, Xing Li, Hua Wu, and Haifeng Wang. 2019. STACL: Simultaneous translation with implicit anticipation and controllable latency using prefix-to-prefix framework. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3025–3036, Florence, Italy. Association for Computational Linguistics. Xutai Ma, Juan Miguel Pino, James Cross, Liezl Puzon, and Jiatao Gu. 2020. Monotonic multihead attention. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net. Mengqi Miao, Fandong Meng, Yijin Liu, Xiao-Hua Zhou, and Jie Zhou. 2021. Prevent the language model from being overconfident in neural machine translation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3456–3468, Online. Association for Computational Linguistics. Mathias Müller, Annette Rios, and Rico Sennrich. 2020. Domain robustness in neural machine translation. In Proceedings of the 14th Conference of the Association for Machine Translation in the Americas (Volume 1: Research Track), pages 151–164, Virtual. Association for Machine Translation in the Americas. Vikas Raunak, Arul Menezes, and Marcin JunczysDowmunt. 2021a. The curious case of hallucinations in neural machine translation. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1172–1183, Online. Association for Computational Linguistics. Vikas Raunak, Arul Menezes, and Marcin JunczysDowmunt. 2021b. The curious case of hallucinations in neural machine translation. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1172–1183, Online. Association for Computational Linguistics. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715– 1725, Berlin, Germany. Association for Computational Linguistics. Elena Voita, Rico Sennrich, and Ivan Titov. 2021. Analyzing the source and target contributions to predictions in neural machine translation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1126–1140, Online. Association for Computational Linguistics. Chaojun Wang and Rico Sennrich. 2020. On exposure bias, hallucination and domain shift in neural machine translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3544–3552, Online. Association for Computational Linguistics. Rongxiang Weng, Heng Yu, Xiangpeng Wei, and Weihua Luo. 2020. Towards enhancing faithfulness for neural machine translation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2675–2684, Online. Association for Computational Linguistics. Yijun Xiao and William Yang Wang. 2021. On hallucination and predictive uncertainty in conditional language generation. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2734–2744, Online. Association for Computational Linguistics. Jianhao Yan, Fandong Meng, and Jie Zhou. 2022. Probing causes of hallucinations in neural machine translations. ArXiv preprint, abs/2206.12529. Tengfei Yu, Liang Ding, Xuebo Liu, Kehai Chen, Meishan Zhang, Dacheng Tao, and Min Zhang. 2023. PromptST: Abstract prompt learning for end-to-end speech translation. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 10140– 10154, Singapore. Association for Computational Linguistics. Shaolei Zhang and Yang Feng. 2021. Universal simultaneous machine translation with mixture-ofexperts wait-k policy. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7306–7317, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. 736 Shaolei Zhang and Yang Feng. 2022a. Informationtransport-based policy for simultaneous translation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 992– 1013, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Shaolei Zhang and Yang Feng. 2022b. Modeling dual read/write paths for simultaneous machine translation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2461– 2477, Dublin, Ireland. Association for Computational Linguistics. Shaolei Zhang and Yang Feng. 2022c. Reducing position bias in simultaneous machine translation with length-aware framework. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6775–6788, Dublin, Ireland. Association for Computational Linguistics. Wen Zhang, Yang Feng, Fandong Meng, Di You, and Qun Liu. 2019. Bridging the gap between training and inference for neural machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4334–4343, Florence, Italy. Association for Computational Linguistics. Baigong Zheng, Kaibo Liu, Renjie Zheng, Mingbo Ma, Hairong Liu, and Liang Huang. 2020. Simultaneous translation policies: From fixed to adaptive. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2847–2853, Online. Association for Computational Linguistics. Chunting Zhou, Graham Neubig, Jiatao Gu, Mona Diab, Francisco Guzmán, Luke Zettlemoyer, and Marjan Ghazvininejad. 2021. Detecting hallucinated content in conditional neural sequence generation. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 1393–1404, Online. Association for Computational Linguistics. A Detailed Experimental Settings On IWSLT’14 De↔En, we train on 160K pairs, develop on 7K held out pairs.All data is tokenized and lower-cased and we segment sequences using byte pair encoding (Sennrich et al., 2016) with 10K merge operations. The resulting vocabularies are of 8.8K and 6.6K types in German and English respectively. On MuST-C Release V2.0 Zh→En2, we train on 358,853 pairs, develop on 1,349 pairs. Jieba3 are employed for Chinese word segmentation. All 2https://ict.fbk.eu/must-c-release-v2-0/ 3https://github.com/fxsjy/jieba data is tokenized by SentencePiece resulting in 32k word vocabularies in Chinese and English. Following Elbayad et al. (2020) and Zhang and Feng (2021), We train Transformer Small on IWSLT14 De→En. We train Transformer Base on MuST-C Release V2.0 Zh→En. B Experimental Results on IWSLT14 En →De Dataset B.1 Results of Word Frequency Distribution on IWSLT14 De→En Dataset Word 0 60 120 180 240 Word Frequency Hallucination Overall Figure 6: Word frequency of Hallucination and Overall on IWSLT14 De→En valid hypotheses set of wait-1. Word 0 40 80 120 160 Word Frequency Hallucination Overall Figure 7: Word frequency of Hallucination and Overall on IWSLT14 De→En valid hypotheses set of wait-3. Word 0 60 120 180 240 Word Frequency Hallucination Overall Figure 8: Word frequency of Hallucination and Overall on IWSLT14 De→En valid hypotheses set of wait-5. 737 Word 0 40 80 120 Word Frequency Hallucination Overall Figure 9: Word frequency of Hallucination and Overall on IWSLT14 De→En valid hypotheses set of wait-7. Word 0 50 100 150 200 Word Frequency Hallucination Overall Figure 10: Word frequency of Hallucination and Overall on IWSLT14 De→En valid hypotheses set of wait-9. B.2 Results of Word Frequency Rate in TSSR on IWSLT14 De→En Dataset 0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 1.6 1.8 INF TSSR Interval 0.00 0.08 0.16 0.24 0.32 Word Freq Rate Hallucination Non-Hallucination Figure 11: Word Frequency Rate of Hallucination and Non-Hallucination in different TSSR intervals for the wait-1 model. 0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 1.6 1.8 INF TSSR Interval 0.00 0.08 0.16 0.24 Word Freq Rate Hallucination Non-Hallucination Figure 12: Word Frequency Rate of Hallucination and Non-Hallucination in different TSSR intervals for the wait-1 model with WSPAlign Annotation (?). 0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 1.6 1.8 INF TSSR Interval 0.00 0.08 0.16 0.24 0.32 Word Freq Rate Hallucination Non-Hallucination Figure 13: Word Frequency Rate of Hallucination and Non-Hallucination in different TSSR intervals for the wait-3 model. 0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 1.6 1.8 INF TSSR Interval 0.00 0.08 0.16 0.24 0.32 Word Freq Rate Hallucination Non-Hallucination Figure 14: Word Frequency Rate of Hallucination and Non-Hallucination in different TSSR intervals for the wait-5 model. 0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 1.6 1.8 INF TSSR Interval 0.00 0.08 0.16 0.24 0.32 Word Freq Rate Hallucination Non-Hallucination Figure 15: Word Frequency Rate of Hallucination and Non-Hallucination in different TSSR intervals for the wait-7 model. 0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 1.6 1.8 INF TSSR Interval 0.00 0.08 0.16 0.24 0.32 Word Freq Rate Hallucination Non-Hallucination Figure 16: Word Frequency Rate of Hallucination and Non-Hallucination in different TSSR intervals for wait9 model. 738 C Experimental Results on IWSLT14 En→De Dataset 0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 1.6 1.8 INF TSSR Interval 0.00 0.08 0.16 0.24 0.32 Hallucination Rate Wait-1 Wait-3 Wait-5 Wait-7 Wait-9 Figure 17: HR on the valid set in different TSSR intervals of wait-k models. 0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 1.6 1.8 INF TSSR Interval 0.00 0.08 0.16 0.24 0.32 Word Freq Rate Hallucination Non-Hallucination Figure 18: Word Frequency Rate of Hallucination and Non-Hallucination in different TSSR intervals for the wait-1 model. 0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 1.6 1.8 INF TSSR Interval 0.00 0.08 0.16 0.24 0.32 Word Freq Rate Hallucination Non-Hallucination Figure 19: Word Frequency Rate of Hallucination and Non-Hallucination in different TSSR intervals for the wait-3 model. 0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 1.6 1.8 INF TSSR Interval 0.0 0.1 0.2 0.3 0.4 Word Freq Rate Hallucination Non-Hallucination Figure 20: Word Frequency Rate of Hallucination and Non-Hallucination in different TSSR intervals for the wait-5 model. 0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 1.6 1.8 INF TSSR Interval 0.0 0.1 0.2 0.3 0.4 Word Freq Rate Hallucination Non-Hallucination Figure 21: Word Frequency Rate of Hallucination and Non-Hallucination in different TSSR intervals for the wait-7 model. 0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 1.6 1.8 INF TSSR Interval 0.0 0.1 0.2 0.3 0.4 Word Freq Rate Hallucination Non-Hallucination Figure 22: Word Frequency Rate of Hallucination and Non-Hallucination in different TSSR intervals for wait9 model. k=1 k=3 k=5 k=7 k=9 Baselines BLEU ↑15.75 22.03 24.99 26.22 26.60 HR % ↓27.46 19.73 16.72 16.24 15.93 Scheduled- BLEU ↑16.83 22.78 25.80 26.98 27.41 Sampling HR % ↓26.19 18.58 15.66 14.96 14.81 Table 5: BLEU scores and HR of wait-k models. 0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 1.6 1.8 INF TSSR Interval 0.12 0.06 0.00 0.06 0.12 Wait-1 Wait-3 Wait-5 Wait-7 Wait-9 Figure 23: Word Frequency Rate Change (∆) in different TSSR intervals with scheduled sampling training compared to the Baselines. 739 0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 1.6 1.8 INF TSSR Interval 2000 0 2000 4000 Wait-1 Wait-3 Wait-5 Wait-7 Wait-9 Figure 24: Hallucination Frequency Change (∆) in different TSSR intervals with scheduled sampling training compared to the Baselines. D Experimental Results on MuST-C Zh→En Dataset k 1 3 5 7 9 ∞ HR % 33.96 25.31 23.22 21.84 20.73 19.43 Table 6: HR on MuST-C Zh→En valid set of wait-k, where k = ∞means Full-sentence MT. Word 0 15 30 45 Word Frequency Hallucination Overall Figure 25: Word frequency of Hallucination and Overall on valid hypotheses set of wait-1. Word 10 20 30 40 Word Frequency Hallucination Overall Figure 26: Word frequency of Hallucination and Overall on valid hypotheses set of wait-3. Word 0 15 30 45 60 Word Frequency Hallucination Overall Figure 27: Word frequency of Hallucination and Overall on valid hypotheses set of wait-5. Word 15 30 45 60 Word Frequency Hallucination Overall Figure 28: Word frequency of Hallucination and Overall on valid hypotheses set of wait-7. Word 15 30 45 60 Word Frequency Hallucination Overall Figure 29: Word frequency of Hallucination and Overall on valid hypotheses set of wait-9. k 1 3 5 7 9 Hallucination 6.57 6.52 6.35 6.29 6.23 Overall 8.23 8.44 8.49 8.53 8.52 Table 7: Word frequency distribution entropy of Hallucination and Overall on MuST-C Zh→En valid hypotheses set of wait-k. Train Ref Valid Ref Valid Hypo Train Ref 1.00 0.25 0.18 Valid Ref 0.25 1.00 0.54 Valid Hypo 0.18 0.54 1.00 Table 8: The correlation between the HR of words on the Valid Hypotheses (Valid Hypo), Valid Reference (Valid Ref) and Train Reference (Train Ref) of Hwait−1(t, a). Wait-k Valid set Training subset Uncertainty Confidence Uncertainty Confidence H NH H NH H NH H NH k=1 3.23 2.70 0.44 0.54 3.27 2.34 0.44 0.60 k=3 3.00 2.43 0.49 0.58 2.91 2.14 0.50 0.63 k=5 2.67 2.33 0.53 0.60 2.59 2.00 0.55 0.65 k=7 2.64 2.32 0.54 0.60 2.50 2.00 0.56 0.65 k=9 2.60 2.29 0.55 0.60 2.44 2.00 0.57 0.65 Table 9: The Uncertainty and Confidence of Hallucination (H) and Non-Hallucination (NH) on the valid set and training subset of wait-k models. 740 0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 1.6 1.8 INF TSSR Interval 0.00 0.15 0.30 0.45 Hallucination Rate Wait-1 Wait-3 Wait-5 Wait-7 Wait-9 Figure 30: HR on the valid set in different TSSR intervals of wait-k models. 0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 1.6 1.8 INF TSSR Interval 0.0 0.1 0.2 0.3 0.4 Word Freq Rate Hallucination Non-Hallucination Figure 31: Word Frequency Rate of Hallucination and Non-Hallucination in different TSSR intervals for the wait-1 model. 0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 1.6 1.8 INF TSSR Interval 0.0 0.1 0.2 0.3 0.4 Word Freq Rate Hallucination Non-Hallucination Figure 32: Word Frequency Rate of Hallucination and Non-Hallucination in different TSSR intervals for the wait-3 model. 0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 1.6 1.8 INF TSSR Interval 0.0 0.1 0.2 0.3 0.4 Word Freq Rate Hallucination Non-Hallucination Figure 33: Word Frequency Rate of Hallucination and Non-Hallucination in different TSSR intervals for the wait-5 model. 0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 1.6 1.8 INF TSSR Interval 0.0 0.1 0.2 0.3 0.4 Word Freq Rate Hallucination Non-Hallucination Figure 34: Word Frequency Rate of Hallucination and Non-Hallucination in different TSSR intervals for the wait-7 model. 0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 1.6 1.8 INF TSSR Interval 0.0 0.1 0.2 0.3 0.4 Word Freq Rate Hallucination Non-Hallucination Figure 35: Word Frequency Rate of Hallucination and Non-Hallucination in different TSSR intervals for wait9 model. 0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 1.6 1.8 INF TSSR Interval 0.030 0.015 0.000 0.015 Wait-1 Wait-3 Wait-5 Wait-7 Wait-9 Figure 36: Word Frequency Rate Change (∆) in different TSSR intervals with scheduled sampling training compared to the Baselines. 0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 1.6 1.8 INF TSSR Interval 0 200 400 600 Wait-1 Wait-3 Wait-5 Wait-7 Wait-9 Figure 37: Hallucination Frequency Change (∆) in different TSSR intervals with scheduled sampling training compared to the Baselines. 741 k=1 k=3 k=5 k=7 k=9 Baselines BLEU ↑12.33 15.39 16.26 16.66 16.66 HR % ↓33.96 25.31 23.22 21.84 20.73 Scheduled- BLEU ↑12.42 15.51 16.43 16.61 17.03 Sampling HR % ↓33.69 25.29 22.68 21.61 23.50 Table 10: BLEU scores and HR of wait-k models. E Examples of Hallucinations ér bù shì ji¯an x¯ın de sùi yuè Source Input 而 不是 艰辛 的 岁月 and not hard time Output And not hard work . Figure 38: Translation examples of hallucination words as defined in Section 2 under the wait-1 policy. Words highlighted in red indicate hallucinations. t¯ong guò hé g¯ong s¯ı de hé zuò Source Input 通过 和 公司 的 合作 ... by with company ’s working ... Output And by working with the company ... Figure 39: Translation examples of hallucination words as defined in Section 2 under the wait-1 policy. Words highlighted in red indicate hallucinations. when decoding the word “working”, the source-side context is “通过和公司” and this context lacks the semantic information of “working”, as it does not include the aligned word “working” in the current source-side context. Consequently, “working” can be identified as one of the hallucinated words in this output. xiˇan rán qí zh¯ong zh¯ı y¯ı de gˇan shòu bˇı lìng y¯ı gè hái chà Source Input 显然 其中 之一 的 感受 比 另 一个 还 差 。 Obviously Among them One of of feelings than another one still worse . Output Obviously , one of them feels more different than the other . Figure 40: Translation examples of hallucination words as defined in Section 2 under the wait-1 policy. nà lˇı de rén x¯u yào zhè xi¯e Source Input 那里 的 人 需要 这些 , there people need these , Output There ’s a lot of people out there who need it , Figure 41: Translation examples of hallucination words as defined in Section 2 under the wait-1 policy. F Alignment Error Rate of Awesome-Align Alignment Error Rate 7.30 % Precision 0.950 Recall 0.885 Table 11: The alignment error rate, precision, and recall of hallucination detection using Awesome-align, with human annotations as the ground truth. We report the alignment error rate as well as the precision and recall of hallucination detection using Awesome-align. Based on the precision and recall results, we believe that the automatic word alignment is suitable for detecting hallucinated words. 742
2024_short
66
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 743–753 August 11-16, 2024 ©2024 Association for Computational Linguistics Self-Augmented In-Context Learning for Unsupervised Word Translation Yaoyiran Li Anna Korhonen Ivan Vuli´c Language Technology Lab, TAL, University of Cambridge {yl711,alk23,iv250}@cam.ac.uk Abstract Recent work has shown that, while large language models (LLMs) demonstrate strong word translation or bilingual lexicon induction (BLI) capabilities in few-shot setups, they still cannot match the performance of ‘traditional’ mapping-based approaches in the unsupervised scenario where no seed translation pairs are available, especially for lower-resource languages. To address this challenge with LLMs, we propose self-augmented in-context learning (SAIL) for unsupervised BLI: starting from a zero-shot prompt, SAIL iteratively induces a set of high-confidence word translation pairs for in-context learning (ICL) from an LLM, which it then reapplies to the same LLM in the ICL fashion. Our method shows substantial gains over zero-shot prompting of LLMs on two established BLI benchmarks spanning a wide range of language pairs, also outperforming mapping-based baselines across the board. In addition to achieving state-of-the-art unsupervised BLI performance, we also conduct comprehensive analyses on SAIL and discuss its limitations. 1 Introduction and Motivation The task of word translation (WT), also known as bilingual lexicon induction (BLI), aims to automatically induce lexica of words with the same or similar meaning in different languages, thus bridging the lexical gap between languages. Even in the era of large language models (LLMs), BLI still has wide applications in machine translation and cross-lingual transfer learning (Sun et al., 2021; Zhou et al., 2021; Wang et al., 2022; Ghazvininejad et al., 2023; Jones et al., 2023). A particular BLI setup, termed (fully) unsupervised BLI, is especially compelling because it is not only more technically challenging but is also used as a pivotal component towards unsupervised machine translation (Lample et al., 2018; Artetxe et al., 2018b; Marchisio et al., 2020; Chronopoulou et al., 2021). Until recently, BLI approaches have predominantly relied on learning cross-lingual word embedding (CLWE) mappings: these are known as MAPPING-BASED approaches and are developed based on static or decontextualised word embeddings (WEs) (Patra et al., 2019; Grave et al., 2019; Li et al., 2022a; Yu et al., 2023). Meanwhile, autoregressive LLMs have become the cornerstone of modern NLP techniques (Brown et al., 2020; Ouyang et al., 2022; Touvron et al., 2023a) with success in many real-world tasks (Kasneci et al., 2023; Wu et al., 2023; Thirunavukarasu et al., 2023; Li et al., 2024). Given this trend, recent BLI research has also started to shift towards exploring LLMs. Li et al. (2023) first show that prompting LLMs with gold-standard WT pairs as in-context examples (few-shot in-context learning: ICL) outperforms all existing BLI approaches in the supervised and semi-supervised BLI setups (where typically 1K∼5K gold-standard WT pairs are available for training or ICL), while zero-shot prompting still falls behind traditional MAPPING-BASED approaches in the fully unsupervised BLI setup, especially for lower-resource languages. In this work, we thus aim at improving unsupervised BLI with LLMs. To this end, we analyze the limitations of zero-shot prompting and propose a novel self-augmented in-context learning (SAIL) method for unsupervised BLI with LLMs. The key idea is to first retrieve a set of high-confidence WT pairs by zero-shot prompting LLMs, then iteratively refine the high-confidence dictionary and finally use the gradually refined bilingual lexicon for BLI inference in an ICL fashion (§2). Our extensive experiments show that SAIL establishes new state-of-the-art unsupervised BLI performance on two standard BLI benchmarks. We also conduct thorough analyses on our approach, providing further insights into its inner workings (§3§4). Our code is publicly available at https: //github.com/cambridgeltl/sail-bli. 743 2 Methodology Unsupervised BLI: Task Preliminaries. We assume a pair of two languages: a source language Lx with its vocabulary X and a target language Ly with vocabulary Y. In a typical, standard BLI setup the vocabulary of each language contains the most frequent 200, 000 word types in the language (Glavaš et al., 2019; Li et al., 2022a). Given a source word wx ∈X, the unsupervised BLI task then aims to infer its translation in Ly, without any word-level parallel data (i.e., seed translation pairs from a lexicon) available for training or ICL.1 Zero-Shot Prompting. Li et al. (2023) have proposed to prompt autoregressive LLMs for the BLI task, where the input word wx is embedded into a predefined text template. We adopt the pool of templates provided by Li et al. (2023) and conduct template search for each LLM on a randomly chosen language pair. As an example, the zero-shot template for LLAMA-27B is as follows:2 ‘The Lx word wx in Ly is:’, where Lx, Ly, and wx are placeholders for the source language, target language, and the query word in the source language (e.g., Lx = Hungarian, wx = macska, Ly = Catalan). The deterministic beam search (with beam size of n as a hyper-parameter) is adopted to generate n output text pieces in the final beam, ranked by their sequence scores.3 For each of the n outputs, the first word in the generated output following the input sequence is extracted as a candidate answer. After filtering out those candidate answers not in Y, the candidate Ly word with the highest associated sequence score is returned as the final word translation prediction. Limitations of Zero-Shot Prompting. The above zero-shot approach for unsupervised BLI, proposed by Li et al. (2023), comes with several limitations. First, the template does not stipulate the output format and thus parsing the output text may not be as straightforward as expected. Put simply, LLM’s prediction may not be the first word in the generated sequence. Second, the LLM may not fully ‘understand’ the input template and sometimes may 1Following prior work, when wx has multiple ground truth translations in Ly, a prediction is considered correct if it is any of the ground truth answers. 2The full list of templates used for other LLMs are presented in Appendix C. 3We use n = 5 following Li et al. (2023). tend not to generate words in the target language especially for lower-resource languages. For the supervised BLI setup, where a dictionary of gold standard translation pairs is assumed and available, few-shot in-context learning can substantially improve final BLI performance (Li et al., 2023), since it not only provides examples of the desired output format but also helps LLMs ‘understand’ the BLI task. However, the availability of such a seed dictionary is not assumed in the unsupervised BLI task variant, and the key idea of this work is to derive and iteratively refine a seed dictionary by prompting LLMs. SAIL: Self-Augmented In-Context Learning for Unsupervised BLI. We thus propose to facilitate and improve unsupervised BLI by S1) using zero-shot prompting to retrieve Dh, a set of highconfidence translation pairs, and then S2) leveraging these pairs as ‘self-augmented’ in-context examples for few-shot prompting to further iteratively refine Dh (across 0 to Nit −1 iterations, where Nit is a hyper-parameter denoting total times of Dh inference in S1 and S2), and finally S3) conducting few-shot learning with the final, Nit-th self-created seed lexicon Dh for BLI inference on the test set. Deriving High-Confidence Pairs. For both steps S1 and S2 outlined above, we start with the most frequent Nf words in Lx since representations of less frequent words are considered to be much noisier in general (Artetxe et al., 2018a). For each wx, we conduct Lx →Ly translation: we refer to this predicted word as ˆwy.4 We then propose to conduct word back-translation, translating ˆwy from Ly back into Lx. The word pair (wx, ˆwy) is considered a high-confidence pair only if wx is also the output word of the back-translation step.5 We denote the set of all high-confidence pairs from the Lx words as Dx h. Likewise, we also start from the most frequent Nf words in Ly and symmetrically derive Dy h. Finally, we update the high-confidence dictionary with Dh = Dx h ∪Dy h.6 Few-Shot Prompting with High-Confidence Pairs. Step S1 of SAIL relies on zero-shot prompting, but all the subsequent iterations in S2 and 4We do not require ˆwy to be one of the most frequent Nf words in Ly. 5Earlier MAPPING-BASED approaches have retrieved highconfidence pairs through ranking cross-lingual word similarity scores (e.g., cosine similarity) to refine CLWE mappings (Artetxe et al., 2018a; Li et al., 2022a); in a sense, our work renovates and revitalises the idea with LLMs. 6Therefore, |Dx h| ≤Nf, |Dy h| ≤Nf, and |Dh| ≤2×Nf. 744 S3 apply few-shot prompting/ICL with the ‘selfaugmented’ high-confidence translation pairs Dh. Following Li et al. (2023), we adopt 5-shot prompting, and again conduct template search on the BLI task with a single, randomly selected language pair.7 The in-context examples, (wx i , wy i ) ∈ Dh, 1 ≤i ≤5, are retrieved where the wx i words are the nearest neighbours of the input word wx in Lx’s static word embedding space. The few-shot template for LLAMA-27B is then as follows: ‘The Lx word wx 1 in Ly is wy 1. The Lx word wx 2 in Ly is wy 2. ... The Lx word wx in Ly is’. 3 Experimental Setup BLI Data and LLMs. We adopt two standard BLI benchmarks: 1) 5 languages from XLING (Glavaš et al., 2019) including German (DE), English (EN), French (FR), Italian (IT), and Russian (RU), their combinations resulting in 20 BLI directions; 2) 3 lower-resource languages including Bulgarian (BG), Catalan (CA), and Hungarian (HU) from PanLex-BLI (Vuli´c et al., 2019), which result in 6 BLI directions.8 For both benchmarks, a test set of 2K WT pairs is provided for each BLI direction. We experiment with four open-source LLMs: LLAMA 7B, LLAMA-27B, LLAMA 13B, and LLAMA-213B (Touvron et al., 2023a,b). Li et al. (2023) found that 4 other families of LLMs, including mT5, mT0, mGPT and XGLM, underperform LLAMA; we thus skip these LLMs in our work. Implementation Details and BLI Evaluation. As mentioned in §2, our hyper-parameter and template search are conducted on a single, randomly selected language pair, which is DE-FR, following Li et al. (2023). Batch size is set to 1. We adopt Nit = 1, Nf = 5, 000 in our main experiments (§4.1) and then investigate their influence on BLI performance and the effectiveness of our proposed word back-translation in our further analyses (§4.2). Half-precision floating-point format (torch.float16) is adopted for all our SAIL and zero-shot experiments. Since our method does not imply any randomness, all results are from single runs. For evaluation, we adopt the standard top-1 accuracy as prior work. 7The decoding and output parsing strategy is the same as in zero-shot prompting. 8The two datasets are also used in many recent BLI works (Sachidananda et al., 2021; Aboagye et al., 2022; Li et al., 2022a,b; Vuli´c et al., 2020, 2023; Li et al., 2023). Baselines. We adopt two established MAPPINGBASED baselines. 1) VECMAP is a representative unsupervised BLI approach and features a selflearning mechanism that refines linear maps for deriving CLWEs (Artetxe et al., 2018a). 2) CONTRASTIVEBLI learns CLWEs with a two-stage contrastive learning framework and is the strongest MAPPING-BASED approach for supervised and semisupervised BLI tasks on our two benchmarks (Li et al., 2022a); however, it does not support unsupervised setup. We extend CONTRASTIVEBLI to unsupervised BLI by initialising the initial map with the unsupervised VECMAP method. The CONTRASTIVEBLI C1 variant based on static WEs and its stronger C2 variant combining static and decontextualised WEs are both used as our baselines. We adopt Cross-domain Similarity Local Scaling (CSLS) retrieval (Lample et al., 2018) for all MAPPING-BASED approaches as recommended in the baselines. In addition, we report 3) ZERO-SHOT prompting with each of our LLMs as baselines following the previous findings of Li et al. (2023). 4 Results and Discussion 4.1 Main Results Results on the Two BLI Benchmarks are summarised in Tables 1 and 2 respectively, with full BLI scores per each individual language pair in Tables 8 and 9 in Appendix F. As the main findings, 1) our SAIL shows consistent gains against ZERO-SHOT prompting for each of the 4 LLMs, showing the effectiveness of the proposed approach; 2) while ZERO-SHOT prompting still lags behind MAPPINGBASED approaches on PanLex-BLI’s lower-resource languages, applying SAIL outperforms MAPPINGBASED baselines across the board. The only exception is that CONTRASTIVEBLI (C2) still has a slight edge over SAIL with the weakest LLM overall, LLAMA 7B. 3) Among the 4 LLMs, LLAMA-213B presents the strongest BLI capability. Variance and Statistical Significance. The whole SAIL method does not imply any variance due to randomness: it does not rely on any actual LLM fine-tuning; we adopt deterministic beam search; the deterministic nearest neighbour retrieval is used for deriving in-context examples. Here, we report the statistical significance with χ2 tests. When comparing SAIL and ZERO-SHOT prompting (both with LLAMA-213B), the p-value is 1.1e-251 on 20 XLING BLI directions and 2.7e-109 on 6 PanLex-BLI BLI directions. We then compare 745 [Unsupervised BLI] DE EN FR IT RU AVG. MAPPING-BASED VECMAP 44.14 51.7 51.51 51.03 34.36 46.55 CONTRASTIVEBLI (C1) 44.72 52.12 52.29 51.77 35.5 47.28 CONTRASTIVEBLI (C2) 46.02 53.32 53.26 52.99 37.26 48.57 ZERO-SHOT LLAMA 7B 41.94 50.16 48.25 46.91 40.04 45.46 LLAMA-27B 43.91 52.7 50.68 48.23 42.8 47.66 LLAMA 13B 45.39 53.35 52.39 50.58 41.74 48.69 LLAMA-213B 47.12 55.02 51.31 52.02 43.09 49.71 SAIL (Ours) LLAMA 7B 51.39 61.92 58.92 56.94 50.7 55.97 LLAMA-27B 53.81 64.12 61.09 59.96 53.77 58.55 LLAMA 13B 55.35 64.84 62.49 61.27 54.5 59.69 LLAMA-213B 57.69 67.0 64.11 63.18 57.04 61.8 Table 1: Main results on the 20 XLING BLI directions. For each language, the average accuracy scores over 8 BLI directions (i.e., going from and going to other 4 languages) is reported. See also Appendix F. [Unsupervised BLI] BG CA HU AVG. MAPPING-BASED VECMAP 37.22 36.27 36.89 36.8 CONTRASTIVEBLI (C1) 36.7 35.86 37.82 36.79 CONTRASTIVEBLI (C2) 38.87 38.48 40.54 39.3 ZERO-SHOT LLAMA 7B 27.9 28.87 27.18 27.98 LLAMA-27B 28.2 27.21 26.92 27.45 LLAMA 13B 27.49 30.61 28.2 28.77 LLAMA-213B 29.08 32.38 30.53 30.66 SAIL (Ours) LLAMA 7B 37.02 37.63 36.29 36.98 LLAMA-27B 40.06 40.51 40.22 40.27 LLAMA 13B 41.71 42.76 42.07 42.18 LLAMA-213B 45.4 46.26 44.88 45.51 Table 2: Main results on 6 PanLex-BLI BLI directions. For each language, the average accuracy scores over 4 BLI directions (i.e., going from and going to other 2 languages) is reported. See also Appendix F. SAIL (with LLAMA-213B) against CONTRASTIVEBLI (C2) which is our strongest MAPPING-BASED baseline: the p-values are 3.1e-300 and 7.8e-20 respectively. These show that our findings are strongly statistically significant.9 4.2 Further Analyses Inspection of High-Confidence Dictionaries. To provide additional insight into our SAIL approach, we present statistics on the size of high-confidence dictionaries derived in our main experiments 9Usually p < 0.05 or p < 0.001 is considered to indicate statistical significance. LLM (SAIL) |Dh|: XLING |Dh|: PanLex-BLI MEAN MIN∼MAX MEAN MIN∼MAX LLAMA 7B 2471 1731∼3180 1735 1363∼2095 LLAMA-27B 3019 2086∼3824 1873 1690∼2183 LLAMA 13B 2850 2064∼3579 2005 1548∼2351 LLAMA-213B 2612 1577∼3362 1737 1184∼2049 Table 3: Statistics on |Dh| for each LLM over 20 XLING BLI directions and 6 PanLex-BLI BLI directions respectively. (Nit = 1, Nf = 5, 000, and with word backtranslation) over 20 XLING BLI directions and 6 PanLex-BLI BLI directions respectively for each of our four LLMs in Table 3. The values indicate that |Dh| of higher-resource languages (XLING) is typically greater than that of lower-resource languages (PanLex-BLI). In addition to the dictionary size, it is also worth investigating the quality of highconfidence dictionaries. However, to directly evaluate the quality of the ‘silver standard’ generated dictionaries is difficult since we do not have ground truth dictionaries for comparison. As a preliminary investigation, we randomly sample 50 translation pairs from the EN-DE LLAMA-213B-augmented dictionary and compare them with answers derived from Google Translate10 (EN→DE). We found that 40 out of the 50 pairs in our augmented dictionary are the same as the results from Google Translate. Although these results from Google Translate are also not ‘gold standard’ ground truth, it does point in the direction of reliability of extracted WT pairs. Impact of Nit. Figure 1 shows the influence of the number of iterations Nit on the average BLI scores on XLING. When Nit = 1, where only step S1 is executed (see §2), SAIL already approaches (almost) its optimal performance. Further refining the Dh for more iterations (step S2) only leads to small fluctuations in BLI performance, which we deem not worth the increased computational cost. Figure 3 (Appendix B) with results on PanLex-BLI shows a similar trend. Impact of Nf. We then study the impact of the frequency threshold Nf on the average BLI performance with a subset of XLING spanning DE-FR, EN-RU and RU-FR, each in both directions. The results in Figure 2 reveal that even with Nf = 1, 000, the BLI performance is boosted substantially when compared against the ZERO-SHOT baseline (i.e., when Nf = 0). When we further increase Nf, the 10https://translate.google.com/ 746 0 1 2 3 4 Nit 45 50 55 60 Accuracy×100% LLaMA-213B LLaMA13B LLaMA-27B LLaMA7B Figure 1: Top-1 accuracy (×100%) averaged over 20 XLING BLI directions with respect to Nit. Nit = 0 yields the ZERO-SHOT baseline. 0 1000 2000 3000 4000 5000 6000 7000 8000 9000 10000 Nf 45 50 55 60 Accuracy×100% LLaMA-213B LLaMA-27B Figure 2: Top-1 accuracy on a subset of XLING with respect to Nf. Nf = 0 yields the ZERO-SHOT baseline. LLM ZERO-SHOT SAIL (w/o back-translation) SAIL LLAMA-27B 45.36 52.9 56.12 LLAMA-213B 46.26 55.1 59.31 Table 4: BLI results on XLING, demonstrating the usefulness of back-translation when constructing Dh. Top-1 accuracy (×100%) scores. accuracy score still increases slowly, and the gain seems negligible with Nf ≥5000: i.e., increasing Nf again may not be worth the extra computation. Impact of Word Back-Translation. The backtranslation step aims to improve the quality of Dh. Here, we experiment with the ablated version of SAIL without back-translation on the same XLING subset (DE-FR, EN-RU and RU-FR) as before. The results in Table 4 clearly demonstrate the effectiveness of proposed word back-translation: the p-values (χ2 tests) are 8.8e-7 and 1.0e-10 respectively for LLAMA-27B and LLAMA-213B when comparing SAIL variants with and without the backtranslation mechanism. CHATGPT for BLI? We additionally report GPT3.5 (OpenAI, 2022) and GPT-4 (Achiam et al., 2023) results on DE-FR, EN-RU and RU-FR with ZERO-SHOT prompting (see Appendix E for exBLI Direction LLAMA-213B GPT-3.5 GPT-4 LLAMA-213B ZERO-SHOT SAIL DE→FR 46.64 59.52 62.6 61.5 FR→DE 50.8 58.41 60.63 56.29 EN→RU 47.6 55.85 55.9 63.75 RU→EN 51.44 59.93 60.35 59.93 RU→FR 41.17 59.77 61.39 60.29 FR→RU 39.94 46.82 49.35 54.11 Avg. 46.26 56.72 58.37 59.31 Table 5: Comparisons with GPT models. perimental details). Note that the procedure of instruction-tuning of LLMs usually covers largescale parallel data for machine translation. Therefore, leveraging CHATGPT models, even with ZEROSHOT prompting, is not in line with the motivation of unsupervised BLI and leads to unfair comparisons with the results of our main experiments and baselines.11 Here, we report CHATGPT results as an upper bound for ZERO-SHOT prompting. Our results in Table 5 show that 1) as expected, the instruction-tuned CHATGPT models outperform pretrained LLAMA-213B by a large margin in the ZEROSHOT setup, but 2) our SAIL method with the same pretrained LLAMA-213B outperforms both GPT-3.5 and the state-of-the-art GPT-412 in terms of the average performance, even for the selected higherresource languages, again demonstrating the effectiveness of the proposed SAIL approach. 5 Conclusion We proposed Self-Augmented In-Context Learning (SAIL) to improve unsupervised BLI with LLMs. The key idea is to iteratively retrieve a set of highconfidence word translation pairs by prompting LLMs and then leverage the retrieved pairs as incontext examples for unsupervised BLI. Our experiments on two standard BLI benchmarks showed that the proposed SAIL method substantially outperforms established MAPPING-BASED and ZERO-SHOT BLI baselines. We also conducted a series of indepth analyses on the high-confidence dictionary, key hyper-parameters, and the back-translation mechanism, and we additionally show that our SAIL approach with LLAMA-213B can even outperform ZERO-SHOT prompting with the state-of-the-art GPT-4 model. 11The four LLAMA models used in our main experiments are pretrained LLMs without instruction-tuning (see Appendix D); our MAPPING-BASED baselines adopt static WEs derived from monolingual corpora of respective languages and our CONTRASTIVEBLI (C2) baseline additionally leverages pretrained mBERT (Devlin et al., 2019). 12We adopt the strong ‘gpt-4-turbo-2024-04-09’ model which ranked 1st on the LMSYS Chatbot Arena Leaderboard at the time of experimentation (May 12, 2024). 747 Limitations The main limitation of this work, inherited from prior work as well (Li et al., 2023) is that the scope of our languages is constrained to the languages supported (or ‘seen’) by the underlying LLMs. For example, LLAMA-2 is reported to support only around 27 natural languages (Touvron et al., 2023b). This limitation could be mitigated if more advanced LLMs that support more languages are available in the future. It might also be feasible to adapt existing LLMs to more languages by fine-tuning on their monolingual corpora potentially combined with modern cross-lingual transfer learning techniques, whereas such adaptations of LLMs to unseen languages extend way beyond this work focused on the BLI task. In addition, compared to the ZERO-SHOT baseline, our SAIL framework organically requires more computational time and budget, as reported in Table 7 of Appendix D. Moreover, the SAIL framework is proposed and evaluated for the unsupervised BLI task. This work does not discuss if and how adapted variants of SAIL could also be applied to other NLP tasks beyond BLI. Further, the SAIL method should be equally applicable in weakly supervised BLI setups (Vuli´c et al., 2019) where a tiny set of available seed word translations (e.g., 50-500 word pairs) can be assumed to seed the iterative procedure. We leave this to future work. Acknowledgements We thank the anonymous reviewers for their valuable feedback. Yaoyiran Li is supported by Grace & Thomas C. H. Chan Cambridge International Scholarship. Anna Korhonen is supported by the UK Research and Innovation (UKRI) Frontier Research Grant EP/Y031350/1 (the UK government’s funding guarantee for ERC Advanced Grants). Ivan Vuli´c is supported by a personal Royal Society University Research Fellowship Inclusive and Sustainable Language Technology for a Truly Multilingual World’ (no 221137). References Prince Osei Aboagye, Jeff Phillips, Yan Zheng, Junpeng Wang, Chin-Chia Michael Yeh, Wei Zhang, Liang Wang, and Hao Yang. 2022. Normalization of language embeddings for cross-lingual alignment. In International Conference on Learning Representations. Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. Gpt-4 technical report. arXiv preprint arXiv:2303.08774. Anonymous. 2023. Dm-bli: Dynamic multiple subspaces alignment for unsupervised bilingual lexicon induction. OpenReview Preprint. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018a. A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 789–798, Melbourne, Australia. Association for Computational Linguistics. Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2018b. Unsupervised neural machine translation. In International Conference on Learning Representations. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877–1901. Curran Associates, Inc. Alexandra Chronopoulou, Dario Stojanovski, and Alexander Fraser. 2021. Improving the lexical ability of pretrained language models for unsupervised neural machine translation. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 173–180, Online. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Marjan Ghazvininejad, Hila Gonen, and Luke Zettlemoyer. 2023. Dictionary-based phrase-level prompting of large language models for machine translation. arXiv preprint arXiv:2302.07856. 748 Goran Glavaš, Robert Litschko, Sebastian Ruder, and Ivan Vuli´c. 2019. How to (properly) evaluate crosslingual word embeddings: On strong baselines, comparative analyses, and some misconceptions. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 710–721, Florence, Italy. Association for Computational Linguistics. Edouard Grave, Armand Joulin, and Quentin Berthet. 2019. Unsupervised alignment of embeddings with wasserstein procrustes. In Proceedings of the TwentySecond International Conference on Artificial Intelligence and Statistics, volume 89 of Proceedings of Machine Learning Research, pages 1880–1890. PMLR. Alex Jones, Isaac Caswell, Ishank Saxena, and Orhan Firat. 2023. Bilex rx: Lexical data augmentation for massively multilingual machine translation. arXiv preprint arXiv:2303.15265. Enkelejda Kasneci, Kathrin Sessler, Stefan Küchemann, Maria Bannert, Daryna Dementieva, Frank Fischer, Urs Gasser, Georg Groh, Stephan Günnemann, Eyke Hüllermeier, Stephan Krusche, Gitta Kutyniok, Tilman Michaeli, Claudia Nerdel, Jürgen Pfeffer, Oleksandra Poquet, Michael Sailer, Albrecht Schmidt, Tina Seidel, Matthias Stadler, Jochen Weller, Jochen Kuhn, and Gjergji Kasneci. 2023. Chatgpt for good? on opportunities and challenges of large language models for education. Learning and Individual Differences, 103:102274. Guillaume Lample, Alexis Conneau, Marc’Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. 2018. Word translation without parallel data. In Proceedings of the International Conference on Learning Representations. Yaoyiran Li, Anna Korhonen, and Ivan Vuli´c. 2023. On bilingual lexicon induction with large language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 9577–9599, Singapore. Association for Computational Linguistics. Yaoyiran Li, Fangyu Liu, Nigel Collier, Anna Korhonen, and Ivan Vuli´c. 2022a. Improving word translation via two-stage contrastive learning. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4353–4374, Dublin, Ireland. Association for Computational Linguistics. Yaoyiran Li, Fangyu Liu, Ivan Vuli´c, and Anna Korhonen. 2022b. Improving bilingual lexicon induction with cross-encoder reranking. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 4100–4116, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Yaoyiran Li, Xiang Zhai, Moustafa Alzantot, Keyi Yu, Ivan Vuli´c, Anna Korhonen, and Mohamed Hammad. 2024. Calrec: Contrastive alignment of generative llms for sequential recommendation. arXiv preprint arXiv:2405.02429. Sasha Luccioni, Victor Schmidt, Alexandre Lacoste, and Thomas Dandres. 2019. Quantifying the carbon emissions of machine learning. In NeurIPS 2019 Workshop on Tackling Climate Change with Machine Learning. Kelly Marchisio, Kevin Duh, and Philipp Koehn. 2020. When does unsupervised machine translation work? In Proceedings of the Fifth Conference on Machine Translation, pages 571–583, Online. Association for Computational Linguistics. OpenAI. 2022. Openai: Introducing chatgpt. URL https://openai.com/blog/chatgpt. Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F Christiano, Jan Leike, and Ryan Lowe. 2022. Training language models to follow instructions with human feedback. In Advances in Neural Information Processing Systems, volume 35, pages 27730–27744. Curran Associates, Inc. Barun Patra, Joel Ruben Antony Moniz, Sarthak Garg, Matthew R. Gormley, and Graham Neubig. 2019. Bilingual lexicon induction with semi-supervision in non-isometric embedding spaces. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 184–193, Florence, Italy. Association for Computational Linguistics. Vin Sachidananda, Ziyi Yang, and Chenguang Zhu. 2021. Filtered inner product projection for crosslingual embedding alignment. In International Conference on Learning Representations. Jimin Sun, Hwijeen Ahn, Chan Young Park, Yulia Tsvetkov, and David R. Mortensen. 2021. Crosscultural similarity features for cross-lingual transfer learning of pragmatically motivated tasks. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2403–2414, Online. Association for Computational Linguistics. Arun James Thirunavukarasu, Darren Shu Jeng Ting, Kabilan Elangovan, Laura Gutierrez, Ting Fang Tan, and Daniel Shu Wei Ting. 2023. Large language models in medicine. Nature medicine, 29(8):1930– 1940. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023a. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay 749 Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023b. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288. Ivan Vuli´c, Goran Glavaš, Fangyu Liu, Nigel Collier, Edoardo Maria Ponti, and Anna Korhonen. 2023. Probing cross-lingual lexical knowledge from multilingual sentence encoders. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 2089– 2105, Dubrovnik, Croatia. Association for Computational Linguistics. Ivan Vuli´c, Goran Glavaš, Roi Reichart, and Anna Korhonen. 2019. Do we really need fully unsupervised cross-lingual embeddings? In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4407–4418, Hong Kong, China. Association for Computational Linguistics. Ivan Vuli´c, Edoardo Maria Ponti, Robert Litschko, Goran Glavaš, and Anna Korhonen. 2020. Probing pretrained language models for lexical semantics. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7222–7240, Online. Association for Computational Linguistics. Xinyi Wang, Sebastian Ruder, and Graham Neubig. 2022. Expanding pretrained models to thousands more languages via lexicon-based adaptation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 863–877, Dublin, Ireland. Association for Computational Linguistics. Zongxiao Wu, Yizhe Dong, Yaoyiran Li, and Baofeng Shi. 2023. Unleashing the power of text for credit default prediction: Comparing human-generated and ai-generated texts. Available at SSRN 4601317. Shenglong Yu, Wenya Guo, Ying Zhang, and Xiaojie Yuan. 2023. Cd-bli: Confidence-based dual refinement for unsupervised bilingual lexicon induction. In Natural Language Processing and Chinese Computing, pages 379–391, Cham. Springer Nature Switzerland. Yucheng Zhou, Xiubo Geng, Tao Shen, Wenqiang Zhang, and Daxin Jiang. 2021. Improving zero-shot cross-lingual transfer for multilingual question answering over knowledge graph. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5822–5834, Online. Association for Computational Linguistics. 750 A Languages Family Language Code Germanic English EN German DE Romance Catalan CA French FR Italian IT Slavic Bulgarian BG Russian RU Uralic Hungarian HU Table 6: Languages used in our experiments with their ISO 639-1 codes. B Impact of Nit with PanLex-BLI 0 1 2 3 4 Nit 25 30 35 40 45 Accuracy×100% LLaMA-213B LLaMA13B LLaMA-27B LLaMA7B Figure 3: Top-1 accuracy (×100%) averaged over 6 PanLex-BLI BLI directions with respect to Nit. Nit = 0 yields the ZERO-SHOT baseline. C Templates Li et al. (2023) provide the suggested (carefully searched) templates for LLAMA 7B and LLAMA 13B, which we directly adopt in our work. For LLAMA27B and LLAMA-213B, we conduct template search following Li et al. (2023) on a single language pair DE-FR in both directions. For CHATGPT models used in §4.2, details about their templates are provided in Appendix E. Zero-Shot Template. LLAMA 7B, LLAMA-27B and LLAMA-213B share the same zero-shot template as introduced in §2. LLAMA 13B’s zero-shot template is as follows: ‘Translate from Lx to Ly: wx=>’. Few-Shot Template.. We have introduced the fewshot template of LLAMA-27B in §2. The remaining three LLMs happen to share the same few-shot template, given as follows: ‘The Lx word 'wx 1' in Ly is wy 1. The Lx word 'wx 2' in Ly is wy 2. ... The Lx word 'wx' in Ly is’. D Reproducibility Checklist • Source Code: our code is publicly available at https://github.com/cambridgeltl/sail-b li. • Hyper-Parameter Search: Nit is selected from {1, 2, 3, 4} and Nf from {1000, 2000, 3000, 4000, 5000, 6000, 7000, 8000, 9000, 10000}. • Software: Python 3.9.7, PyTorch 1.10.1, Transformers 4.28.1, OpenAI 1.28.1. • Computing Infrastructure: we run our codes on Wilkes3, a GPU cluster hosted by the University of Cambridge. Each run makes use of a single Nvidia 80GB A100 GPU and 32× CPU cores. • Half-Precision Floating-Point Format: as introduced in §3, our BLI inference relies on torch.float16 for both our SAIL and the ZEROSHOT baseline. We have verified that fp16 can accelerate our computation with only negligible impact on the absolute BLI performance. Note that Li et al. (2023) did not specify torch.float16 in their ZERO-SHOT experiments with LLAMA 7B and LLAMA 13B, so the BLI scores reported are slightly different from ours. • Data, WEs, LLMs: all the BLI data, WEs, LLMs (excluding CHATGPT models) and baseline codes are open-source and publicly available. The WEs for retrieving in-context examples are fastText WEs (Bojanowski et al., 2017) trained on monolingual corpora of respective languages: the version pretrained on Wikipedia13 is used for XLING and the version pretrained with Wikipedia plus Common Crawl14 is used for PanLex-BLI, as recommended by XLING and PanLex-BLI, respectively. The same WEs are used for our MAPPING-BASED baselines. The LLMs used in our main experiments (LLAMA models) are summarised in Table 7. Note that we only adopt pretrained versions of LLAMA (e.g., ‘meta-llama/Llama-2-7b-hf’) rather than the instruction-tuned models (e.g., ‘meta-llama/Llama-2-7b-chat-hf’). The details of CHATGPT models used in §4.2 are provided in Appendix E. 13https://fasttext.cc/docs/en/pretrained-vecto rs.html 14https://fasttext.cc/docs/en/crawl-vectors.h tml 751 • Baselines: for every baseline, we use its recommended setup for unsupervised BLI and make sure the recommended setup achieves its own (near)optimal performance. As introduced in §3, we extend CONTRASTIVEBLI to the unsupervised BLI setup. Specifically, we adopt the set of its hyperparameters recommended for the weakly supervised BLI setup, which we found can also achieve strong unsupervised BLI performance. • Parameter Count and Runtime: we report the number of parameters of each LLM and the GPU runtime for BLI inference on a single BLI direction DE→FR, which contains circa 2K word pairs, in Table 7. • Carbon Footprint: our work consumes about 750 A100 GPU hours in total. We estimate that our experiments causes the emission of circa 90kg CO2 equivalents according to a publicly available ‘machine learning emissions calculator’ (Luccioni et al., 2019)15. E Details of CHATGPT Experiments We run our CHATGPT experiments introduced in §4.2 with the OpenAI API.16 The model ID for GPT-3.5 is ‘gpt-3.5-turbo-0125’. For GPT-4, we adopt the state-of-the-art ‘gpt-4-turbo-2024-04-09’ model which ranked 1st on the LMSYS Chatbot Arena Leaderboard at the time of experimentation (May 12, 2024). Our input to CHATGPT consists of two types of input messages: a system message followed by a user message. For the user message, we adopt the following template for both GPT-3.5 and GPT-4 as recommended in Anonymous (2023): ‘Translate the Lx word wx into Ly:’, which is also selected from the template pool of Li et al. (2023). We additionally adopt the following system message which is not used in Anonymous (2023) or Li et al. (2023): ‘Please complete the following sentence and only output the target word.’. In our preliminary investigation, we find that our system message can considerably improve the BLI performance of both CHATGPT models. 15https://mlco2.github.io/impact/#compute 16https://platform.openai.com/docs/overview There are two hyper-parameters used in our API calls: temperature = 0 and max_tokens = 5. Like our main experiments, we also extract the first word in the generated output sequence as the prediction for the target word. But different from our LLAMA experiments, we only derive a single output sequence from the CHATGPT API for each prompt. The code for our CHATGPT experiments is also provided in our GitHub repository. F Full BLI Results Table 8 shows detailed BLI scores for each BLI direction in the XLING dataset. Similarly, individual per-direction results on PanLex-BLI are presented in Table 9. 752 LLM Model ID Parameter Count Runtime: ZERO-SHOT Runtime: SAIL LLAMA 7B ‘huggyllama/llama-7b’ 6, 738, 415, 616 5 min 40 min LLAMA-27B ‘meta-llama/Llama-2-7b-hf’ 6, 738, 415, 616 5 min 40 min LLAMA 13B ‘huggyllama/llama-13b’ 13, 015, 864, 320 6 min 49 min LLAMA-213B ‘meta-llama/Llama-2-13b-hf’ 13, 015, 864, 320 6 min 49 min Table 7: LLMs adopted in our work with their huggingface.co model IDs, parameter count, and GPU runtime on a single BLI direction for ZERO-SHOT prompting and SAIL respectively. [Unsupervised BLI] VECMAP CONTRASTIVEBLI (C1) CONTRASTIVEBLI (C2) LLAMA 7B LLAMA-27B LLAMA 13B LLAMA-213B LLAMA 7B LLAMA-27B LLAMA 13B LLAMA-213B MAPPING-BASED ZERO-SHOT SAIL (Ours) DE→FR 48.98 50.39 51.8 42.46 44.44 47.37 46.64 54.67 54.77 58.37 61.5 FR→DE 43.97 43.61 44.9 43.2 45.47 48.11 50.8 50.08 54.16 54.47 56.29 DE→IT 48.41 49.77 50.23 42.78 42.78 46.06 48.51 53.36 54.25 57.38 59.05 IT→DE 44.03 43.93 45.43 38.6 41.55 44.39 45.27 46.15 51.63 52.2 52.92 DE→RU 25.67 28.22 31.09 30.41 35.32 32.76 36.62 45.12 46.9 48.98 51.59 RU→DE 39.13 40.02 41.33 43.53 44.68 43.11 42.12 46.83 50.55 50.65 53.9 EN→DE 48.4 47.45 47.4 52.0 52.1 54.35 59.85 59.55 61.75 62.8 65.05 DE→EN 54.51 54.36 55.97 42.57 44.91 46.95 47.16 55.35 56.44 57.96 61.24 EN→FR 60.15 61.05 61.25 57.6 62.65 62.65 61.75 72.6 73.8 75.85 76.35 FR→EN 61.25 62.34 63.58 54.58 55.56 57.27 53.03 63.68 65.13 65.29 66.63 EN→IT 57.4 57.6 58.75 58.95 60.85 60.4 65.8 71.7 73.0 74.25 77.6 IT→EN 60.83 62.02 63.46 47.39 50.08 54.94 53.54 60.1 64.08 64.13 65.43 EN→RU 24.55 25.45 26.1 42.05 44.6 40.1 47.6 57.4 60.25 61.05 63.75 RU→EN 46.52 46.67 50.03 46.15 50.81 50.13 51.44 54.95 58.51 57.41 59.93 IT→FR 64.75 65.12 65.89 51.42 54.47 57.36 55.3 61.91 65.58 65.94 68.17 FR→IT 63.37 63.94 64.61 57.32 55.98 60.01 61.87 64.72 66.22 69.22 69.53 RU→FR 45.31 46.78 47.93 43.58 48.04 47.77 41.17 54.79 57.62 57.52 60.29 FR→RU 24.26 25.09 26.07 35.8 38.8 38.59 39.94 48.94 51.42 53.29 54.11 RU→IT 43.95 44.89 46.15 47.3 47.15 45.99 49.45 53.54 56.26 56.31 59.25 IT→RU 25.48 26.87 29.35 31.52 33.02 35.45 36.38 44.03 48.63 50.75 53.49 Avg. 46.55 47.28 48.57 45.46 47.66 48.69 49.71 55.97 58.55 59.69 61.8 Table 8: Full BLI results on 20 XLING BLI directions. [Unsupervised BLI] VECMAP CONTRASTIVEBLI (C1) CONTRASTIVEBLI (C2) LLAMA 7B LLAMA-27B LLAMA 13B LLAMA-213B LLAMA 7B LLAMA-27B LLAMA 13B LLAMA-213B MAPPING-BASED ZERO-SHOT SAIL (Ours) BG→CA 39.6 38.08 39.66 32.83 29.79 32.77 33.47 40.19 42.23 42.52 47.9 CA→HU 34.09 34.2 36.85 23.7 23.2 24.42 30.17 32.27 35.25 38.34 39.83 HU→BG 36.46 38.36 40.44 28.28 27.71 26.5 26.73 38.19 41.47 43.89 46.66 CA→BG 33.6 31.39 33.94 26.35 27.2 27.03 28.39 36.54 38.47 42.27 45.67 HU→CA 37.79 39.77 43.45 32.62 28.66 38.23 37.51 41.53 46.09 47.91 51.65 BG→HU 39.24 38.95 41.44 24.13 28.12 23.67 27.72 33.16 38.08 38.14 41.38 Avg. 36.8 36.79 39.3 27.98 27.45 28.77 30.66 36.98 40.27 42.18 45.51 Table 9: Full BLI results on 6 PanLex-BLI BLI directions. 753
2024_short
67
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 754–765 August 11-16, 2024 ©2024 Association for Computational Linguistics RAM-EHR: Retrieval Augmentation Meets Clinical Predictions on Electronic Health Records Ran Xu♡*, Wenqi Shi♠*, Yue Yu♠, Yuchen Zhuang♠, Bowen Jin♣ May D. Wang♠, Joyce C. Ho♡, Carl Yang♡ ♡Emory University ♠Georgia Institute of Technology ♣University of Illinois at Urbana Champaign {ran.xu,joyce.c.ho,j.carlyang}@emory.edu, {wshi83,yczhuang,yueyu,maywang}@gatech.edu, [email protected] Abstract We present RAM-EHR, a Retrieval AugMentation pipeline to improve clinical predictions on Electronic Health Records (EHRs). RAMEHR first collects multiple knowledge sources, converts them into text format, and uses dense retrieval to obtain information related to medical concepts. This strategy addresses the difficulties associated with complex names for the concepts. RAM-EHR then augments the local EHR predictive model co-trained with consistency regularization to capture complementary information from patient visits and summarized knowledge. Experiments on two EHR datasets show the efficacy of RAM-EHR over previous knowledge-enhanced baselines (3.4% gain in AUROC and 7.2% gain in AUPR), emphasizing the effectiveness of the summarized knowledge from RAM-EHR for clinical prediction tasks. The code will be published at https://github.com/ritaranx/RAM-EHR. 1 Introduction Electronic Health Records (EHRs), encompassing detailed information about patients such as symptoms, diagnosis, and medication, are widely used by physicians to deliver patient care. Recently, a vast amount of deep learning models have been developed on EHR data (Choi et al., 2020; Gao et al., 2020; Wang et al., 2023a) for various downstream prediction tasks (e.g., disease diagnosis, risk prediction) to facilitate precision healthcare. To further improve the downstream predictive performance, several works attempt to augment the EHR visits with external knowledge. For example, van Aken et al. (2021) and Naik et al. (2022) incorporate additional clinical notes, although these clinical notes can be noisy and contain irrelevant contents for clinical predictions; another solution is to leverage external clinical knowledge graphs (KGs), such as UMLS (Chandak et al., 2023), * Equal contribution. which contain rich medical concepts (e.g., disease, medications) and their corresponding relationships. Integrating KGs with EHRs has been shown to boost model performance (Xu et al., 2023b; Gao et al., 2023). However, these works mostly rely on knowledge from a single source and medical KGs mainly focus on specific types of relations (e.g., hierarchical relations), which do not comprehensively capture the semantic information for medical codes (e.g., phenotype). Besides, it is nontrivial to align medical codes in EHRs with KGs due to the non-uniformity of surface names (e.g., abbreviations or colloquial terms) (Hao et al., 2021; Zhang et al., 2022). There also exist methods that use knowledge generated from large language models (LLMs) to assist EHR prediction (Jiang et al., 2024), but LLMs may not always provide the most relevant knowledge for target tasks and face the risk of hallucination. Effectively leveraging external knowledge to facilitate EHR predictive tasks remains a significant challenge. In this work, we propose RAM-EHR, a retrievalaugmented framework tailored for clinical predictive tasks on EHRs. Instead of leveraging a single knowledge source, RAM-EHR collects multiple knowledge sources (e.g., KGs, scientific literature) and converts them to text corpus, which enjoys the merits of a more comprehensive coverage of knowledge in a unified format. Then, to obtain unified representations for different knowledge sources, we leverage dense retrieval (DR) (Karpukhin et al., 2020; Lin et al., 2023) to encode corpus and medical codes as dense vectors, intuitively capturing the semantics of medical codes and addressing the alignment issue between EHR and external knowledge. Finally, to reduce irrelevant information, we utilize an LLM to summarize the top-retrieved passages into concise and informative knowledge summaries relevant to downstream tasks for each medical code. This process enhances the relevance and utility of the retrieved knowledge for clinical tasks. 754 To leverage external knowledge to assist clinical prediction, we introduce a retrieval-augmented model alongside the local EHR predictive model, which relies solely on patient visit information. The augmented model concatenates summarized passages and medical codes, feeding them into a moderate-size, pre-trained language model. We then co-train the local model and the augmented model with a consistency regularization, which captures the complementary information from patient visits and summarized knowledge and helps the model with better generalization (Wan, 2009). We verify the effectiveness of RAM-EHR by conducting experiments on two EHR datasets and show that RAM-EHR outperforms strong knowledge-enhanced predictive baselines by 3.4% in AUROC and 7.2% in AUPR on average. Our analysis further confirms the advantage of leveraging multi-source external knowledge as well as retrieval augmentation as plugins to assist vanilla EHR predictive models based on visits only. Additional studies justify the usefulness of summarized knowledge for assisting clinical prediction tasks. 2 Methodology 2.1 Problem Setup The EHR data consists of a group of patients P with corresponding hospital visits V = {v1, v2, ..., v|V |}. Each visit vi includes a set of medical codes Ci ⊂C, where C is the total set of medical codes for P. In this study, C contains multiple types of medical codes including diseases, medications, and procedures. Each medical code ci ∈Ci is a clinical concept, and it is associated with a name si in the form of short text snippets. Given the clinical record vi with the involved medical codes Ci, we aim to predict the patient’s clinical outcome yi (a binary label). Figure 1 presents a comprehensive workflow of RAM-EHR, with a specific focus on dense retrieval from multiple knowledge sources and consistency regularization with co-training. 2.2 Retrieval Augmentation w/ Medical Codes Existing approaches often treat each visit as context-free vectors, which fail to capture the concrete semantics of medical codes. Being aware of this, we aim to create the summarized knowledge for each medical code ci using its surface name si via retrieval augmentation with additional contexts. Multi-source Corpus Creation. Retrieval augmentation requires additional corpora as external knowledge. To ensure the coverage of clinical knowledge, we collect a diverse external resources M = {d1, d2, . . . , d|M|}. We represent each knowledge unit as a raw text to facilitate retrieval. The detailed information of M is in Appendix C. Passage Retrieval. Given a collection of |M| passages, the objective of the retriever is to transform passages in a dense vector, so that it can efficiently retrieve the most relevant information to the input query. In our work, we adopt Dragon (Lin et al., 2023), a dual-encoder model with strong performance across domains as the retriever. Specifically, we first use the passage encoder RD(·) to build an index for corpus M to support retrieval. Then, at runtime, we use the query encoder RQ(·) to map the input to an embedding (same dimension as the passage embedding) and calculate the similarity as f(q, d) = RQ(q)⊤RD(d). For the medical code ci with the surface name si, we retrieve top-k (k = 5 in this work) passages Ti from the corpus M as Ti = Top- k d∈M f(si, d). (1) The top retrieved passages are considered as the external knowledge for the medical code ci. Summarized Knowledge Generation. Although Ti contains the most relevant information for ci from M, directly using them to assist predictions can be suboptimal, as simply concatenating these passages often leads to long contexts, and some of the retrieved passages can also be irrelevant (Yu et al., 2023). Motivated by the fact that LLMs have strong capabilities in text summarization (Zhang et al., 2024), we propose to use the off-the-shelf LLM (gpt-3.5-turbo-0613) to generate the summarized knowledge ei for medical code ci as ei = LLM([Prompt, ti,1, · · · , ti,k]), (2) where ti ∈Ti stands for the retrieved passages in Eq.(1). We incorporate information related to the downstream task within our prompt to ensure the generated summaries are task-specific. Detailed prompt designs can be found in Appendix F. Remark. The retrieval step is efficient as the corpus indexing only needs to be done once before applying to prediction tasks. It only needs one extra ANN retrieval operation per query, which is efficiently supported by FAISS (Johnson et al., 2021). Besides, we cache the summarized knowledge for each medical code to avoid redundant operations. 755 𝑐𝑐0 𝑐𝑐1 𝑐𝑐2 𝑐𝑐3 … 𝑒𝑒0 𝑒𝑒1 𝑒𝑒2 𝑒𝑒3 Retrieval Augmentation 𝑐𝑐0 𝑒𝑒0 𝑐𝑐1 𝑒𝑒1 𝑐𝑐2 HyGT +MLP Hypergraph … Flattened Document 𝑐𝑐0 𝑐𝑐1 𝑐𝑐2 𝑐𝑐3 PLM +MLP 𝑦𝑦0 𝑦𝑦1 𝑦𝑦 Ground-Truth CE Loss CE Loss KL Divergence 𝑐𝑐𝑖𝑖 HMG CoA reductase inhibitors Retriever LLM Summarizer HMG CoA reductase inhibitors, known as statins, lower cholesterol and reduce cardiovascular disease risk. Retrieval Augmentation RAM-EHR Medical Code Summarized Knowledge 𝑒𝑒𝑖𝑖 Co-training … Figure 1: An overview of retrieval augmentation framework (left) and a detailed workflow of RAM-EHR (right). RAM-EHR initially gathers multiple knowledge sources and converts them into textual format. We then use dense retrieval to obtain information related to medical concepts. Next, we design an additional module to augment the local EHR predictive model co-trained with consistency regularization, capturing complementary information from both patient visits and summarized knowledge. 2.3 Augmenting Patient Visits with Summarized Knowledge via Co-training Recall that patient visits and summarized knowledge encode complementary information for clinical prediction tasks — visits capture cooccurence relationships, while summarized knowledge encodes semantic information. To effectively aggregate these two types of information, we design a co-training approach, detailed as follows. Augmented Model gϕ with Summarized Knowledge. For patient pi having the hospital visit vi with involved medical codes Ci, we decompose Ci into three subsets: Cd i for diseases, Cm i for medications, and Cp i for procedures. For each type of medical code, we flatten the visit into a document by concatenating all the codes and their summarized knowledge in a reversed sequential order. For example, for disease code Cd i , the flattened document can be Xd i = {[CLS], Dt, Dt−1, . . . , D1}, where Di = ||c∈Di(c, e) is the concatenation of disease code and its summarized knowledge (Eq. 2) within the i-th visit. We then use a pre-trained language model (PLM) with a multi-layer perceptron (MLP) classification head as gϕ for prediction with flattened documents as inputs: hk i = PLM(Xk i ), byi,1 = MLP  ||k∈Shk i  . (3) Here S = {p, m, d}, hi is the representation of [CLS] token of Xi, byi,1 is the prediction for the target task. We share PLM weights for three types of medical codes to improve efficiency. Local Model fθ with Visit Information. To harness the visit-level information, various deep learning architectures have been proposed. In principle, gϕ can be combined with any fθ to improve performance. In main experiments, we use a hypergraph transformer (HyGT, Cai et al. (2022); Xu et al. (2023a)) due to its strong ability to capture high-order relationships between visits and medical codes. It first builds hypergraphs G = (V, E) by treating medical codes as nodes and patients as hyperedges, then leverages self-attention for aggregating neighborhood information. The details for HyGT are in Appendix E. We obtain the prediction byi,2 with fθ as ei = HyGT(G, Vi), byi,2 = MLP(ei), (4) where ei is the representation of patient i after hypergraph transformer. Co-training. We integrate the two predictors into a co-training framework, with the learning objective: Laug =E(Vi,yi)∼P ℓ(byi,1, yi) + λDKL(byi,1, ey), Lloc =E(Vi,yi)∼P ℓ(byi,2, yi) + λDKL(byi,2, ey), (5) where ℓ(·) is the binary cross-entropy loss, ey = βbyi,1 + (1 −β)byi,2, λ, β are two hyperparameters. Two losses in Eq. 5 are designed to encourage fθ and gϕ regularize each other, which can stabilize the learning for two models. During the inference stage, we directly use the eyj as the final prediction for the j-th test example pj. 3 Experiments 3.1 Experiment Setups ⋄Datasets. We conduct experiments on the public MIMIC-III dataset (Johnson et al., 2016) and a private CRADLE dataset collected from a large healthcare system in the United States. We perform a 25-label phenotypes prediction task on MIMICIII, and a cardiovascular disease (CVD) endpoints 756 Table 1: The statistics of MIMIC-III and CRADLE. Stats MIMIC-III CRADLE # of diagnosis 846 7915 # of medication 4525 489 # of procedure 2032 4321 # of health records 12353 36611 prediction task for diabetes patients on CRADLE. We randomly split them into train/validation/test sets by 7:1:2. We present the detailed statistics of MIMIC-III and CRADLE in Table 1. Please refer to Appendix B for details. ⋄Evaluation Metrics. Following Choi et al. (2020), we employ Accuracy, AUROC, AUPR, and Macro-F1 as evaluation metrics, where AUROC is the main metric. For accuracy and F1 score, we use a threshold of 0.5 after obtaining predicted results. ⋄Baselines. We consider three groups of baselines: (a) Predictive models with visit information only: (1) Transformers (Li et al., 2020); (2) GCT (Choi et al., 2020); (3) HyGT (Cai et al., 2022); (b) Predictive models with external knowledge: (4) MedRetriever (Ye et al., 2021); (5) GraphCare (Jiang et al., 2024); (c) Predictive models with clinical notes: (6) CORE (van Aken et al., 2021); (7) BEEP (Naik et al., 2022). See Appendix D for more details. ⋄Implementation Details. In this work, we use Dragon (Lin et al., 2023) as the dense retriever, with the passage encoder RD(·)1 and the query encoder RQ(·)2. We use k = 5 during the retrieval stage without tuning. We choose UMLSBERT (Michalopoulos et al., 2021) for RAM-EHR and relevant baselines as gϕ, with a maximum length of 512, and HyGT (Cai et al., 2022) as fθ in main experiments, but RAM-EHR can be adapted to multiple gϕ and fθ (Sec 3.3). We set the learning rate to 5e-5 for gϕ and 1e-4 for fθ, batch size to 32, and the number of epochs to 5. We select β, λ based on the performance of the validation set, and present the parameter study in Appendix G. For the model training, all the experiments are conducted on a Linux server with one NVIDIA A100 GPU. 3.2 Main Experimental Results Table 2 exhibits the experiment results of RAMEHR and baselines. First, we observe RAM-EHR surpasses baselines lacking external knowledge, 1https://huggingface.co/facebook/ dragon-plus-query-encoder 2https://huggingface.co/facebook/ dragon-plus-context-encoder highlighting the benefits of retrieval augmentation. Second, RAM-EHR outperforms knowledgeenhanced baselines due to the diverse collection of external knowledge as well as the co-training scheme that leverages information from both visit and semantic perspectives. Third, directly using medical notes leads to inferior outcomes due to potential irrelevance, whereas combining medical codes with summarized knowledge as RAM-EHR proves more effective for prediction tasks. 3.3 Additional Studies Ablation Study. On the bottom of Table 2, we inspect different components in RAM-EHR and observe that removing any of them hurts the performance, which justifies the necessity of our designs. Besides, we observe that using the summarized knowledge with gϕ already achieves strong performance, highlighting the benefit of capturing the semantics of medical codes. Effect of fθ and gϕ. With various fθ and gϕ, we demonstrate the flexibility of RAM-EHR in Figure 2(a) and 2(b) by the consistent performance gain across different models. Notably, even with a lightweight fθ (Clin-MobileBERT) having only 25M parameters, RAM-EHR reaches close performance to UMLS-BERT, providing an efficient option for EHR predictive modeling. Effect of Information Source M. We then evaluate the effectiveness of each knowledge source within M. Figure 2(c) indicates that incorporating all corpus yields the highest performance, highlighting the value of diverse corpora. Besides, using Drugbank alone contributes minimally, likely due to its limited scope of medication information. Moreover, we observe that leveraging knowledge bases (e.g., MeSH) is more beneficial than literature sources, as they offer broader and more generic information conducive to clinical prediction tasks. Parameter Study. In Figure 3, we conduct parameter studies on both datasets for β and λ in Eq. 5. Figure 3(a) demonstrates that the model achieves the best performance when β is set to 0.2 and 0.4 on MIMIC-III and CRADLE, respectively, while the gain diminishes at the extremes. This highlights the contribution of combining the predictions from both the augmented model and the local model on the performance gain. In addition, λ is set to 1 and 5 on MIMIC-III and CRADLE, respectively, according to Figure 3(b). The positive values of λ indicate that the consistency loss enhances model performance. 757 Table 2: Performance on two EHR datasets compared with baselines. The result is averaged over five runs. We use * to indicate statistically significant results (p < 0.05). For ‘w/o Retrieval’, we directly use LLM to generate summarized knowledge. For ‘w/o LLM Summarization’, we concatenate top-k retrieved documents as summarized knowledge. ‘w/ gϕ only’ means we set λ = 0, β = 1 (i.e., only use the prediction from gϕ as the final prediction). Model MIMIC-III CRADLE ACC AUROC AUPR F1 ACC AUROC AUPR F1 Transformer (Li et al., 2020) 76.18 80.61 67.12 42.75 78.10 69.49 40.14 58.23 GCT (Choi et al., 2020) 77.20 78.62 64.87 37.57 76.51 68.31 37.55 44.10 HyGT (Cai et al., 2022) 78.07 81.09 68.08 44.93 79.45 70.59 41.04 60.00 MedRetriever (Ye et al., 2021) 77.15 80.14 68.45 39.29 78.95 70.07 42.19 57.96 GraphCare (Jiang et al., 2024) 80.11 82.26 71.19 44.33 79.09 71.12 43.98 59.00 CORE (van Aken et al., 2021) 79.63 82.05 70.79 43.76 77.11 67.84 40.74 61.12 BEEP (Naik et al., 2022) 79.90 82.67 71.58 44.15 79.29 68.59 41.93 60.95 RAM-EHR 81.59* (1.8%) 84.97* (2.8%) 74.64* (4.3%) 48.19* (7.2%) 80.41* (1.2%) 73.80* (3.8%) 48.40* (10.1%) 63.98* (4.7%) w/o Retrieval 80.68 83.29 72.95 44.65 79.83 73.06 47.05 63.25 w/o LLM Summarization 80.08 82.14 71.35 41.49 77.30 69.71 42.58 61.70 w/ Augmented Model gϕ Only 81.04 83.80 73.41 46.83 79.70 73.15 47.62 63.33 UMLSBERT PubmedBERT Clin-DistilBERT Clin-MobileBERT 82 84 86 AUROC Augmented Model gϕ Only w/ RAM-EHR (a) Effect of gϕ HyGT GCT Transformer 75 80 85 AUROC Local Model fθ Only w/ RAM-EHR (b) Effect of fθ All PrimeKG Wikipedia Drugbank Pubmed MeSH None 82 83 84 85 AUROC (c) Effect of M D M P 1.0 1.5 2.0 Average Quality Score MIMIC-III D M P 1.0 1.5 2.0 IH-D1 RAM-EHR Direct LLM Prompting (d) Human Study Figure 2: Results for Additional Studies. (a), (b), (c) is for MIMIC dataset, the results on CRADLE is in Appendix G. 0.0 0.5 1.0 80 82 84 86 AUROC 70 72 74 76 MIMIC-III IH-D1 (a) β in Eq. 5 0 0.5 1 5 82 83 84 85 AUROC 72 73 74 75 MIMIC-III IH-D1 (b) λ in Eq. 5 Figure 3: Parameter studies of β and λ on both datasets. 3.4 Case Study Figure 4 presents a case study on CRADLE to compare knowledge summarized by RAM-EHR and directly generated by LLM prompting. We observe that RAM-EHR provides more relevant information for the downstream task, particularly regarding the CVD outcome in this case, compared to direct LLM prompting. This also aligns with the human study evaluating the quality of 40 randomly sampled knowledge per type of code on a scale of [0,1,2] in Figure 2(d). The study on hyperparameters and retrieval components is in Appendix G. 4 Conclusion We propose RAM-EHR, which uses dense retrieval with multiple knowledge sources and consistency regularization to enhance EHR prediction tasks. Experiments on two EHR datasets show the efficacy of RAM-EHR over baselines with a gain of Aortic valve disorder encompasses congenital, syndromic, and acquired conditions, leading to impaired blood flow and heart functionality. Thiazolidinediones are insulin-sensitizing agents used to treat type 2 diabetes, with potential side effects of heart failure. Elevated serum creatinine levels can indicate various health conditions, including kidney disease and heart defects. Aortic valve disorder is a condition affecting the heart's aortic valve, leading to potential health complications. Thiazolidinediones are a class of medications used to treat type 2 diabetes and improve insulin sensitivity. Understanding the levels of creatinine from other sources is crucial for predicting health outcomes. RAM-EHR Direct LLM Prompting Figure 4: Comparing knowledge summarized by RAMEHR and directly generated by LLM prompting. Bold denotes disease, medication and procedure concepts. Blue and Red indicate useful and irrelevant knowledge. 3.4% in AUROC and 7.2% in AUPR. In addition, we conduct human studies to confirm the utility of generated knowledge. Acknowledgement We thank the anonymous reviewers and area chairs for valuable feedbacks. This research was partially supported by the Emory Global Diabetes Center of the Woodruff Sciences Center, Emory University. Research reported in this publication was supported by the National Institute Of Diabetes And Digestive And Kidney Diseases of the National Institutes of Health under Award Number K25DK135913. The research also receives partial support by the National Science Foundation under Award Number IIS-2145411. The content is solely the responsibility of the authors and does not necessarily rep758 resent the official views of the National Institutes of Health. We also thank Microsoft for providing research credits under the Accelerating Foundation Models Research Program. Limitations In this work, we propose RAM-EHR to unify external knowledge in text format and adapt it for EHR predictive tasks. Despite its strong performance, we have listed some limitations of RAM-EHR: Building Multi-source Corpus M. In this study, we construct a multi-source corpus M by manually selecting five relevant sources within the clinical domain. In real-world scenarios, the grounding corpora usually require customization according to query domains and user needs. Therefore, effectively selecting grounding corpora and efficiently evaluating their relative contributions remains an unresolved issue. Furthermore, retrieved evidence may contain noise that could potentially degrade model performance, highlighting the importance of developing fine-grained filtering or re-ranking modules as a crucial area for future research. Efficiency. The integration of the augmented model gϕ can result in additional time complexity. In our main experiment setups (using UMLSBERT), co-training usually takes 1.5× to 2× more times than using the local model alone. One potential solution is to use a lightweight model (e.g., Clin-MobileBERT) to improve efficiency. Ethical Considerations One potential ethical consideration concerns the use of credential data (MIMIC-III and CRADLE) with GPT-based online services. We have signed and strictly adhered to the PhysioNet Credentialed Data Use Agreement3 for the legal usage of the MIMIC-III dataset. To prevent sensitive information from being shared with third parties through APIs, we carefully follow the guidelines4 for the responsible use of MIMIC data in online services. Specifically, we have requested to opt out of human review of the data by filling out the Azure OpenAI Additional Use Case Form5 in order to utilize the Azure Open AI service while ensuring that Microsoft does not have access to the patient 3https://physionet.org/about/licenses/ physionet-credentialed-health-data-license-150/ 4https://physionet.org/news/post/ gpt-responsible-use 5https://aka.ms/oai/additionalusecase data. The utilization of LLMs in our framework is strictly for the purpose of building medical conceptspecific KGs. In addition, the building of medical concept-specific KGs does not involve direct interaction with any individual patient information. We iterate through all concepts in the medical coding system (e.g., CCS and ICD) to generate their respective KGs using LLMs, and these KGs are stored locally. References Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George Bm Van Den Driessche, Jean-Baptiste Lespiau, et al. 2022. Improving language models by retrieving from trillions of tokens. In Proceedings of the 39th International Conference on Machine Learning, pages 2206–2240. PMLR. Derun Cai, Chenxi Sun, Moxian Song, Baofeng Zhang, Shenda Hong, and Hongyan Li. 2022. Hypergraph contrastive learning for electronic health records. In Proceedings of the 2022 SIAM International Conference on Data Mining (SDM), pages 127–135. SIAM. Payal Chandak, Kexin Huang, and Marinka Zitnik. 2023. Building a knowledge graph to enable precision medicine. Scientific Data, 10(1):67. Edward Choi, Mohammad Taha Bahadori, Le Song, Walter F Stewart, and Jimeng Sun. 2017. Gram: graph-based attention model for healthcare representation learning. In Proceedings of the 23rd ACM SIGKDD international conference on knowledge discovery and data mining, pages 787–795. Edward Choi, Zhen Xu, Yujia Li, Michael Dusenberry, Gerardo Flores, Emily Xue, and Andrew Dai. 2020. Learning the graphical structure of electronic health records with graph convolutional transformer. In Proceedings of the AAAI conference on artificial intelligence, volume 34, pages 606–613. Junyi Gao, Cao Xiao, Yasha Wang, Wen Tang, Lucas M Glass, and Jimeng Sun. 2020. Stagenet: Stage-aware neural networks for health risk prediction. In Proceedings of The Web Conference 2020, pages 530– 540. Yanjun Gao, Ruizhe Li, John Caskey, Dmitriy Dligach, Timothy Miller, Matthew M. Churpek, and Majid Afshar. 2023. Leveraging a medical knowledge graph into large language models for diagnosis prediction. Junheng Hao, Chuan Lei, Vasilis Efthymiou, Abdul Quamar, Fatma Özcan, Yizhou Sun, and Wei Wang. 2021. Medto: Medical data to ontology matching using hybrid graph neural networks. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, pages 2946–2954. 759 Hrayr Harutyunyan, Hrant Khachatrian, David C Kale, Greg Ver Steeg, and Aram Galstyan. 2019. Multitask learning and benchmarking with clinical time series data. Scientific data, 6(1):96. Gautier Izacard, Patrick Lewis, Maria Lomeli, Lucas Hosseini, Fabio Petroni, Timo Schick, Jane DwivediYu, Armand Joulin, Sebastian Riedel, and Edouard Grave. 2023. Atlas: Few-shot learning with retrieval augmented language models. Journal of Machine Learning Research, 24(251):1–43. Minbyul Jeong, Jiwoong Sohn, Mujeen Sung, and Jaewoo Kang. 2024. Improving medical reasoning through retrieval and self-reflection with retrievalaugmented large language models. Pengcheng Jiang, Cao Xiao, Adam Cross, and Jimeng Sun. 2024. Graphcare: Enhancing healthcare predictions with open-world personalized knowledge graphs. In The Twelfth International Conference on Learning Representations. Alistair EW Johnson, Tom J Pollard, Lu Shen, Li-wei H Lehman, Mengling Feng, Mohammad Ghassemi, Benjamin Moody, Peter Szolovits, Leo Anthony Celi, and Roger G Mark. 2016. Mimic-iii, a freely accessible critical care database. Scientific data, 3(1):1–9. Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2021. Billion-scale similarity search with gpus. IEEE Transactions on Big Data, 7(3):535–547. Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for opendomain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769–6781, Online. Association for Computational Linguistics. Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, et al. 2020. Retrieval-augmented generation for knowledge-intensive nlp tasks. Advances in Neural Information Processing Systems, 33:9459–9474. Yikuan Li, Shishir Rao, José Roberto Ayala Solares, Abdelaali Hassaine, Rema Ramakrishnan, Dexter Canoy, Yajie Zhu, Kazem Rahimi, and Gholamreza Salimi-Khorshidi. 2020. Behrt: transformer for electronic health records. Scientific reports, 10(1):7155. Sheng-Chieh Lin, Akari Asai, Minghan Li, Barlas Oguz, Jimmy Lin, Yashar Mehdad, Wen-tau Yih, and Xilun Chen. 2023. How to train your dragon: Diverse augmentation towards generalizable dense retrieval. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 6385–6400, Singapore. Association for Computational Linguistics. George Michalopoulos, Yuanxin Wang, Hussam Kaka, Helen Chen, and Alexander Wong. 2021. UmlsBERT: Clinical domain knowledge augmentation of contextual embeddings using the Unified Medical Language System Metathesaurus. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1744–1753, Online. Association for Computational Linguistics. Aakanksha Naik, Sravanthi Parasa, Sergey Feldman, Lucy Lu Wang, and Tom Hope. 2022. Literatureaugmented clinical outcome prediction. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 438–453, Seattle, United States. Association for Computational Linguistics. Cecilia Panigutti, Alan Perotti, and Dino Pedreschi. 2020. Doctor xai: an ontology-based approach to black-box sequential data classification explanations. In Proceedings of the 2020 conference on fairness, accountability, and transparency, pages 629–639. Md Rizwan Parvez, Wasi Ahmad, Saikat Chakraborty, Baishakhi Ray, and Kai-Wei Chang. 2021. Retrieval augmented code generation and summarization. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 2719–2734, Punta Cana, Dominican Republic. Association for Computational Linguistics. Weijia Shi, Sewon Min, Michihiro Yasunaga, Minjoon Seo, Rich James, Mike Lewis, Luke Zettlemoyer, and Wen tau Yih. 2023. Replug: Retrieval-augmented black-box language models. Betty van Aken, Jens-Michalis Papaioannou, Manuel Mayrdorfer, Klemens Budde, Felix Gers, and Alexander Loeser. 2021. Clinical outcome prediction from admission notes using self-supervised knowledge integration. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 881–893, Online. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc. Denny Vrandeˇci´c and Markus Krötzsch. 2014. Wikidata: a free collaborative knowledgebase. Communications of the ACM, 57(10):78–85. Xiaojun Wan. 2009. Co-training for cross-lingual sentiment classification. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 235–243, Suntec, Singapore. Association for Computational Linguistics. Han Wang, Yang Liu, Chenguang Zhu, Linjun Shou, Ming Gong, Yichong Xu, and Michael Zeng. 2021. Retrieval enhanced model for commonsense generation. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 3056– 3062, Online. Association for Computational Linguistics. 760 Xiaochen Wang, Junyu Luo, Jiaqi Wang, Ziyi Yin, Suhan Cui, Yuan Zhong, Yaqing Wang, and Fenglong Ma. 2023a. Hierarchical pretraining on multimodal electronic health records. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 2839–2852, Singapore. Association for Computational Linguistics. Yubo Wang, Xueguang Ma, and Wenhu Chen. 2023b. Augmenting black-box llms with medical textbooks for clinical question answering. David S Wishart, Craig Knox, An Chi Guo, Dean Cheng, Savita Shrivastava, Dan Tzur, Bijaya Gautam, and Murtaza Hassanali. 2008. Drugbank: a knowledgebase for drugs, drug actions and drug targets. Nucleic acids research, 36(suppl_1):D901–D906. Ran Xu, Mohammed K Ali, Joyce C Ho, and Carl Yang. 2023a. Hypergraph transformers for ehr-based clinical predictions. AMIA Summits on Translational Science Proceedings, 2023:582. Yongxin Xu, Xu Chu, Kai Yang, Zhiyuan Wang, Peinie Zou, Hongxin Ding, Junfeng Zhao, Yasha Wang, and Bing Xie. 2023b. Seqcare: Sequential training with external medical knowledge graph for diagnosis prediction in healthcare data. In Proceedings of the ACM Web Conference 2023, pages 2819–2830. Muchao Ye, Suhan Cui, Yaqing Wang, Junyu Luo, Cao Xiao, and Fenglong Ma. 2021. Medretriever: Targetdriven interpretable health risk prediction via retrieving unstructured medical text. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, pages 2414–2423. Wenhao Yu, Hongming Zhang, Xiaoman Pan, Kaixin Ma, Hongwei Wang, and Dong Yu. 2023. Chain-ofnote: Enhancing robustness in retrieval-augmented language models. Cyril Zakka, Rohan Shad, Akash Chaurasia, Alex R Dalal, Jennifer L Kim, Michael Moor, Robyn Fong, Curran Phillips, Kevin Alexander, Euan Ashley, et al. 2024. Almanac—retrieval-augmented language models for clinical medicine. NEJM AI, 1(2):AIoa2300068. Sheng Zhang, Hao Cheng, Shikhar Vashishth, Cliff Wong, Jinfeng Xiao, Xiaodong Liu, Tristan Naumann, Jianfeng Gao, and Hoifung Poon. 2022. Knowledge-rich self-supervision for biomedical entity linking. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 868– 880, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Tianyi Zhang, Faisal Ladhak, Esin Durmus, Percy Liang, Kathleen McKeown, and Tatsunori B Hashimoto. 2024. Benchmarking large language models for news summarization. Transactions of the Association for Computational Linguistics, 12:39–57. 761 A Related Works Retrieval Augmented Learning and its Application in Clinical Domain. Retrieval augmented learning, which collects additional contextual information from external corpus, has shown effectiveness on diverse tasks including language modeling (Borgeaud et al., 2022), knowledge-intensive NLP (Lewis et al., 2020; Shi et al., 2023), commonsense reasoning (Wang et al., 2021), code genereation (Parvez et al., 2021), and few/zeroshot learning (Izacard et al., 2023). Compared to the general domain, the application of retrieval augmented learning to clinical tasks is still underexplored. Some efforts have been paid to retrieval augmented clinical language models (Zakka et al., 2024) as well as clinical question answering (Wang et al., 2023b; Jeong et al., 2024). The most relevant works are Ye et al. (2021); Naik et al. (2022), which leverage clinical literature to augment clinical predictive models. Compared to these works, our contribution lies in two folds: (1) we design a retrieval augmentation for structured EHRs with a diverse collection of external knowledge, which provides more relevant information for target clinical prediction tasks; (2) we incorporate a co-training scheme to leverage both the visit-level information and external knowledge for predictions. Knowledge-enhanced EHR Predictive Models. Many studies attempt to harness external knowledge for clinical prediction tasks. The majority of them leverage structured knowledge, such as medical ontology (Choi et al., 2017; Panigutti et al., 2020), to capture hierarchical relationships among medical codes, or employ personalized knowledge graphs (Xu et al., 2023b; Jiang et al., 2024) to integrate patient-specific information. However, these methods often suffer from limited coverage of all medical codes due to the complexity of surface names. Alternatively, some approaches utilize unstructured medical text for health prediction tasks (Ye et al., 2021). However, Ye et al. (2021) rely on a restricted corpus of approximately 30,000 passages as their external corpus, resulting in limited coverage. B Task Information MIMIC-III. The MIMIC-III dataset (Johnson et al., 2016) is a large, freely available database that contains de-identified health-related data from over 4,000 patients who stayed in critical care units at Algorithm 1 Overview of RAM-EHR. 1: Input: P: patients; V : corresponding hospital visits of patients. 2: Initializing multi-source external knowledge M; 3: for i = 1, · · · , |V | do 4: for ci ∈vi do 5: Get the medical code ci and the corresponding textual name si included in visit vi; 6: // Passage Retrieval 7: Retrieve passages Ti via Eq. (1) 8: // Knowledge Summarization (Accelerated with caching) 9: Summarize knowledge ei for ci via Eq. (2); 10: end for 11: // Co-training 12: Predict byi,1 with knowledge-augmented model gϕ via Eq. (3); 13: Predict byi,2 with visit-based local model fθ via Eq. (4); 14: // Update Model Parameters 15: Compute loss function L via Eq. (5); 16: Update model parameters ϕ and θ; 17: end for Output: Augmented model gϕ and local model fθ; Final prediction eyj = βbyj,1 + (1 −β)byj,2 for the j-th test example pj. the Beth Israel Deaconess Medical Center between 2001 and 2012. We conduct the phenotyping prediction task proposed by (Harutyunyan et al., 2019). It aims to predict whether the 25 pre-defined acute care conditions (see Table 3) are present in a patient’s next visit, based on the information from their current visit. The problem is formulated as a 25-label binary classification, considering that multiple phenotypes may exist in a single visit. For data preprocessing, we focus on patients with multiple hospital visits, identified based on their admission information. We extract pairs of consecutive visits for each patient. For each pair, we extract diseases, medications, and procedures from the health records in the former visit as input, and identify the phenotypes in the latter visit as labels, using Clinical Classifications Software (CCS) from the Healthcare Cost and Utilization Project (HCUP)6. CRADLE. For the CRADLE dataset, we conduct a CVD outcome prediction task, which predicts whether patients with type 2 diabetes will experience CVD complications within 1 year after their initial diagnosis, including coronary heart disease (CHD), congestive heart failure (CHF), myocardial infarction (MI), or stroke. Diseases are identified by their ICD-9 or ICD-10 clinical codes. The usage of data has been approved by the Institutional Review Board (IRB). 6https://hcup-us.ahrq.gov/toolssoftware/ccs/ AppendixASingleDX.txt 762 Table 3: The 25 pre-defined phenotypes in MIMIC-III. Phenotype Type Acute and unspecifed renal failure acute Acute cerebrovascular disease acute Acute myocardial infarction acute Cardiac dysrhythmias mixed Chronic kidney disease chronic Chronic obstructive pulmonary disease chronic Complications of surgical/medical care acute Conduction disorders mixed Congestive heart failure; nonhypertensive mixed Coronary atherosclerosis and related chronic Diabetes mellitus with complications mixed Diabetes mellitus without complication chronic Disorders of lipid metabolism chronic Essential hypertension chronic Fluid and electrolyte disorders acute Gastrointestinal hemorrhage acute Hypertension with complications chronic Other liver diseases mixed Other lower respiratory disease acute Other upper respiratory disease acute Pleurisy; pneumothorax; pulmonary collapse acute Pneumonia acute Respiratory failure; insufficiency; arrest acute Septicemia (except in labor) acute Shock acute C Knowledge Sources C.1 Descriptions • PubMed7: PubMed is a free search engine accessing primarily the MEDLINE database of references and abstracts on life sciences and biomedical topics. It provides users with access to millions of scientific documents, including research papers, reviews, and other scholarly articles. We use the Entrez package to extract the PubMed articles8, resulting in 230k documents. • DrugBank9 (Wishart et al., 2008): DrugBank is a comprehensive and freely accessible online database containing information on drugs and drug targets. It integrates detailed drug data (chemical, pharmacological, and pharmaceutical) with comprehensive information on drug targets (sequence, structure, and pathway). We use the data from the original database, which contains 355k documents. • Medical Subject Headings (MeSH)10: Medical Subject Headings (MeSH) is a comprehen7https://pubmed.ncbi.nlm.nih.gov/ 8https://biopython.org/docs/1.75/api/Bio. Entrez.html 9https://go.drugbank.com/releases/latest 10https://www.ncbi.nlm.nih.gov/mesh/ sive controlled collection for indexing journal articles and books in the life sciences. It organizes information on biomedical and health-related topics into a hierarchical structure. The corpus contains 32.5k documents covering various medical concepts. • Wikipedia11 (Vrandeˇci´c and Krötzsch, 2014): Wikipedia is a free, web-based, collaborative, multilingual encyclopedia project that is supported by the non-profit Wikimedia Foundation. We extract web pages that contain medical-related information by using the medical codes list (e.g., ICD10 and ATC), resulting in 150k documents. • KG12 (Chandak et al., 2023): We use PrimeKG in our experiments. It offers a comprehensive overview of diseases, medications, side effects, and proteins by merging 20 biomedical sources to detail 17,080 diseases across ten biological levels. For this study, we select knowledge triplets that contain medical codes within three types (disease, medication, procedure) used in this work, resulting in 707k triplets. We use the template in Appendix C.2 to transform these triplets into sentences. C.2 Translating Format We list the template to transform knowledge triplets into sentences in KG as follows: candidate_relation = ["disease_phenotype_positive", "disease_protein", "disease_disease", "drug_effect", "drug_protein"] relations = { "phenotype present": "[ent1] has the phenotype [ent2]", "carrier": "[ent1] interacts with the carrier [ent2]", "enzyme": "[ent1] interacts with the enzyme [ent2]", "target": "The target of [ent1] is [ent2]", "transporter": "[ent2] transports [ent1]", "associated with": "[ent2] is associated with [ent1]", "parent-child": "[ent2] is a subclass of [ent1]", "side effect": "[ent1] has the side effect of [ent2]" } 11https://www.wikipedia.org/ 12https://github.com/mims-harvard/PrimeKG 763 D Baseline Information • Transformer (Li et al., 2020): It leverages the Transformer (Vaswani et al., 2017) architecture to model sequential EHR visits for clinical prediction tasks. • GCT (Choi et al., 2020): It employs the Transformer model to learn the EHR’s hidden structure via medical codes. Additionally, it introduces the Graph Convolutional Transformer, integrating graph neural networks to utilize the EHR structure for prediction. • HyGT (Cai et al., 2022): It leverages hypergraph transformers that regard patients as hyperedges and medical codes as nodes for EHR predictive tasks. • MedRetriever (Ye et al., 2021): It retrieves the most relevant text segments from a local medical corpus using string similarity. Then, it uses query features aggregated with EHR embeddings and disease-specific documents via self-attention. • GraphCare (Jiang et al., 2024): It generates personalized knowledge graphs via prompting LLMs and leverages attention-based graph neural networks for healthcare predictions. • CORE (van Aken et al., 2021): It integrates clinical knowledge with specialized outcome pre-training, and uses language models to predict clinical notes for prediction. • BEEP (Naik et al., 2022): It augments the language models with the retrieved PubMed articles and fuses them with information from notes to predict clinical outcomes. E Details for Hypergraph Transformer First of all, we construct a hypergraph G = (V, E) based on EHR data, where each patient visit is represented as a hyperedge connecting to all medical codes associated with the visit as nodes. Then we utilize HyGT (Cai et al., 2022) to jointly learn the node and hyperedge embeddings. Specifically, The hyperedge embeddings aggregate information from nodes within each hyperedge, while the node embeddings aggregate information from hyperedges connecting the nodes. In the l-th neural network layer, the node and hyperedge embeddings are updated as X(l) v = fE→V  Ev,E(l−1)  , (6) E(l) e = fV→E  Ve,X(l−1)  , (7) where X(l) v and E(l) e represent the embeddings of node v and hyperedge e in the l-th layer (1 ≤l ≤ L), respectively. Ev,E denotes the hidden representations of hyperedges that connect the node v, while Ve,X is the hidden representations of nodes that are contained in the hyperedge e. The two message-passing functions fV→E(·) and fE→V(·) utilize multi-head self-attention (Vaswani et al., 2017) to identify significant neighbors during propagation as fV→E(S) = fE→V(S) = Self-Att(S), where S is the input embedding for the attention layer, Self-Att(S) = LayerNorm(Y + FFN(Y )). Y is the output from the multihead self-attention block Y = LayerNorm(S + h i=1SAi(S)), SAi(S) denotes the scaled dotproduct attention: SAi(S) = softmax W Q i (SW K i )⊤ p ⌊d/h⌋ ! SW V i . W Q i , W K i , and W V i are learnable parameters for the i-th head corresponding to queries, keys, and values, respectively. To interpret the above process, the input sequence S is projected into different h heads. The output of each head is then concatenated (denoted by ) to form the multi-head attention output. This output of multi-head attention layer Y is then fed into a feed-forward neural network (FFN), comprising a two-layer Multilayer Perceptron (MLP) with ReLU activation functions. F Details for Prompt Design We present the detailed design of the prompt template as follows: Prompt for LLM Summarization Suppose you are a physician working on a health -related outcome prediction task and need to get relevant information for the given <task>. Here is relevant information: <medical code type> Name: <medical code name> Retrieve Passage #1: <retrieved document 1> Retrieve Passage #2: <retrieved document 2> ... Based on the above information, Could you generate 1 sentence of around 10-20 words to summarize the knowledge for the 764 UMLSBERT PubmedBERT Clin-DistilBERT Clin-MobileBERT 70 72 74 AUROC Augmented Model gϕ Only w/ RAM-EHR (a) Effect of gϕ HyGT GCT Transformer 66 68 70 72 74 AUROC Local Model fθ Only w/ RAM-EHR (b) Effect of fθ All PrimeKG Wikipedia Drugbank Pubmed MeSH None 71 72 73 74 AUROC (c) Effect of M Figure 5: Results for additional studies on the effect of (a) fθ, (b) gϕ, and (c) the information source M on the CRADLE dataset. <medical code type> that is useful for the <task>? <task> is the brief description of the downstream task. <medical code type> is either “disease”, “medication” or “procedure”, depending on the input. G Additional Experiment Results We evaluate the effect of the augmented model gϕ, the local model fθ, and the knowledge source M on CRADLE in Figure 5. The experimental results further demonstrate that both models and different knowledge sources contribute to the performance gain. Moreover, it is observed that RAM-EHR is flexible to be applied upon different models, with a comparable performance with RAM-EHR. For the human studies in Section 3.3, we provide the following guidelines for the annotators to evaluate the quality of the generated knowledge. The goal of this evaluation is to assess the helpfulness of generated knowledge explaining or relating to specific medical codes in the context of target prediction tasks. Helpfulness is defined by the relevance, accuracy, and utility of the information in facilitating understanding or decision-making related to medical coding and its implications for predictive tasks. Please rate the following generated knowledge with score 0, 1 or 2. > 0: Irrelevant Definition: The knowledge does not provide any relevant information related to the medical code in question. It might be factually accurate but completely off-topic or not applicable to the context of target prediction tasks. > 1: Partially Relevant and Useful Definition: The knowledge provides some relevant information but either lacks completeness, specificity, or direct applicability to target prediction tasks. It might include general facts or insights that are related to the medical code but does not fully support decision-making or understanding in a predictive context. > 2: Very Useful Definition: The knowledge directly addresses the medical code with accurate, relevant, and comprehensive information that is highly applicable to target prediction tasks. It should provide detailed understanding, or specific examples that facilitate decision-making, understanding, or application in predictive modeling. H Cost Information Utilizing GPT-3.5-turbo as our base LLM model for generating summarized knowledge, we observe an average retrieval augmentation cost of $0.0025 per medical code in MIMIC-III and $0.0032 in CRADLE. Consequently, RAM-EHR does not result in excessive monetary expenses. 765
2024_short
68
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 778–789 August 11-16, 2024 ©2024 Association for Computational Linguistics Estimating the Level of Dialectness Predicts Interannotator Agreement in Multi-dialect Arabic Datasets Amr Keleg, Walid Magdy, Sharon Goldwater Institute for Language, Cognition and Computation School of Informatics, University of Edinburgh [email protected], {wmagdy,sgwater}@inf.ed.ac.uk Abstract On annotating multi-dialect Arabic datasets, it is common to randomly assign the samples across a pool of native Arabic speakers. Recent analyses recommended routing dialectal samples to native speakers of their respective dialects to build higher-quality datasets. However, automatically identifying the dialect of samples is hard. Moreover, the pool of annotators who are native speakers of specific Arabic dialects might be scarce. Arabic Level of Dialectness (ALDi) was recently introduced as a quantitative variable that measures how sentences diverge from Standard Arabic. On randomly assigning samples to annotators, we hypothesize that samples of higher ALDi scores are harder to label especially if they are written in dialects that the annotators do not speak. We test this by analyzing the relation between ALDi scores and the annotators’ agreement, on 15 public datasets having raw individual sample annotations for various sentence-classification tasks. We find strong evidence supporting our hypothesis for 11 of them. Consequently, we recommend prioritizing routing samples of high ALDi scores to native speakers of each sample’s dialect, for which the dialect could be automatically identified at higher accuracies. 1 Introduction Arabic is spoken natively by over 420 million people and is an official language of 24 countries (Bergman and Diab, 2022), making it an important language for NLP systems. However, NLP for Arabic faces a major challenge in that usergenerated text is typically a mixture of Modern Standard Arabic (MSA)—the standardized variant that is taught in schools and used in official communications and newspapers—and regional variants of Dialectal Arabic (DA), which are used in everyday communications, including both speech and social media text (Habash, 2010).1 While MSA can be largely understood by most Arabic speakers, the different variants of DA are not always fully mutually intelligible. Despite this mutual unintelligibility, a common practice when developing datasets for multi-dialect Arabic NLP is to randomly recruit annotators without regard to their dialect. However, routing dialectal content to speakers of a different dialect for annotation or moderation can present real problems. For example, it has been shown to contribute to unjust online content moderation of DA (Business for Social Responsibility, 2022), and racially biased toxicity annotation in American English varieties (Sap et al., 2022). Two recent studies of multi-dialect DA annotation showed that for annotating hate speech or sarcasm, respectively, annotators were more lenient (for hate speech) and more accurate (for sarcasm) when annotating sentences in their native dialect (Bergman and Diab, 2022; Abu Farha and Magdy, 2022). The authors of both studies made the same recommendation for creating new Arabic datasets, namely to first identify the dialect of each sample and then route it to appropriate annotators. This recommendation is theoretically appealing, but presents practical difficulties since automatic dialect identification (DI) is challenging (AbdulMageed et al., 2023), and existing systems assume a single correct label when in fact some texts can be natural in different dialects (Keleg and Magdy, 2023; Olsen et al., 2023). Moreover, the representation of native speakers of the different Arabic dialects on crowdsourcing sites might be skewed (Mubarak and Darwish, 2016). Therefore, recruiting native speakers of some Arabic dialects might be challenging, given the tough conditions of the countries in which these dialects are spoken. 1Refer to §B of the Appendix for a further discussion about the relationship between MSA and DA. 778 In this paper, we address these challenges by building on recent work by Keleg et al. (2023), who presented a system for estimating Arabic Level of Dialectness (ALDi)—i.e., the degree to which a sentence diverges from MSA, on a scale from 0 to 1. We hypothesize that as sentences with low ALDi scores do not diverge much from MSA, they can still be understood and accurately annotated by most Arabic speakers, while this will be less true for sentences with high ALDi scores. If our hypothesis holds, then annotation can be made more efficient while maintaining accuracy, by routing samples with low ALDi scores to speakers of any dialect. Only high-ALDi samples need to be routed to native speakers of the appropriate dialect. We test our hypothesis by investigating the impact of ALDi score on interannotator agreement for 15 publicly released datasets annotated for 6 different sentence-classification tasks.2 We confirm that for most tasks and datasets, higher ALDi scores correlate with lower annotator agreement. A notable exception is the dialect identification (DI) task, where higher ALDi scores correlate with higher agreement, presumably because it is easier to identify a single dialect for sentences that are strongly dialectal. This finding is encouraging for annotation routing, since automatic DI systems may also have higher accuracy on these sentences. We conclude that a combination of automatic ALDi scoring, followed by DI and annotator routing only for high-ALDi sentences, is a promising strategy for annotating multi-dialect Arabic datasets. 2 Methodology Data We study the impact of ALDi scores on the annotators’ agreement for publicly released Arabic datasets. We analyze datasets satisfying the following criteria: • Language: Mixture of MSA and DA. • Variation: Targeting multiple variants of DA. • Annotators: Speakers of different variants of DA that are randomly assigned to the samples. • Tasks Setup: Sentence-level classification. • Released Labels: Individual annotator labels or the percentage of annotators agreeing on the majority-vote label.3 2Instructions to replicate the experiments can be accessed through https://github.com/AMR-KELEG/ALDi-and-IAA 3For some datasets, the percentage of annotators agreeing on the majority vote is weighted by their performance on the annotation quality-assurance test samples. This distinction is irrelevant to our study, where we only consider whether all annotators agreed or not. We searched for datasets on Masader, a community-curated catalog of Arabic datasets (Alyafeai et al., 2021; Altaher et al., 2022). Each dataset on Masader has a metadata field for the variants of Arabic included. We discarded the datasets that only included MSA samples, and manually inspected the remaining 151. After identifying 28 potential datasets that satisfy the criteria above, we contacted the authors of the datasets that do not have the individual annotations publicly released. Eventually, we had 15 datasets to analyze, listed in Table 1, covering: Offensive Text Classification, Hate Speech Detection, Sarcasm Detection, Sentiment Analysis, Speech Act Detection, Stance Detection, and Dialect Identification. Analysis For each dataset, we compute the Arabic Level of Dialectness (ALDi) score for each annotated sample (sentence) using the SentenceALDi model (Keleg et al., 2023), which returns a score from 0 (MSA/non-dialectal) to 1 (strongly dialectal). To investigate the effect of ALDi on annotator agreement, we bin the samples by their ALDi score into 10 bins of width 0.1. We compute % full agree, the percentage of samples in that bin for which all the annotators agreed on a single label. We employ Pearson’s correlation coefficient to analyze the relation between ALDi (represented by each bin’s midpoint ALDi score) and % full agree, and also report the slope of the best-fitting line as a measure of the effect size.4 As aforementioned, our initial hypothesis is that % full agree negatively correlates with high ALDi scores. 3 Results and Discussion We use scatter plots to visualize the relation between % full agree and ALDi on the studied datasets, as shown in Figure 1. Additionally, the histograms of samples across the different bins indicate the dialectal content within the dataset. As per Table 1, 6 datasets out of the 15 have more than 50% of the samples with ALDi scores less than 0.1, which are expected to be written in MSA. However, we found that the overall trends depicted 4The exact values of the slopes and correlation coefficients depend on the number of bins. However, we got similar qualitative results on using 4 or 20 equal-width bins. 10 bins are enough to check if trends are non-linear while keeping a reasonable number of samples in the smallest bins. We also fitted logistic regression (logreg) models using ALDi as a continuous variable and a binary outcome Full Agreement (Yes/No) for each sample. Both analysis tools reveal similar patterns (See Appendix §C) but the binning method provides useful additional visualization. 779 Dataset Task (# labels) %ALDi<0.1 Description Deleted Comments Dataset (DCD) (Mubarak et al., 2017) Offensive (3) 62.57% About 32K deleted comments from aljazeera.com. Confidence scores for the majority vote of 3 annotations are provided. MPOLD (Chowdhury et al., 2020) Offensive (2) 27.82% 4000 sentences interacting with news sources, sampled from Twitter, Facebook, and YouTube, annotated three times. YouTube Cyberbullying Offensive (2) 10.24% 15,050 comments and replies to 9 YouTube videos labeled by 3 annotators (Iraqi, Egyptian, Libyan). (YTCB) (Alakrot et al., 2018) ASAD (Alharbi et al., 2021) Sentiment (3) 35.63% 95,000 tweets with a skewed representation toward the Gulf area and Egypt. ArSAS (Elmadany et al., 2018) Sentiment (4) 57.45% 21,064 tweets related to a pre-specified set of entities or events, with confidence scores for the majority votes across three annotations per sample. Speech Act (6) ArSarcasm-v1 Dialect (5) 57.44% 10,547 tweets, sampled from two different Sentiment Analysis datasets: ATSD (Nabil et al., 2015), SemEval2017 (Rosenthal et al., 2017), reannotated for Sentiment, Dialect, and Sarcasm. (Abu Farha and Magdy, 2020) Sarcasm (2) Sentiment (4) Mawqif Sarcasm (2) 58.04% 4,121 tweets about "COVID-19 vaccine", "digital transformation", or "women empowerment" annotated separately for stance and sentiment/sarcasm till the label confidence reaches 0.7 (min. 3 annotators) or 7 annotators label the sample. (Alturayeif et al., 2022) Sentiment (3) 58.04% Stance (3) 57.99% iSarcasm’s test set Dialect (5) 30.5% 200 sarcastic sentences provided by crowdsourced authors and 1200 non-sarcastic tweets from ArSarcasm-v2 (Abu Farha et al., 2021) reannotated 5 times. (Abu Farha et al., 2022) Sarcasm (2) DART (Alsarsour et al., 2018) Dialect (5) 0.8% 24,279 tweets with distinctive dialectal terms annotated three times for the dialectal region. Samples of complete disagreement are not in the released dataset. Table 1: The datasets included in our study. All datasets have three annotations per sample, except for iSarcasm (5 annotations/sample) and Mawqif (3 or more annotations/sample). For the labels used in each dataset and the proportion of each label, see Table A1. For some datasets, there is a discrepancy between the number of samples listed in the paper and the raw data files with individual labels (See §A of the Appendix). Sentiment Analysis 0.2 0.4 0.6 0.8 ALDi 35 45 55 65 75 % full agree m = -25.01 5427 35802 ASAD ( = -0.99) * 0.2 0.4 0.6 0.8 ALDi 25 35 45 55 65 % full agree m = -6.08 341 5816 ArSarcasm-v1 ( = -0.71) * 0.2 0.4 0.6 0.8 ALDi 5 15 25 35 45 % full agree m = -28.75 57 2288 Mawqif ( = -0.91) * 0.2 0.4 0.6 0.8 ALDi 25 35 45 55 65 % full agree m = -2.05 804 12101 ArSAS ( = -0.25) Sarcasm Detection 0.2 0.4 0.6 0.8 ALDi 30 40 50 60 70 % full agree m = -32.69 58 427 iSarcasm ( = -0.83) * 0.2 0.4 0.6 0.8 ALDi 40 50 60 70 80 % full agree m = -24.59 341 5816 ArSarcasm-v1 ( = -0.92) * 0.2 0.4 0.6 0.8 ALDi 50 60 70 80 90 % full agree m = -40.94 57 2288 Mawqif ( = -0.98) * Speech Act Detection 0.2 0.4 0.6 0.8 ALDi 45 55 65 75 85 % full agree m = 6.79 804 12101 ArSAS ( = 0.78) * Offensive Text Classification and Hate Speech Detection 0.2 0.4 0.6 0.8 ALDi 35 45 55 65 75 % full agree m = -3.21 716 19831 DCD ( = -0.4) 0.2 0.4 0.6 0.8 ALDi 50 60 70 80 90 % full agree m = 10.67 230 1113 MPOLD ( = 0.64) * 0.2 0.4 0.6 0.8 ALDi 45 55 65 75 85 % full agree m = -11.83 679 3273 YTCB ( = -0.92) * Stance Detection 0.2 0.4 0.6 0.8 ALDi 30 40 50 60 70 % full agree m = -22.48 58 2267 Mawqif ( = -0.88) * Arabic Dialect Identification 0.2 0.4 0.6 0.8 ALDi 0 10 20 30 40 % full agree m = 23.27 58 427 iSarcasm ( = 0.69) * 0.2 0.4 0.6 0.8 ALDi 15 25 35 45 55 % full agree m = -1.95 341 5816 ArSarcasm-v1 ( = -0.06) 0.2 0.4 0.6 0.8 ALDi 40 50 60 70 80 % full agree m = 38.45 194 8542 DART ( = 0.97) * Figure 1: Scatter plots showing the relationship between binned ALDi scores (x-axis) and the percentage of samples with full annotator agreement (y-axis). The histogram represents the # of samples per bin (with min and max values for any bin labeled on the right-hand axis). The slope of the best-fitting line (m) is shown, and to enable visual comparison of slopes, all plots have the same y-axis scale (possibly shifted up or down). Note: Statistically significant (p<0.05) correlation coefficients (ρ) are marked with *. in Figure 1 will not be affected if we discard these samples with low ALDi scores and only focus on the rest. For non-DI tasks, ALDi negatively correlates with agreement. Inspecting the trends depicted in Figure 1, strong negative Pearson’s correlation 780 Dataset Sample Translation of the Underlined Cue ALDi Individual Labels YTCB Hatespeech Detection ­rK A§ –y xA At ¯ ¤ Tn§A• M¤ ŠA• rŒm˜Lkrˆ A› you insect 0.98 HateSpeech(3x) A§ ”r`˜ Ty`yJ Tylˆ x¤d ¨˜ rtl˜ ­¤d ¤r rmˆ r A§ rˆ you scabies 0.81 HateSpeech(3x) ¨k§ yl˜ L fˆ ™bl ¯ œZA• P’Až  A•A› ¢hhhhhhhhhhhhhh¡ wnˆ Gulf’s colt 0.98 HateSpeech(3x) ArSarcasm-v1 Sentiment Analysis ¢›®• d` H ¨nb`§ ™b’  A• Ÿ CA› ¨k§C db˜A Šwl§ ‘rq› ¢tys I found him disgusting 0.81 Negative(2x) Positive(1x) xCdm˜A# ŠwtA§ ^\??? šAfV L› ¯¤ š¤ šAfVA# ¢hhhhhhhhhhhhhhhhhhh¡ hahaha 0.94 Positive(2x) Negative(1x) Table 2: Five qualitative samples of high ALDi scores. The underlined segments represent the cues that the annotators might have used to choose a label even if they do not fully understand the sentence. Despite the presence of these cues, the annotators still disagreed on labeling the last two samples. We only provide translations for the underlined cue segments. coefficients exist for 8 out of the 12 datasets for the non-DI tasks (sentiment analysis, sarcasm, hate speech, and stance detection). Both the trends (quantified by the slope m) and the correlation coefficients for most of the tasks indicate that the percentage of samples for which all the annotators assign the same label decreases as the ALDi scores increase, often by a large margin.5 We notice different trends for DI that we elaborate on below. For DI, agreement is lowest for mid-range ALDi scores (if MSA is a possible label) or low ALDi scores (if it is not). By definition, MSA sentences have an ALDi of 0, and normally the ALDi estimation model assigns them very low scores. For the ArSarcasm-v1 and iSarcasm datasets, the set of labels for the DI task includes MSA (i.e., some sentences in these datasets are not dialectal). For both datasets, one notices high percentages of agreement scores for the bin having ALDi scores ∈[0, 0.1] (generally agreeing that the label is MSA). The percentages decrease for the few succeeding bins, before rising again for the bins with high ALDi scores. Sentences of high ALDi scores (e.g., ∈[0.8, 1]) are expected to have multiple dialectal cues, which increases the chance of attributing them to a single dialect. For sentences of intermediate ALDi scores, annotators can agree a sentence is not in MSA, however, they would struggle to determine the dialect of the sentence, which is manifested as having lower percentages of the full agreement for these bins. 5Refer to Appendix §E for a possible explanation of the unexpected trends of the ArSAS dataset. The authors of the DART dataset do not include MSA in the labels since they curated sentences with distinctive dialectal terms. This explains the low percentage of full agreement for the bin of ALDi scores ∈[0, 0.1], unlike the other two DI datasets. However, the pattern of having higher full agreement percentages for bins with higher ALDi scores still holds. 4 Analysis of Trends by Class Label A more nuanced analysis of the non-DI datasets can be done by splitting the samples according to their majority-vote labels (See Appendix §D). The declining trend for agreement as ALDi scores increase is consistently salient for the negative class of the different tasks (i.e., Not Sarcastic, Neutral (sentiment), Non-offensive). One explanation is that agreeing on one of these labels requires fully understanding the sample, which is expected to be harder for non-native speakers of the dialects of sentences with high ALDi scores. Moreover, the general usage of MSA for formal communications, and DA for more personal ones might be biasing the annotators. Further controlled analysis is required to investigate these intuitive explanations. Conversely, the presence of specific words might be a strong cue for the label of the overall sentence, which might be useful for the positive classes (i.e., Sarcastic, Positive/Negative, Obscene/Offensive). Consequently, annotators might be able to agree on a label for a sentence even if it contains some unintelligible segments. Table 2 shows five examples, in which the annotators can use cues to label the 781 whole sentence. Consider the first offensive sentence in the Table. The presence of the MSA insult ­rK A§ (you insect) is enough to guess that the sentence is offensive, even if the remaining segment is not fully intelligible. Lastly, note that agreeing on a label does not imply it is accurate, especially when relying on cues for annotation. 5 Conclusion and Recommendation Factors such as task subjectivity and vague guidelines could cause disagreement between annotators. For Arabic, we demonstrate that the Arabic level of dialectness of a sentence (ALDi), automatically estimated using the Sentence-ALDi model (Keleg et al., 2023), is an additional overlooked factor. Analyzing 15 datasets, we find strong evidence of a negative correlation between ALDi and the full annotator agreement scores for 8 of the 12 nonDialect Identification datasets. Moreover, for the 3 Dialect Identification datasets, we find that annotators have higher agreement scores for samples of higher ALDi scores, which by definition would have more dialectal features. The combination of more dialectal features in a sentence is more probable to be distinctive of a specific dialect. Previous research recommended routing samples to native speakers of the different Arabic dialects for higher annotation quality. Our analysis indicates that a large proportion of 6 datasets are samples with ALDi scores < 0.1, which are expected to be MSA samples that can be routed to speakers of any Arabic dialect. Moreover, the lower agreement scores for samples with high ALDi scores show that extra care should be given to these samples. Dataset creators should prioritize routing high-ALDi samples to native speakers of the dialects of these samples, for which the dialects can be automatically identified at higher accuracy as these samples have more dialectal cues. Limitations The trends we report validate our hypothesis. However, more thorough analyses need to be done to understand how ALDi affects each task given its unique nature. Knowing the demographic information about the annotators might have allowed for revealing deeper insights into how speakers of specific Arabic dialects understand samples from other dialects. However, this would have required running a controlled experiment re-annotating the 15 datasets, which we hope future work will attempt. Another potential extension to this work is to analyze the interannotator disagreement on annotating dialectal data for token-level tasks. To the best of our knowledge, all the publicly available token-level Arabic datasets are built by carefully selecting samples written in specific dialects and recruiting native speakers of each of these dialects to perform the annotation, after closely training them. However, even if a multi-dialect token-level dataset is annotated by randomly assigning the samples to speakers of different dialects, the analysis would require a new model to estimate the level of dialectness on the token level, since the Sentence-ALDi model used here works at the sentence level. Lastly, we acknowledge that there are multiple reasons for the annotators to disagree, which include the task’s subjectivity, the annotators’ background, and their worldviews (Uma et al., 2021). However, these factors would have less impact on the annotators’ disagreement if a sample is not fully intelligible. Acknowledgments This work could not have been done without the help of the datasets’ creators who have kindly agreed to share the individual labels for their datasets’ samples. Thanks, Ibrahim Abu Farha, Nora Alturayeif, and Manal Alshehri. We also thank Hamdy Mubarak, Nuha Albadi, Nedjma Ousidhoum, and Hala Mulki for trying to help with finding the individual annotator labels for some of their datasets. Lastly, we appreciate the efforts of the anonymous ARR reviewers, action editors, and area chairs. Thanks for the insightful discussions and valuable suggestions. This work was supported by the UKRI Centre for Doctoral Training in Natural Language Processing, funded by the UKRI (grant EP/S022481/1) and the University of Edinburgh, School of Informatics. References Muhammad Abdul-Mageed, AbdelRahim Elmadany, Chiyu Zhang, El Moatez Billah Nagoudi, Houda Bouamor, and Nizar Habash. 2023. NADI 2023: The fourth nuanced Arabic dialect identification shared task. In Proceedings of ArabicNLP 2023, pages 600– 613, Singapore (Hybrid). Association for Computational Linguistics. Ibrahim Abu Farha and Walid Magdy. 2020. From Arabic sentiment analysis to sarcasm detection: The ArSarcasm dataset. In Proceedings of the 4th Workshop on Open-Source Arabic Corpora and Process782 ing Tools, with a Shared Task on Offensive Language Detection, pages 32–39, Marseille, France. European Language Resource Association. Ibrahim Abu Farha and Walid Magdy. 2022. The effect of Arabic dialect familiarity on data annotation. In Proceedings of the The Seventh Arabic Natural Language Processing Workshop (WANLP), pages 399– 408, Abu Dhabi, United Arab Emirates (Hybrid). Association for Computational Linguistics. Ibrahim Abu Farha, Silviu Vlad Oprea, Steven Wilson, and Walid Magdy. 2022. SemEval-2022 task 6: iSarcasmEval, intended sarcasm detection in English and Arabic. In Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022), pages 802–814, Seattle, United States. Association for Computational Linguistics. Ibrahim Abu Farha, Wajdi Zaghouani, and Walid Magdy. 2021. Overview of the WANLP 2021 shared task on sarcasm and sentiment detection in Arabic. In Proceedings of the Sixth Arabic Natural Language Processing Workshop, pages 296–305, Kyiv, Ukraine (Virtual). Association for Computational Linguistics. Azalden Alakrot, Liam Murray, and Nikola S. Nikolov. 2018. Dataset construction for the detection of antisocial behaviour in online communication in Arabic. Procedia Computer Science, 142:174–181. Arabic Computational Linguistics. Basma Alharbi, Hind Alamro, Manal Alshehri, Zuhair Khayyat, Manal Kalkatawi, Inji Ibrahim Jaber, and Xiangliang Zhang. 2021. Asad: A twitter-based benchmark Arabic sentiment analysis dataset. Israa Alsarsour, Esraa Mohamed, Reem Suwaileh, and Tamer Elsayed. 2018. DART: A large dataset of dialectal Arabic tweets. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA). Yousef Altaher, Ali Fadel, Mazen Alotaibi, Zaid Alyazidi, et al. 2022. Masader Plus: A new interface for exploring +500 Arabic NLP datasets. arXiv preprint arXiv:2208.00932. Nora Saleh Alturayeif, Hamzah Abdullah Luqman, and Moataz Aly Kamaleldin Ahmed. 2022. Mawqif: A multi-label Arabic dataset for target-specific stance detection. In Proceedings of the The Seventh Arabic Natural Language Processing Workshop (WANLP), pages 174–184, Abu Dhabi, United Arab Emirates (Hybrid). Association for Computational Linguistics. Zaid Alyafeai, Maraim Masoud, Mustafa Ghaleb, and Maged S. Al-shaibani. 2021. Masader: Metadata sourcing for Arabic text and speech data resources. As-Said Muhámmad Badawi. 1973. Mustawayat alarabiyya al-muasira fi Misr. Dar al-maarif. A. Bergman and Mona Diab. 2022. Towards responsible natural language annotation for the varieties of Arabic. In Findings of the Association for Computational Linguistics: ACL 2022, pages 364–371, Dublin, Ireland. Association for Computational Linguistics. Business for Social Responsibility. 2022. Human rights due diligence of meta’s impacts in Israel and Palestine in may 2021. https://about.fb.com/wpcontent/uploads/2022/09/Human-Rights-DueDiligence-of-Metas-Impacts-in-Israel-andPalestine-in-May-2021.pdf. Shammur Absar Chowdhury, Hamdy Mubarak, Ahmed Abdelali, Soon-gyo Jung, Bernard J. Jansen, and Joni Salminen. 2020. A multi-platform Arabic news comment dataset for offensive language detection. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 6203–6212, Marseille, France. European Language Resources Association. AbdelRahim A. Elmadany, Hamdy Mubarak, and Walid Magdy. 2018. An Arabic speech-act and sentiment corpus of tweets. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018). European Language Resources Association (ELRA). The 3rd Workshop on Open-Source Arabic Corpora and Processing Tools, OSACT3 ; Conference date: 08-05-2018. Joseph L. Fleiss. 1971. Measuring nominal scale agreement among many raters. Psychological bulletin, 76(5):378. Nizar Y. Habash. 2010. Introduction to Arabic natural language processing, 1 edition, volume 3 of Synthesis Lectures on Human Language Technologies. Morgan and Claypool Publishers. Amr Keleg, Sharon Goldwater, and Walid Magdy. 2023. ALDi: Quantifying the Arabic level of dialectness of text. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 10597–10611, Singapore. Association for Computational Linguistics. Amr Keleg and Walid Magdy. 2023. Arabic dialect identification under scrutiny: Limitations of single-label classification. In Proceedings of ArabicNLP 2023, pages 385–398, Singapore (Hybrid). Association for Computational Linguistics. Giovanni Molina, Fahad AlGhamdi, Mahmoud Ghoneim, Abdelati Hawwari, Nicolas ReyVillamizar, Mona Diab, and Thamar Solorio. 2016. Overview for the second shared task on language identification in code-switched data. In Proceedings of the Second Workshop on Computational Approaches to Code Switching, pages 40–49, Austin, Texas. Association for Computational Linguistics. Hamdy Mubarak and Kareem Darwish. 2016. Demographic surveys of Arab annotators on CrowdFlower. In Proceedings of ACM WebSci16 Workshop “Weaving Relations of Trust in Crowd Work: Transparency and Reputation across Platforms. 783 Hamdy Mubarak, Kareem Darwish, and Walid Magdy. 2017. Abusive language detection on Arabic social media. In Proceedings of the First Workshop on Abusive Language Online, pages 52–56, Vancouver, BC, Canada. Association for Computational Linguistics. Mahmoud Nabil, Mohamed Aly, and Amir Atiya. 2015. ASTD: Arabic sentiment tweets dataset. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2515–2519, Lisbon, Portugal. Association for Computational Linguistics. Helene Olsen, Samia Touileb, and Erik Velldal. 2023. Arabic dialect identification: An in-depth error analysis on the MADAR parallel corpus. In Proceedings of ArabicNLP 2023, pages 370–384, Singapore (Hybrid). Association for Computational Linguistics. Nedjma Ousidhoum, Zizheng Lin, Hongming Zhang, Yangqiu Song, and Dit-Yan Yeung. 2019. Multilingual and multi-aspect hate speech analysis. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4675– 4684, Hong Kong, China. Association for Computational Linguistics. Sara Rosenthal, Noura Farra, and Preslav Nakov. 2017. SemEval-2017 task 4: Sentiment analysis in Twitter. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 502– 518, Vancouver, Canada. Association for Computational Linguistics. Maarten Sap, Swabha Swayamdipta, Laura Vianna, Xuhui Zhou, Yejin Choi, and Noah A. Smith. 2022. Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5884–5906, Seattle, United States. Association for Computational Linguistics. Alexandra N Uma, Tommaso Fornaciari, Dirk Hovy, Silviu Paun, Barbara Plank, and Massimo Poesio. 2021. Learning from disagreement: A survey. Journal of Artificial Intelligence Research, 72:1385–1470. Howard Wainer, Marc Gessaroli, and Monica Verdi. 2006. Visual revelations. CHANCE, 19(1):49–52. A Detailed Description of the Datasets We noticed some discrepancies between the number of samples reported in the papers and the number of samples in the corresponding raw datasets. Despite following any filtration steps described in the papers, some of the datasets had more samples than the ones in the publicly released version, as indicated in Table A1. Additionally, the ArSarcasmv1, Mawqif (Stance Task), Mawqif (Sentiment/Sarcasm Tasks), and ASAD had 516, 170, 151, 191 samples with less than 3 annotations respectively, that we decided to discard from our analysis. Conversely, we decided to discard the MLMA dataset (Ousidhoum et al., 2019) for which the authors shared with us some of the raw annotations files. The number of samples in these files was too small compared to the number of samples in the public dataset with majority-vote labels. We also discarded another dataset, for which there was a significant discrepancy between the released dataset and the paper’s description of the dataset. B Code-mixing between MSA and DA Researchers distinguish between Modern Standard Arabic (MSA), and Dialectal Arabic (DA) (Habash, 2010). However, MSA and DA do not exist in isolation, and Arabic speakers sometimes code-mix between terms that can be considered to belong to MSA and others considered to be part of a variety of DA. Notably, some terms can be considered to belong to both MSA and a variety of DA, and even using the surrounding context may not be enough for disambiguation (Molina et al., 2016). Badawi (1973) recognizes five levels of Arabic used in Egypt, that can be categorized according to the amount of code-mixing in addition to the dialectness of the terms/phrases used. The SentenceALDi model, developed by Keleg et al. (2023), estimates the level of dialectness of Arabic sentences, which provides an automatic proxy to distinguish between Arabic sentences according to how they diverge from MSA. We used the Sentence-ALDi model to study the relation between the ALDi score and the agreement between the annotators for 15 Arabic datasets. C Discussion about the Analysis As described in §2, each dataset’s samples were split into 10 bins of equal width according to their respective ALDi scores. Afterward, the correlation between each bin’s midpoint ALDi score and the percentage of samples having full agreement % full agree was computed. For each bin, % full agree represents the Maximum Likelihood Estimation (MLE) for the probability that all the annotators agree on the same label for the samples of this bin. Inability to use Interannotator Agreement metrics for some datasets Automated metrics such as Fleiss’ Kappa (Fleiss, 1971) attempt to measure the Interannotator Agreement (IAA) while accounting for the random agreement/disagreement 784 Dataset Task (# labels) Labels Distribution of Majority-vote Labels Dataset/Paper Discrepancy Deleted Comments Dataset (DCD) (Mubarak et al., 2017) Offensive (3) Confidence Offensive (80.31%) Clean (17.76%) Obscene (1.58%) No Majority (0.35%) MPOLD (Chowdhury et al., 2020) Offensive (2) Individual Non-Offensive (83.12%) Offensive (16.88%) YouTube Cyberbullying Offensive (2) Individual Not (61.38%) HateSpeech (38.62%) (YTCB) (Alakrot et al., 2018) ASAD (Alharbi et al., 2021) Sentiment (3) Individual Neutral (67.83%) Negative (15.33%) Positive (15.19%) No Majority (1.65%) The authors shared with us the raw annotation file of which we analyze 100,484 samples with three annotations or more, as opposed to the 95,000 in the released dataset. ArSAS (Elmadany et al., 2018) Sentiment (4) Confidence Negative (35.38%) Neutral (33.45%) Positive (20.51%) No Majority (6.07%) Mixed (4.59%) Speech Act (6) Confidence Expression (55.07%) Assertion (38.63%) Question (3.32%) No Majority (1.81%) Request (0.67%) Recommendation (0.31%) Miscellaneous (0.18%) ArSarcasm-v1 (Abu Farha and Magdy, 2020) Dialect (5) Individual msa (67.56%) egypt (19.37%) No Majority (5.83%) gulf (3.61%) levant (3.46%) magreb (0.18%) The samples in the raw annotation artifact shared by the authors has 10,641 samples, as opposed to the 10,547 samples in the released dataset. Sarcasm (2) Individual False (84.24%) True (15.7%) No Majority (0.06%) Sentiment (3) Individual neutral (49.45%) negative (32.57%) positive (14.58%) No Majority (3.4%) Mawqif (Alturayeif et al., 2022) Sarcasm (2) Individual No (95.97%) Yes (3.78%) No Majority (0.25%) The authors annotated the same samples for sentiment/sarcasm and stance separately. This was done across 8 different annotation jobs (4 each), for which the authors shared the raw annotation files with us. The number of samples in these files is 4,093 for sentiment/sarcasm and 4,079 for stance, of which 3,942 and 3,909 have three or more annotations. The released dataset is reported to have 4,100 samples. Sentiment (3) Individual Positive (41.15%) Negative (31.46%) Neutral (22.68%) No Majority (4.72%) Stance (3) Individual Favor (60.5%) Against (27.65%) None (7.7%) No Majority (4.14%) iSarcasm’s test set (Abu Farha et al., 2022) Dialect (5) Individual msa (32.29%) nile (31.36%) gulf (16.5%) No Majority (15.79%) levant (2.21%) maghreb (1.86%) The dataset having the individual annotator labels is released as an artifact accompanying the following paper (Abu Farha and Magdy, 2022). Sarcasm (2) Individual 0 (82.07%) 1 (17.93%) DART (Alsarsour et al., 2018) Dialect (5) Proportion GLF (24.27%) EGY (21.69%) IRQ (21.64%) LEV (16.22%) MGH (16.18%) Table A1: A detailed description of the distribution of the majority-vote labels and the data/paper discrepancies in the datasets with individual annotator labels included in our study. Note 1: No Majority means that multiple labels have the same majority number of votes for Individual/Proportion labels, and Confidence < 0.5 otherwise. Note 2: Some of the samples of the ASAD, ArSarcasm-v1, Mawqif datasets have more than 3 annotations, despite the fact the former two are supposed to have only three annotations per sample. between annotators. In principle, it might be possible to perform a version of our analysis using Fleiss’ Kappa rather than % full agree as the dependent variable. However, computing Fleiss’ Kappa would require knowledge of the individual annotations for each sample. Such annotations are not available for the ArSAS (Sentiment/Speech Act), DART, and DCD datasets as described in Table A1. Since we wanted to include as many datasets as possible, we used % full agree instead. Logistic regression as an alternative analysis tool Binning the data leads to a loss of analytical information which might impact the results of the analysis, especially if implausible bins’ boundaries are used (Wainer et al., 2006). Logistic regression with binary outcomes is an alternative analysis that alleviates the limitations of binning. Each sample has a continuous ALDi score as the independent variable, and a binary outcome Full Annotator Agreement (Yes/No). After fitting a logistic regression model to predict the binary outcome, the coefficient of the ALDi variable measures the impact of ALDi on the odds of full agreement. If this coefficient is negative, then the odds of full annotator agreement decrease as the ALDi score increases. Figure C1 demonstrates the probability of full agreement of each dataset, in addition to the coefficient of the ALDi score with its 95% confi785 dence interval. For the 8 non-DI datasets with CoefALDi < −0.2, the coefficients can be considered to be statistically significant since the confidence interval does not include zero. Both analysis tools (correlation analysis and logistic regression) achieve similar results. The same 8 non-DI datasets—ASAD, ArSarcasm-v1 (Sentiment/Sarcasm), Mawqif (Sentiment/Sarcasm/Stance), iSarcasm, and YTCB—have significantly strong negative correlation coefficients as in Figure 1, and statistically significant coefficients for the ALDi variable which are less than -0.2. However, binning the data allows for visualizing the % full agreement as a scatter plot, which can reveal whether the relation between ALDi and the agreement is linear or not, in addition to having a visual way for determining how well the best-fitting line models the data. Impact of data skewness MSA samples are overrepresented in some of the considered datasets. However, this is generally unproblematic for the analysis, so we opted not to discard the MSA samples. For the method described in Section 2, the samples of each bin are independently used to estimate the MLE of full agreement between annotators. Therefore, the over-representation of MSA samples in some datasets does not impact our analysis. D Trends by Class Label As mentioned in §4, Figures D2, D3, D4, D5, and D6 visualize the impact of ALDi on the annotator agreement after splitting the samples according to their majority-vote labels. We acknowledge that the number of samples in the bins for some classes is not enough to draw concrete conclusions (e.g., samples with high ALDi scores for the Neutral class of the ArSAS, and Mawqif datasets as per Figure D3). E The Rising Trend of ArSAS The ArSAS dataset stands out as a dataset with a rising trend for the Speech Act Detection task and a falling trend for the Sentiment Analysis task. Samples of ArSAS were jointly annotated for their sentiment and speech act. Despite having 6 different speech acts, which would arguably make speech act detection harder than sentiment analysis, the Assertion and Expression classes represent 95% of the samples. Looking at their respective trends shown in Figure D5, the two acts show two different behaviors. Most of the assertive samples have ALDi scores <0.2 (arguably, all are MSA ones). Moreover, the number of Assestion samples with high ALDi scores is not enough to estimate the % full agree for their respective bins. Conversely, the Expression act shows higher agreement as the ALDi score increases. The creators of ArSAS noticed that most of the Assertion samples were annotated as Neutral, while most of the Expression samples had polarized sentiment (mostly Negative). The annotators might have treated the Assertion class as the act for Objective sentences, while treating Expression as the act for Subjective sentences. This is arguably easier than sentiment analysis which might explain why annotators agree more on the Speech Act label than the Sentiment label for the ArSAS dataset. Further analysis is required to explain the trends of this dataset. 786 Sentiment Analysis 0.2 0.4 0.6 0.8 ALDi 0 0.25 0.50 0.75 1 P (Full Agree) 0.71 0.47 ASAD -1.1 -1.0 -4 2 Coef of ALDi = -1.03 * 0.2 0.4 0.6 0.8 ALDi 0 0.25 0.50 0.75 1 P (Full Agree) 0.53 0.48 ArSarcasm-v1 -0.4 -0.1 -4 2 Coef of ALDi = -0.23 * 0.2 0.4 0.6 0.8 ALDi 0 0.25 0.50 0.75 1 P (Full Agree) 0.49 0.23 Mawqif -1.4 -1.0 -4 2 Coef of ALDi = -1.2 * 0.2 0.4 0.6 0.8 ALDi 0 0.25 0.50 0.75 1 P (Full Agree) 0.55 0.52 ArSAS -0.2 -0.1 -4 2 Coef of ALDi = -0.14 * Sarcasm Detection 0.2 0.4 0.6 0.8 ALDi 0 0.25 0.50 0.75 1 P (Full Agree) 0.74 0.36 iSarcasm -1.9 -1.3 -4 2 Coef of ALDi = -1.63 * 0.2 0.4 0.6 0.8 ALDi 0 0.25 0.50 0.75 1 P (Full Agree) 0.78 0.5 ArSarcasm-v1 -1.4 -1.1 -4 2 Coef of ALDi = -1.24 * 0.2 0.4 0.6 0.8 ALDi 0 0.25 0.50 0.75 1 P (Full Agree) 0.95 0.44 Mawqif -3.4 -2.8 -4 2 Coef of ALDi = -3.1 * Speech Act Detection 0.2 0.4 0.6 0.8 ALDi 0 0.25 0.50 0.75 1 P (Full Agree) 0.68 0.71 ArSAS 0.0 0.2 -4 2 Coef of ALDi = 0.14 * Offensive Text Classification and Hate Speech Detection 0.2 0.4 0.6 0.8 ALDi 0 0.25 0.50 0.75 1 P (Full Agree) 0.6 0.63 DCD 0.1 0.2 -4 2 Coef of ALDi = 0.14 * 0.2 0.4 0.6 0.8 ALDi 0 0.25 0.50 0.75 1 P (Full Agree) 0.76 0.82 MPOLD 0.1 0.6 -4 2 Coef of ALDi = 0.34 * 0.2 0.4 0.6 0.8 ALDi 0 0.25 0.50 0.75 1 P (Full Agree) 0.78 0.66 YTCB -0.7 -0.5 -4 2 Coef of ALDi = -0.61 * Stance Detection 0.2 0.4 0.6 0.8 ALDi 0 0.25 0.50 0.75 1 P (Full Agree) 0.68 0.43 Mawqif -1.3 -0.8 -4 2 Coef of ALDi = -1.04 * Arabic Dialect Identification 0.2 0.4 0.6 0.8 ALDi 0 0.25 0.50 0.75 1 P (Full Agree) 0.12 0.27 iSarcasm 0.6 1.4 -4 2 Coef of ALDi = 0.99 * 0.2 0.4 0.6 0.8 ALDi 0 0.25 0.50 0.75 1 P (Full Agree) 0.56 0.34 ArSarcasm-v1 -1.0 -0.8 -4 2 Coef of ALDi = -0.91 * 0.2 0.4 0.6 0.8 ALDi 0 0.25 0.50 0.75 1 P (Full Agree) 0.48 0.82 DART 1.5 1.8 -4 2 Coef of ALDi = 1.63 * Figure C1: For each dataset, plots show the estimated probability of full agreement according to each dataset’s fitted logistic regression model. Under each plot, the coefficient of ALDi with its 95% confidence interval is visualized. Nearly all datasets (marked with *) have confidence intervals that do not include zero, meaning the effect of ALDi is statistically significant at p < 0.05. Negative coefficients indicate that higher ALDi scores predict lower agreement. 787 0.2 0.4 0.6 0.8 ALDi 0 20 40 60 80 100 % full agree m = -23.9 252 5306 ArSarcasm-v1 (False), = -0.87 * 0.2 0.4 0.6 0.8 ALDi 0 20 40 60 80 100 % full agree m = 16.47 78 506 ArSarcasm-v1 (True), = 0.62 0.2 0.4 0.6 0.8 ALDi 0 20 40 60 80 100 % full agree m = -31.38 42 2274 Mawqif (No), = -0.97 * 0.2 0.4 0.6 0.8 ALDi 0 20 40 60 80 100 % full agree m = -6.71 6 23 Mawqif (Yes), = -0.16 0.2 0.4 0.6 0.8 ALDi 0 20 40 60 80 100 % full agree m = -34.71 47 402 iSarcasm (0), = -0.8 * 0.2 0.4 0.6 0.8 ALDi 0 20 40 60 80 100 % full agree m = 15.25 10 66 iSarcasm (1), = 0.46 Figure D2: The trends for the classes of the Saracasm Detection datasets. Statistically significant correlation coefficients (ρ) are marked with *. 0.2 0.4 0.6 0.8 ALDi 0 20 40 60 80 100 % full agree m = 10.38 663 3385 ASAD (Negative), = 0.91 * 0.20.40.60.8 ALDi 0 20 40 60 80 100 % full agree m = -32.66 3679 29303 ASAD (Neutral), = -0.98 * 0.2 0.4 0.6 0.8 ALDi 0 20 40 60 80 100 % full agree m = 13.57 935 3908 ASAD (Positive), = 0.82 * 0.2 0.4 0.6 0.8 ALDi 0 20 40 60 80 100 % full agree m = -4.59 52 408 ArSAS (Mixed), = -0.35 0.2 0.4 0.6 0.8 ALDi 0 20 40 60 80 100 % full agree m = 7.83 392 2924 ArSAS (Negative), = 0.76 * 0.2 0.4 0.6 0.8 ALDi 0 20 40 60 80 100 % full agree m = -48.66 29 5873 ArSAS (Neutral), = -0.89 * 0.2 0.4 0.6 0.8 ALDi 0 20 40 60 80 100 % full agree m = 2.82 169 2272 ArSAS (Positive), = 0.17 0.2 0.4 0.6 0.8 ALDi 0 20 40 60 80 100 % full agree m = 9.8 133 1618 ArSarcasm-v1 (negative), = 0.65 * 0.2 0.4 0.6 0.8 ALDi 0 20 40 60 80 100 % full agree m = -31.2 97 3344 ArSarcasm-v1 (neutral), = -0.85 * 0.2 0.4 0.6 0.8 ALDi 0 20 40 60 80 100 % full agree m = 3.35 62 702 ArSarcasm-v1 (positive), = 0.16 0.2 0.4 0.6 0.8 ALDi 0 20 40 60 80 100 % full agree m = -9.5 32 458 Mawqif (Negative), = -0.69 * 0.2 0.4 0.6 0.8 ALDi 0 20 40 60 80 100 % full agree m = -17.17 17 500 Mawqif (Neutral), = -0.55 0.2 0.4 0.6 0.8 ALDi 0 20 40 60 80 100 % full agree m = -63.59 2 1244 Mawqif (Positive), = -0.89 * Figure D3: The trends for the classes of the Sentiment Analysis datasets. Statistically significant correlation coefficients (ρ) are marked with *. 788 0.2 0.4 0.6 0.8 ALDi 0 20 40 60 80 100 % full agree m = -13.95 470 1787 YTCB (Not), = -0.93 * 0.2 0.4 0.6 0.8 ALDi 0 20 40 60 80 100 % full agree m = -0.82 209 1486 YTCB (HateSpeech), = -0.13 0.2 0.4 0.6 0.8 ALDi 0 20 40 60 80 100 % full agree m = -10.61 86 4143 DCD (Clean), = -0.83 * 0.2 0.4 0.6 0.8 ALDi 0 20 40 60 80 100 % full agree m = 36.64 29 126 DCD (Obscene), = 0.78 * 0.20.40.60.8 ALDi 0 20 40 60 80 100 % full agree m = -2.42 565 15505 DCD (Offensive), = -0.32 0.2 0.4 0.6 0.8 ALDi 0 20 40 60 80 100 % full agree m = 9.13 188 955 MPOLD (Non-Offensive), = 0.65 * 0.2 0.4 0.6 0.8 ALDi 0 20 40 60 80 100 % full agree m = 6.32 42 158 MPOLD (Offensive), = 0.24 Figure D4: The trends for the classes of the Offensive Text Classification and Hate Speech datasets. Statistically significant correlation coefficients (ρ) are marked with *. 0.2 0.4 0.6 0.8 ALDi 0 20 40 60 80 100 % full agree m = -55.48 8 6868 ArSAS (Assertion), = -0.81 * 0.2 0.4 0.6 0.8 ALDi 0 20 40 60 80 100 % full agree m = 13.93 663 4549 ArSAS (Expression), = 0.89 * Figure D5: The trends for the Assertion and Expression labels of the ArSAS dataset, which represent 95% of the dataset samples. Statistically significant correlation coefficients (ρ) are marked with *. 0.2 0.4 0.6 0.8 ALDi 0 20 40 60 80 100 % full agree m = -7.52 26 383 Mawqif (Against), = -0.35 0.2 0.4 0.6 0.8 ALDi 0 20 40 60 80 100 % full agree m = -18.96 7 158 Mawqif (None), = -0.61 0.2 0.4 0.6 0.8 ALDi 0 20 40 60 80 100 % full agree m = -19.25 22 1652 Mawqif (Favor), = -0.67 * Figure D6: The trends for the classes of Mawqif’s Stance dataset. Statistically significant correlation coefficients (ρ) are marked with *. 789
2024_short
69
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 75–84 August 11-16, 2024 ©2024 Association for Computational Linguistics Isotropy, Clusters, and Classifiers Timothee Mickus♡ Stig-Arne Grönroos♡♠ Joseph Attieh♡ ♡University of Helsinki, ♠Silo.AI, Finland [email protected] Abstract Whether embedding spaces use all their dimensions equally, i.e., whether they are isotropic, has been a recent subject of discussion. Evidence has been accrued both for and against enforcing isotropy in embedding spaces. In the present paper, we stress that isotropy imposes requirements on the embedding space that are not compatible with the presence of clusters—which also negatively impacts linear classification objectives. We demonstrate this fact both empirically and mathematically and use it to shed light on previous results from the literature. 1 Introduction Recently, there has been much discussion centered around whether vector representations used in NLP do and should use all dimensions equally. This characteristic is known as isotropy: In an isotropic embedding model, every direction is equally probable, ensuring uniform data representation without directional bias. At face value, such a characteristic would appear desirable: Naively, one could argue that an anisotropic embedding space would be overparametrized, since it can afford to use some dimensions inefficiently. The debate surrounding isotropy was initially sparked by Mu and Viswanath (2018), who highlighted that isotropic static representations fared better on common lexical semantics benchmarks, and Ethayarajh (2019), who stressed that contextual embeddings are anisotropic. Since then, evidence has been accrued both for and against enforcing isotropy on embeddings. In the present paper, we demonstrate that this conflicting evidence can be accounted for once we consider how isotropy relates to embedding space geometry. Strict isotropy, as assessed by IsoScore (Rudman et al., 2022), requires the absence of clusters, and thereby also conflicts with linear classification objectives. This echoes previous empirical studies connecting isotropy and cluster structures (Ait-Saada and Nadif, 2023, a.o.). In the present paper, we formalize this connection mathematically in Section 2. We then empirically verify our mathematical approach in Section 3, discuss how this relation sheds light on earlier works focusing on anisotropy in Section 4, and conclude with directions for future work in Section 5. 2 Some conflicting optimization objectives We can show that isotropy—as assessed by IsoScore (Rudman et al., 2022)—impose requirements that conflict with cluster structures—as assessed by silhouette scores (Rousseeuw, 1987)—as well as linear classifier objectives. Notations. In what follows, let D be a multiset of points in a vector space, Ωa set of labels, and ℓ: D →Ωa labeling function that associates a given data-point in D to the relevant label. Without loss of generality, let us further assume that D is PCA-transformed. Let us also define the following constructs for clarity of exposition: Dω = {d : ℓ(d) = ω} sign(ω, ω′) = ( −1 if ω = ω′ +1 otherwise Simply put, Dω is the subset of points in D with label ω, whereas the sign function helps delineate terms that need to be maximized (inter-cluster) vs. terms that need to be minimized (intra-cluster). 2.1 Silhouette objective for clustering We can consider whether the groups as defined by ℓare in fact well delineated by the Euclidean distance, i.e., whether they form natural clusters. This is something that can be assessed through silhouette scores, which involve a separation and a cohesion score for each data-point. The cohesion score consists in computing the average distance 75 between the data-point and other members of its group, whereas separation consists in computing the minimum cohesion score the data-point could have received with any other label than the one it was assigned to. More formally, let: cost(d, S) = 1 |S| X d′∈S sX i (di −d′i)2 then we can define the silhouette for one sample as coh(d) = cost d, Dℓ(d) \ {d}  sep(d) = min ω′∈Ω\{ℓ(d)} cost (d, Dω′) silhouette(d) = sep(d) −coh(d) max{sep(d), coh(d)} Or in other words, the silhouette score is maximized when separation cost (sep) is maximized and cohesion cost (coh) is minimized. Hence, to maximize the silhouette score across the whole dataset D, one needs to (i) maximize all inter-cluster distances, and (ii) minimize all intra-cluster distances. We can therefore define a maximization objective for the entire set D: X d∈D X d′∈D sign(ℓ(d), ℓ(d′)) sX i (di −d′i)2 which, due to the monotonicity of the square root in R+, will have the same optimal argument D∗as the simpler objective OS OS = X d∈D X d′∈D sign(ℓ(d), ℓ(d′)) X i di −d′i 2 (1) 2.2 Incompatibility with IsoScore How does the objective in (1) conflict with isotropy requirements? Assessments of isotropy such as IsoScore generally rely on the variance vector. As we assume D to be PCA transformed, the covariance matrix is diagonalized, and we can obtain variance for each individual component through pairwise squared distances (Zhang et al., 2012): V(D)i = 1 2|D|2 X d∈D X d′∈D di −d′i 2 In IsoScore, this variance vector is then normalized to the length of the ⃗1 vector of all ones, before computing the distance between the two: v u u tX i ∥⃗1∥2 ∥V(D)∥2 V(D)i −1 !2 ca cb r Figure 1: Relation between angle and chord. This distance is taken as an indicator of isotropy defect, i.e., isotropic spaces will minimize it. Given the normalization applied to the variance vector, the defect is computed as the distance between two points on a hyper-sphere. Hence it is conceptually simpler to think of this distance as an angle measurement: Remark that as the cosine between V(D) and ⃗1 increases, the isotropy defect decreases. A diagram illustrating this relation is provided in Figure 1: For a given reference point r and two comparison points ca and cb, we can observe that the shortest chord (from r to ca) also corresponds to the smallest angle. More formally, let ˜v = ∥⃗1∥2 ∥V(D)∥2 V(D) be the renormalized observed variance vector. We can note that both ˜v and the ideal variance vector ⃗1 are points on the hyper-sphere centered at the origin and of radius ∥⃗1∥2. As such, the defect is then equal to the distance between two points on a circle, i.e., the length of the chord between the renormalized observed variance vector and the ideal variance vector—which can be computed by simple trigonometry means, as 2∥⃗1∥2 sin (α/2), with α the angle between ˜v and ⃗1. This can be converted to the more familiar cosine by applying a trigonometry identity (given that 0 ≤α ≤π/4): ∥˜v −⃗1∥2 = 2∥⃗1∥2 p 1 −cos2(α/2) 1 4d∥˜v −⃗1∥2 2 −1 = −cos2(α/2) where d is the dimension of the vectors in our point cloud. Hence we can exactly relate the isotropic defect (squared) to the cosine (squared) of the angle between ideal and observed variance vectors. By monotonicity arguments, we can simplify this as follows: To maximize isotropy, we have to maximize the objective OI OI = cos  ⃗1, V (D)  ∝ X d∈D X d′∈D X i di −d′i 2 (2) This intuitively makes sense: Ignoring vector norms, we have to maximize all distances between 76 every pair of data-points to ensure all dimensions are used equally, i.e., spread data-points out evenly on a hyper-sphere. However, in the general case, it is not possible to maximize both the isotropy objective in (2) and the silhouette score objective in (1): Intra-cluster pairwise distances must be minimized for optimal silhouette scores, but must be maximized for optimal isotropy scores. In fact, the two objectives can only be jointly maximized in the degenerate case where no two data-points in D are assigned the same label.1 2.3 Relation to linear classifiers Informally, latent representations need to form clusters corresponding to the labels in order to optimize a linear classification objective. Consider that in classification problems (i) any data-point d is to be associated with a particular label ℓ(d) = ωi and dissociated from other labels Ω\{ℓ(d)}, and (ii) association scores are computed using a dot product between the latent representation to be classified and the output projection matrix, where each column vector cω corresponds to a different class label ω. As such, for any point d to be associated with its label ℓ(d), one has to maximize ⟨d, cℓ(d)⟩= 1 2 ∥d∥2 2 + ∥cℓ(d)∥2 2 −∥d −cℓ(d)∥2 2  In other words, one must either augment the norm of d or cℓ(d), or minimize the distance between d and cℓ(d). Note however that this does not factor in the other classes ω′ ∈Ω\ {ℓ(d)} from which d should be dissociated, i.e., where we must minimize the above quantity. To account for the other classes, the global objective OC to maximize can be defined as OC = − X d∈D X ω∈Ω sign (ω, ℓ(d)) ⟨d, cω⟩ = − X d∈D |Ω| −2 2 ∥d∥2 2 − X ω∈Ω |D| −2|Dω| 2 ∥cω∥2 2 + 1 2 X d∈D X ω∈Ω sign (ω, ℓ(d)) X i (di −cω i )2 (3) where the weights |Ω| −2 and |D| −2|Dω| stem from counting how many other vectors a given data or class vector is associated with or dissociated from: we have one label to associate with any datapoint d, and |Ω| −1 to dissociate it from; whereas 1Hence some NLP applications and tasks need not be impeded by isotropy constrains, e.g., linear analogies that rely on vector offsets are a prima facie compatible with isotropy. a class vector cω should be associated with the corresponding subset Dω and dissociated from the rest of the dataset (viz. D \ Dω).2 Focusing on the last line of Equation (3), we find that maximizing classification objectives entails minimizing the distance between a latent representation d and the vector for its label cℓ(d), and maximizing its distance to all other class vectors. It is reminiscent of the silhouette score in Equation (1): In particular any optimum for OC is an optimum for OS, since it entails D∗such that ∀d, d′ ∈D∗ ℓ(d) = ℓ(d′) ⇐⇒d = d′ (4) Informally: The cluster associated with a label should collapse to a single point. Therefore the isotropic objective OI in Equation (2) is equally incompatible with the learning objective OC of a linear classifier. In summary, (i) point clouds cannot both contain well-defined clusters and be isotropic; and (ii) linear classifiers should yield clustered and thereby anisotropic representations. 3 Empirical confirmation To verify the validity of our demonstrations in Section 2, we can optimize a set of data-points for a classification task using a linear classifier: We should observe an increase in silhouette scores, and a decrease in IsoScore. Note that we are therefore evaluating the behavior of parameters as they are optimized; i.e., we do not intend to test whether silhouettes and IsoScore behave as expected on held-out data. This both allows us to precisely test the argument laid out in Section 2 and cuts down computational costs significantly. 3.1 Methodology We consider four setups: (i) optimizing SBERT sentence embeddings (Reimers and Gurevych, 2019)3 on the binary polarity dataset of Pang and Lee (2004); (ii) optimizing paired SBERT embeddings3 on the validation split of SNLI (Bowman et al., 2015); (iii) optimizing word2vec embeddings4 on 2The corresponding two sums can be understood as probabilistic priors over the data: The objective entails that the norm of a class vector cω should be proportional to the number of data-points with this label ω, whereas one would expect a uniform distribution for vectors d. These terms cancel out for balanced, binary classification tasks. 3all-MiniLM-L6-v2 4http://vectors.nlpl.eu/repository/, model 222, trained on an English Wikipedia dump of November 2021. 77 Dataset N. items N. params. Pang and Lee (2004) 10 662 4 094 976 through nltk (Bird and Loper, 2004) Bowman et al. (2015) 9 842 4 987 395 from nlp.stanford.edu Mickus et al. (2022b) 11 462 4 341 004 from codwoe.atilf.fr Fellbaum (1998) 2 275 690 326 from github.com/altsoph Table 1: Dataset vs. number of datapoints (N. items) and corresponding number of trainable parameters (N. params.). POS-tagging multi-label classification using the English CoDWoE dataset (Mickus et al., 2022b); and (iv) optimizing word2vec embeddings4 for WordNet supersenses multi-label classification (Fellbaum, 1998; pre-processed by Tikhonov et al., 2023). All these datasets and models are in English and CC-BY or CC-BY-SA.5 Relevant information is available in Table 1; remark we do not split the data as we are interested on optimization behavior. We also replicate and extend these experiments on GLUE in Appendix A. For (i) and (ii), we directly optimize the output embeddings of the SBERT model rather than update the parameters of the SBERT model. In all cases, we compute gradients for the entire dataset, and compute silhouette scores with respect to the target labels and IsoScore over 1000 updates. In multi-label cases (iii) and (iv), we consider distinct label vectors as distinct target assignments when computing silhouette scores. Models are trained using the Adam algorithm (Kingma and Ba, 2014);6 in cases (i) and (ii) we optimize cross-entropy, in cases (iii) and (iv), binary cross-entropy per label. Remark that setups (ii), (iii) and (iv) subtly depart from the strict requirements laid out in Section 2. Training per model requires between 10 minutes and 1 hour on an RTX3080 GPU; much of which is in fact devoted to CPU computations for IsoScore and silhouette scores values. Hyperparameters listed correspond to default PyTorch values (Paszke et al., 2019), no hyperparameter search was carried out. IsoScore is computed with the pip package IsoScore (Rudman et al., 2022) on unpaired embeddings, silhouette scores with scikit-learn (Pedregosa et al., 2011). 5Our use is consistent with the intended use of these resources. We trust the original creators of these resources that they contain no personally identifying data. 6Learning rate of 0.001, β of (0.9, 0.999). 0 200 400 600 800 1000 step 0.2 0.0 0.2 0.4 0.6 silhouette task polarity snli pos supersenses model SBERT W2V (a) Silhouette across training 0 200 400 600 800 1000 step 7 6 5 4 3 2 1 log(isoscore) task polarity snli pos supersenses model SBERT W2V (b) Log-normalized IsoScore across training Figure 2: Evolution of silhouette score and IsoScore across classification optimization (avg. of 5 runs). 0.2 0.0 0.2 0.4 0.6 silhouette 0.0 0.1 0.2 0.3 isoscore task polarity pos snli supersenses model SBERT W2V Figure 3: Relationship between silhouette scores and IsoScore (avg. of 5 runs). 3.2 Results Results of this empirical study are displayed in Section 3.1. Performances with five different random initialization reveal negligible standard deviations (maximum at any step < 0.0054, on average < 0.0008). Our demonstration is validated: Across training to optimize classification tasks, the datapoints become less isotropic and better clustered. We can also see a monotonically decreasing relationship between IsoScore and silhouette scores, which is better exemplified in Figure 3: We find correlations with Pearson’s r of −0.808 for the polarity task, −0.878 for SNLI, −0.947 for POStagging and −0.978 for supersense tagging; Spearman’s ρ are always below −0.998. In summary, we empirically confirm that isotropy requirements conflict with silhouette scores and linear classification objectives. 78 4 Related works How does the connection between clusterability and isotropy that we outlined shed light on the growing literature on anisotropy? While there is currently more evidence in favor of enforcing isotropy in embeddings, the case is not so clear cut that we can discard negative findings, and a vast majority of the positive evidence relies on improper techniques for quantifying isotropy (Rudman et al., 2022). Ethayarajh (2019) stressed that contextual embeddings are effective yet anisotropic. Ding et al. (2022) provides experiments that advise against using isotropy calibration on transformers to enhance performance in specific tasks. Rudman and Eickhoff (2023) finds that anisotropy regularization in fine-tuning appears to be beneficial on a large array of tasks. Lastly, Rajaee and Pilehvar (2021a) find that the contrasts encoded in dominant dimensions can, at times, capture linguistic knowledge. On the other hand, the original study of Mu and Viswanath (2018) found that enforcing isotropy on static embeddings improved performances on semantic similarity, both at the word and sentence level, as well as word analogy. Subsequently, a large section of the literature has focused on this handful of tasks (e.g., Liang et al., 2021; Timkey and van Schijndel, 2021). Isotropy was also found to be helpful beyond these similarity tasks: Haemmerl et al. (2023) report that isotropic spaces perform much better on cross-lingual tasks, and Jung et al. (2023) stress its benefits for dense retrieval. These are all applications that require graded ranking judgments, and therefore are generally hindered by the presence of clusters—such clusters would for instance introduce large discontinuities in cosine similarity scores. To take Haemmerl et al. (2023) as an example, note that languagespecific clusters are antithetical to the success of cross-lingual transfer applications. It stands to reason that isotropy can be found beneficial in such cases, although the exact experimental setup will necessarily dictate whether it is boon or bane: For instance Rajaee and Pilehvar (2021b) tested finetuning LLMs as Siamese networks to optimize performance on sentence-level similarity, and found enforcing isotropy to hurt performances—here, we can conjecture that learning to assign inputs to specific clusters is a viable solution in their case. The literature has previously addressed the topic of isotropy and clustering. Rajaee and Pilehvar (2021a) advocated for enhancing the isotropy on a cluster-level rather than on a global-level. Cai et al. (2021) confirmed the presence of clusters in the embedding space with local isotropy properties. Ait-Saada and Nadif (2023) investigated the correlation between isotropy and clustering tasks and found that fostering high anisotropy yields highquality clustering representations. The study presented here provides a mathematical explanation for these empirical findings. 5 Conclusion We argued that isotropy and cluster structures are antithetical (Section 2), verified that this argument holds on real data (Section 3), and used it to shed light on earlier results (Section 4). This result however opens novel and interesting directions of research: If anisotropic spaces implicitly entail cluster structures, then what is the structure we observe in our modern, highly anisotropic large language models? Prior results suggest that this structure is in part linguistic in nature (Rajaee and Pilehvar, 2021a), but further confirmation is required. Another topic we intend to pursue in future work concerns the relation between non-classification tasks and isotropy: Isotropy constraints have been found to be useful in problems that are not well modeled by linear classification, e.g. word analogy or sentence similarity. Our present work does not yet offer a thorough theoretical explanation why. Acknowledgments This work is part of the FoTran project, funded by the European Research Council (ERC) under the EU’s Horizon 2020 research and innovation program (agreement №771113). We also thank the CSC-IT Center for Science Ltd., for computational resources. Limitations The present paper leaves a number of important problems open. Idealized conditions. Our discussion in Section 2 points out optima that are incompatible, but says nothing of the behavior of models trained until convergence on held out data. In fact, enforcing isotropy could be argued to be a reasonable regularization strategy in that it would lead latent representations to not be tied to a specific classification 79 structure. Relatedly, a natural point of criticism to raise is whether our reasoning will hold for deep classifiers with non-linearities: Most (if not all) modern deep-learning classification approaches rely on nonlinear activation functions across multiple layers of computations. The present demonstration has indeed yet to be expanded to account for such more common cases. Insofar neural architectures trained on classification objectives are concerned, we strongly conjecture their output embeddings would tend to be anisotropic. The anisotropy of inner representations appears to be a more delicate question: For Transformers, there has been extensive work showcasing that their structure is for the most part additive (Ferrando et al., 2022a,b; Modarressi et al., 2022; Mickus et al., 2022a; Oh and Schuler, 2023; Yang et al., 2023; Mickus and Vázquez, 2023), and we therefore expect anisotropy to spread to bottom layers to some extent. For architectures based on warping random distributions such as normalizing flows (Kobyzev et al., 2021), GANs (Goodfellow et al., 2014), or diffusion models (Ho et al., 2020), the fact that (part of) their input is random and isotropic likely limits how anisotropic their inner representations are. Thoroughness of the mathematical framework. The mathematical formalism is not thorough. For the sake of clarity and given page limitations, we do not include a formal demonstration that the linear classification optimum necessarily satisfies the clustering objective. Likewise, when discussing isotropy in Equation (2), we ignore the cosine denominator. Choice of objectives. Our focus on silhouette scores and linear classifier objectives may seem somewhat restrictive. Our use of the silhouette score in the present derivation is motivated by two facts. First, our interest is in how the point cloud will cluster along the provided labels—this rules out any external evaluation metric comparing predicted and gold label, such as ARI (Hubert and Arabie, 1985) or purity scores. Second, we can also connect silhouette scores to a broader family of clustering metrics such as the Dunn index (Dunn, 1974), the Cali´nski–Harabasz index (Cali´nski and Harabasz, 1974) or the Davies–Bouldin index (Davies and Bouldin, 1979). Silhouette scores have the added benefit of not relying on centroids in their formulation, making their relation to the variance vector V(D) more immediate. We conjecture that these other criteria could be accounted for by means of triangular inequalities, as they imply the same optimum layout D∗as Equation (4). As for our focus on the linear classifier objective, we stress this objective is a straightforward default approach; but see Appendix B for a discussion of triplet loss within a similar framework as sketched here. References Mira Ait-Saada and Mohamed Nadif. 2023. Is anisotropy truly harmful? a case study on text clustering. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 1194–1203, Toronto, Canada. Association for Computational Linguistics. Steven Bird and Edward Loper. 2004. NLTK: The natural language toolkit. In Proceedings of the ACL Interactive Poster and Demonstration Sessions, pages 214–217, Barcelona, Spain. Association for Computational Linguistics. Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632–642, Lisbon, Portugal. Association for Computational Linguistics. Xingyu Cai, Jiaji Huang, Yuchen Bian, and Kenneth Church. 2021. Isotropy in the contextual embedding space: Clusters and manifolds. In International Conference on Learning Representations. Tadeusz Cali´nski and Jerzy Harabasz. 1974. A dendrite method for cluster analysis. Communications in Statistics, 3(1):1–27. David L. Davies and Donald W. Bouldin. 1979. A cluster separation measure. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI1(2):224–227. Yue Ding, Karolis Martinkus, Damian Pascual, Simon Clematide, and Roger Wattenhofer. 2022. On isotropy calibration of transformer models. In Proceedings of the Third Workshop on Insights from Negative Results in NLP, pages 1–9, Dublin, Ireland. Association for Computational Linguistics. J. C. Dunn. 1974. Well-separated clusters and optimal fuzzy partitions. Journal of Cybernetics, 4(1):95– 104. Kawin Ethayarajh. 2019. How contextual are contextualized word representations? Comparing the geometry of BERT, ELMo, and GPT-2 embeddings. In 80 Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 55–65, Hong Kong, China. Association for Computational Linguistics. Christiane Fellbaum. 1998. WordNet: An Electronic Lexical Database. Bradford Books. Javier Ferrando, Gerard I. Gállego, Belen Alastruey, Carlos Escolano, and Marta R. Costa-jussà. 2022a. Towards opening the black box of neural machine translation: Source and target interpretations of the transformer. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 8756–8769, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Javier Ferrando, Gerard I. Gállego, and Marta R. Costajussà. 2022b. Measuring the mixing of contextual information in the transformer. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 8698–8714, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Advances in Neural Information Processing Systems, volume 27. Curran Associates, Inc. Katharina Haemmerl, Alina Fastowski, Jindˇrich Libovický, and Alexander Fraser. 2023. Exploring anisotropy and outliers in multilingual language models for cross-lingual semantic sentence similarity. In Findings of the Association for Computational Linguistics: ACL 2023, pages 7023–7037, Toronto, Canada. Association for Computational Linguistics. Jonathan Ho, Ajay Jain, and Pieter Abbeel. 2020. Denoising diffusion probabilistic models. In Advances in Neural Information Processing Systems, volume 33, pages 6840–6851. Curran Associates, Inc. Lawrence Hubert and Phipps Arabie. 1985. Comparing partitions. Journal of Classification, 2(1):193–218. Euna Jung, Jungwon Park, Jaekeol Choi, Sungyoon Kim, and Wonjong Rhee. 2023. Isotropic representation can improve dense retrieval. In Advances in Knowledge Discovery and Data Mining, pages 125– 137, Cham. Springer Nature Switzerland. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. Ivan Kobyzev, Simon J.D. Prince, and Marcus A. Brubaker. 2021. Normalizing flows: An introduction and review of current methods. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(11):3964–3979. Yuxin Liang, Rui Cao, Jie Zheng, Jie Ren, and Ling Gao. 2021. Learning to remove: Towards isotropic pretrained BERT embedding. In Artificial Neural Networks and Machine Learning – ICANN 2021, pages 448–459, Cham. Springer International Publishing. Timothee Mickus, Denis Paperno, and Mathieu Constant. 2022a. How to dissect a Muppet: The structure of transformer embedding spaces. Transactions of the Association for Computational Linguistics, 10:981–996. Timothee Mickus, Kees Van Deemter, Mathieu Constant, and Denis Paperno. 2022b. Semeval-2022 task 1: CODWOE – comparing dictionaries and word embeddings. In Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022), pages 1–14, Seattle, United States. Association for Computational Linguistics. Timothee Mickus and Raúl Vázquez. 2023. Why bother with geometry? on the relevance of linear decompositions of transformer embeddings. In Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP, pages 127–141, Singapore. Association for Computational Linguistics. Ali Modarressi, Mohsen Fayyaz, Yadollah Yaghoobzadeh, and Mohammad Taher Pilehvar. 2022. GlobEnc: Quantifying global token attribution by incorporating the whole encoder layer in transformers. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 258–271, Seattle, United States. Association for Computational Linguistics. Jiaqi Mu and Pramod Viswanath. 2018. All-but-the-top: Simple and effective postprocessing for word representations. In International Conference on Learning Representations. Byung-Doh Oh and William Schuler. 2023. Token-wise decomposition of autoregressive language model hidden states for analyzing model predictions. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 10105–10117, Toronto, Canada. Association for Computational Linguistics. Bo Pang and Lillian Lee. 2004. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-04), pages 271–278, Barcelona, Spain. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Köpf, Edward Yang, Zach DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch: an 81 imperative style, high-performance deep learning library. In Proceedings of the 33rd International Conference on Neural Information Processing Systems, Red Hook, NY, USA. Curran Associates Inc. F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830. Sara Rajaee and Mohammad Taher Pilehvar. 2021a. A cluster-based approach for improving isotropy in contextual embedding space. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 575–584, Online. Association for Computational Linguistics. Sara Rajaee and Mohammad Taher Pilehvar. 2021b. How does fine-tuning affect the geometry of embedding space: A case study on isotropy. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 3042–3049, Punta Cana, Dominican Republic. Association for Computational Linguistics. Nils Reimers and Iryna Gurevych. 2019. SentenceBERT: Sentence embeddings using Siamese BERTnetworks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992, Hong Kong, China. Association for Computational Linguistics. Peter J. Rousseeuw. 1987. Silhouettes: A graphical aid to the interpretation and validation of cluster analysis. Journal of Computational and Applied Mathematics, 20:53–65. William Rudman and Carsten Eickhoff. 2023. Stable anisotropic regularization. William Rudman, Nate Gillman, Taylor Rayne, and Carsten Eickhoff. 2022. IsoScore: Measuring the uniformity of embedding space utilization. In Findings of the Association for Computational Linguistics: ACL 2022, pages 3325–3339, Dublin, Ireland. Association for Computational Linguistics. Alexey Tikhonov, Lisa Bylinina, and Denis Paperno. 2023. Leverage points in modality shifts: Comparing language-only and multimodal word representations. In Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023), pages 11–17, Toronto, Canada. Association for Computational Linguistics. William Timkey and Marten van Schijndel. 2021. All bark and no bite: Rogue dimensions in transformer language models obscure representational quality. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 4527–4546, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353–355, Brussels, Belgium. Association for Computational Linguistics. Sen Yang, Shujian Huang, Wei Zou, Jianbing Zhang, Xinyu Dai, and Jiajun Chen. 2023. Local interpretation of transformer based on linear decomposition. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 10270–10287, Toronto, Canada. Association for Computational Linguistics. Yuli Zhang, Huaiyu Wu, and Lei Cheng. 2012. Some new deformation formulas about variance and covariance. In 2012 Proceedings of International Conference on Modelling, Identification and Control, pages 987–992. A Supplementary experiments on GLUE We reproduce experiments described in Section 3 on GLUE tasks (Wang et al., 2018).7 We train our models on the provided training sets—hence we only consider tasks for which there is a training set (all but ax) and that correspond to a classification problem (all but stsb, a regression task); we remove all datapoints where no label is provided. Given our earlier results, we limit training to 250 updates; we directly update sentence-bert output embeddings by computing gradients for the entire training set all at once. We compute IsoScore and silhouette scores after every update; to alleviate computational costs, they are evaluated on random samples of 20, 000 items whenever the training set is larger than this (samples are performed separately for each update). We test three different publicly available pretrained SBERT models: all-mpnet-base-v2 (referred to as “mpnet” in what follows), all-distilroberta-v1 (viz. “roberta”) and all-MiniLM-L6-v2 (viz. “miniLM”). Training details otherwise match those of Section 3; see Table 2 for further information on the number of datapoints and parameter counts of all models considered. Corresponding results are depicted in Figure 4. While there is some variation across models and 7From huggingface.co. 82 Dataset N. items N. params. miniLM mpnet roberta cola 8 551 3 277 058 6 554 114 6 554 114 mnli 392 702 199 380 483 398 760 963 398 760 963 mrpc 3 668 2 709 506 5 419 010 5 419 010 qnli 104 743 42 617 090 85 234 178 85 234 178 qqp 363 846 189 649 154 379 298 306 379 298 306 rte 2 490 1 738 370 3 476 738 3 476 738 sst2 67 349 25 720 322 51 440 642 51 440 642 wnli 635 356 738 713 474 713 474 Table 2: Supplementary experiments on GLUE: Dataset vs. number of datapoints (N. items) and corresponding number of trainable parameters (N. params.). 0 50 100 150 200 250 step 0.0 0.2 0.4 0.6 silhouette task cola mnli mrpc qnli qqp rte sst2 wnli model miniLM mpnet roberta (a) Silhouette across training 0 50 100 150 200 250 step 7 6 5 4 3 2 1 log(isoscore) task cola mnli mrpc qnli qqp rte sst2 wnli model miniLM mpnet roberta (b) Log-normalized IsoScore across training Figure 4: Supplementary experiments on GLUE: Evolution of silhouette score and IsoScore across classification optimization (avg. of 5 runs). GLUE tasks, all the setups considered display the same trend: Silhouette score increases and IsoScore decreases across training. We can quantify this trend by computing correlation scores between IsoScore and silhouette scores. Corresponding correlations are listed in Table 3: As is obvious, we find consistent and pronounced anticorrelations in all setups, with Pearson’s r always below −0.838 and Spearman’s ρ always below −0.966. This further consolidates our earlier conclusions in Section 3. B Relation to triplet loss To underscore some of the limitations of our approach, we can highlight a connection with the triplet loss, which is often used to learn clusters. setup r ρ miniLM cola −0.882 91 −0.999 96 mnli −0.852 17 −0.999 38 mrpc −0.939 73 −0.996 62 qnli −0.911 88 −0.985 88 qqp −0.928 90 −0.996 66 rte −0.926 48 −0.999 85 sst2 −0.845 51 −0.999 97 wnli −0.896 90 −0.999 87 mpnet cola −0.872 99 −0.999 98 mnli −0.844 58 −0.999 20 mrpc −0.924 56 −0.999 70 qnli −0.905 06 −0.966 50 qqp −0.915 83 −0.995 04 rte −0.913 48 −0.999 80 sst2 −0.838 64 −0.999 95 wnli −0.890 77 −0.999 94 roberta cola −0.871 37 −0.999 99 mnli −0.838 65 −0.999 20 mrpc −0.918 83 −0.998 49 qnli −0.899 18 −0.969 38 qqp −0.911 15 −0.994 24 rte −0.915 15 −0.999 41 sst2 −0.841 03 −0.999 95 wnli −0.890 20 −0.999 91 Table 3: Supplementary experiments on GLUE: Correlations (Pearson’s r and Spearman’s ρ) of IsoScore and silhouette scores in GLUE task It is defined for a triple of points da, dp, dn where ℓ(da) = ℓ(dp) ̸= ℓ(dn) as Lapn = max (∥da −dp∥2 −∥da −dn∥2, 0) = max (∥da −dp∥2, ∥da −dn∥2) −∥da −dn∥2 ≥∥da −dp∥2 −∥da −dn∥2 = X dc∈{dp,dn} −sign (ℓ(da) , ℓ(dc)) ∥da −dc∥2 The objective across the entire dataset D is thus: OT = X ω∈Ω X da∈Dω X dp∈Dω\{da} X dn∈D\Dω −Lapn ≤ X ω∈Ω X da∈Dω X dp∈Dω\{da} X dn∈D\Dω X dc∈{dp,dn} sign (ℓ(da) , ℓ(dc)) ∥da −dc∥2 = X d∈D X d′∈D signwgt ℓ(d) , ℓ d′ ∥d −d′∥2 (5) 83 using a weighted variant of our original sign function: signwgt(ω, ω′) = ( |Dω| −|D| if ω = ω′ |Dω| −1 otherwise Remark that this is in fact an upper bound on both the silhouette objective as defined in Equation (1) and the triplet objective OT. However, as they are to be maximized, the above does not entail that models trained with a triplet loss will necessarily develop anisotropic representations. 84
2024_short
7
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 778–789 August 11-16, 2024 ©2024 Association for Computational Linguistics Estimating the Level of Dialectness Predicts Interannotator Agreement in Multi-dialect Arabic Datasets Amr Keleg, Walid Magdy, Sharon Goldwater Institute for Language, Cognition and Computation School of Informatics, University of Edinburgh [email protected], {wmagdy,sgwater}@inf.ed.ac.uk Abstract On annotating multi-dialect Arabic datasets, it is common to randomly assign the samples across a pool of native Arabic speakers. Recent analyses recommended routing dialectal samples to native speakers of their respective dialects to build higher-quality datasets. However, automatically identifying the dialect of samples is hard. Moreover, the pool of annotators who are native speakers of specific Arabic dialects might be scarce. Arabic Level of Dialectness (ALDi) was recently introduced as a quantitative variable that measures how sentences diverge from Standard Arabic. On randomly assigning samples to annotators, we hypothesize that samples of higher ALDi scores are harder to label especially if they are written in dialects that the annotators do not speak. We test this by analyzing the relation between ALDi scores and the annotators’ agreement, on 15 public datasets having raw individual sample annotations for various sentence-classification tasks. We find strong evidence supporting our hypothesis for 11 of them. Consequently, we recommend prioritizing routing samples of high ALDi scores to native speakers of each sample’s dialect, for which the dialect could be automatically identified at higher accuracies. 1 Introduction Arabic is spoken natively by over 420 million people and is an official language of 24 countries (Bergman and Diab, 2022), making it an important language for NLP systems. However, NLP for Arabic faces a major challenge in that usergenerated text is typically a mixture of Modern Standard Arabic (MSA)—the standardized variant that is taught in schools and used in official communications and newspapers—and regional variants of Dialectal Arabic (DA), which are used in everyday communications, including both speech and social media text (Habash, 2010).1 While MSA can be largely understood by most Arabic speakers, the different variants of DA are not always fully mutually intelligible. Despite this mutual unintelligibility, a common practice when developing datasets for multi-dialect Arabic NLP is to randomly recruit annotators without regard to their dialect. However, routing dialectal content to speakers of a different dialect for annotation or moderation can present real problems. For example, it has been shown to contribute to unjust online content moderation of DA (Business for Social Responsibility, 2022), and racially biased toxicity annotation in American English varieties (Sap et al., 2022). Two recent studies of multi-dialect DA annotation showed that for annotating hate speech or sarcasm, respectively, annotators were more lenient (for hate speech) and more accurate (for sarcasm) when annotating sentences in their native dialect (Bergman and Diab, 2022; Abu Farha and Magdy, 2022). The authors of both studies made the same recommendation for creating new Arabic datasets, namely to first identify the dialect of each sample and then route it to appropriate annotators. This recommendation is theoretically appealing, but presents practical difficulties since automatic dialect identification (DI) is challenging (AbdulMageed et al., 2023), and existing systems assume a single correct label when in fact some texts can be natural in different dialects (Keleg and Magdy, 2023; Olsen et al., 2023). Moreover, the representation of native speakers of the different Arabic dialects on crowdsourcing sites might be skewed (Mubarak and Darwish, 2016). Therefore, recruiting native speakers of some Arabic dialects might be challenging, given the tough conditions of the countries in which these dialects are spoken. 1Refer to §B of the Appendix for a further discussion about the relationship between MSA and DA. 778 In this paper, we address these challenges by building on recent work by Keleg et al. (2023), who presented a system for estimating Arabic Level of Dialectness (ALDi)—i.e., the degree to which a sentence diverges from MSA, on a scale from 0 to 1. We hypothesize that as sentences with low ALDi scores do not diverge much from MSA, they can still be understood and accurately annotated by most Arabic speakers, while this will be less true for sentences with high ALDi scores. If our hypothesis holds, then annotation can be made more efficient while maintaining accuracy, by routing samples with low ALDi scores to speakers of any dialect. Only high-ALDi samples need to be routed to native speakers of the appropriate dialect. We test our hypothesis by investigating the impact of ALDi score on interannotator agreement for 15 publicly released datasets annotated for 6 different sentence-classification tasks.2 We confirm that for most tasks and datasets, higher ALDi scores correlate with lower annotator agreement. A notable exception is the dialect identification (DI) task, where higher ALDi scores correlate with higher agreement, presumably because it is easier to identify a single dialect for sentences that are strongly dialectal. This finding is encouraging for annotation routing, since automatic DI systems may also have higher accuracy on these sentences. We conclude that a combination of automatic ALDi scoring, followed by DI and annotator routing only for high-ALDi sentences, is a promising strategy for annotating multi-dialect Arabic datasets. 2 Methodology Data We study the impact of ALDi scores on the annotators’ agreement for publicly released Arabic datasets. We analyze datasets satisfying the following criteria: • Language: Mixture of MSA and DA. • Variation: Targeting multiple variants of DA. • Annotators: Speakers of different variants of DA that are randomly assigned to the samples. • Tasks Setup: Sentence-level classification. • Released Labels: Individual annotator labels or the percentage of annotators agreeing on the majority-vote label.3 2Instructions to replicate the experiments can be accessed through https://github.com/AMR-KELEG/ALDi-and-IAA 3For some datasets, the percentage of annotators agreeing on the majority vote is weighted by their performance on the annotation quality-assurance test samples. This distinction is irrelevant to our study, where we only consider whether all annotators agreed or not. We searched for datasets on Masader, a community-curated catalog of Arabic datasets (Alyafeai et al., 2021; Altaher et al., 2022). Each dataset on Masader has a metadata field for the variants of Arabic included. We discarded the datasets that only included MSA samples, and manually inspected the remaining 151. After identifying 28 potential datasets that satisfy the criteria above, we contacted the authors of the datasets that do not have the individual annotations publicly released. Eventually, we had 15 datasets to analyze, listed in Table 1, covering: Offensive Text Classification, Hate Speech Detection, Sarcasm Detection, Sentiment Analysis, Speech Act Detection, Stance Detection, and Dialect Identification. Analysis For each dataset, we compute the Arabic Level of Dialectness (ALDi) score for each annotated sample (sentence) using the SentenceALDi model (Keleg et al., 2023), which returns a score from 0 (MSA/non-dialectal) to 1 (strongly dialectal). To investigate the effect of ALDi on annotator agreement, we bin the samples by their ALDi score into 10 bins of width 0.1. We compute % full agree, the percentage of samples in that bin for which all the annotators agreed on a single label. We employ Pearson’s correlation coefficient to analyze the relation between ALDi (represented by each bin’s midpoint ALDi score) and % full agree, and also report the slope of the best-fitting line as a measure of the effect size.4 As aforementioned, our initial hypothesis is that % full agree negatively correlates with high ALDi scores. 3 Results and Discussion We use scatter plots to visualize the relation between % full agree and ALDi on the studied datasets, as shown in Figure 1. Additionally, the histograms of samples across the different bins indicate the dialectal content within the dataset. As per Table 1, 6 datasets out of the 15 have more than 50% of the samples with ALDi scores less than 0.1, which are expected to be written in MSA. However, we found that the overall trends depicted 4The exact values of the slopes and correlation coefficients depend on the number of bins. However, we got similar qualitative results on using 4 or 20 equal-width bins. 10 bins are enough to check if trends are non-linear while keeping a reasonable number of samples in the smallest bins. We also fitted logistic regression (logreg) models using ALDi as a continuous variable and a binary outcome Full Agreement (Yes/No) for each sample. Both analysis tools reveal similar patterns (See Appendix §C) but the binning method provides useful additional visualization. 779 Dataset Task (# labels) %ALDi<0.1 Description Deleted Comments Dataset (DCD) (Mubarak et al., 2017) Offensive (3) 62.57% About 32K deleted comments from aljazeera.com. Confidence scores for the majority vote of 3 annotations are provided. MPOLD (Chowdhury et al., 2020) Offensive (2) 27.82% 4000 sentences interacting with news sources, sampled from Twitter, Facebook, and YouTube, annotated three times. YouTube Cyberbullying Offensive (2) 10.24% 15,050 comments and replies to 9 YouTube videos labeled by 3 annotators (Iraqi, Egyptian, Libyan). (YTCB) (Alakrot et al., 2018) ASAD (Alharbi et al., 2021) Sentiment (3) 35.63% 95,000 tweets with a skewed representation toward the Gulf area and Egypt. ArSAS (Elmadany et al., 2018) Sentiment (4) 57.45% 21,064 tweets related to a pre-specified set of entities or events, with confidence scores for the majority votes across three annotations per sample. Speech Act (6) ArSarcasm-v1 Dialect (5) 57.44% 10,547 tweets, sampled from two different Sentiment Analysis datasets: ATSD (Nabil et al., 2015), SemEval2017 (Rosenthal et al., 2017), reannotated for Sentiment, Dialect, and Sarcasm. (Abu Farha and Magdy, 2020) Sarcasm (2) Sentiment (4) Mawqif Sarcasm (2) 58.04% 4,121 tweets about "COVID-19 vaccine", "digital transformation", or "women empowerment" annotated separately for stance and sentiment/sarcasm till the label confidence reaches 0.7 (min. 3 annotators) or 7 annotators label the sample. (Alturayeif et al., 2022) Sentiment (3) 58.04% Stance (3) 57.99% iSarcasm’s test set Dialect (5) 30.5% 200 sarcastic sentences provided by crowdsourced authors and 1200 non-sarcastic tweets from ArSarcasm-v2 (Abu Farha et al., 2021) reannotated 5 times. (Abu Farha et al., 2022) Sarcasm (2) DART (Alsarsour et al., 2018) Dialect (5) 0.8% 24,279 tweets with distinctive dialectal terms annotated three times for the dialectal region. Samples of complete disagreement are not in the released dataset. Table 1: The datasets included in our study. All datasets have three annotations per sample, except for iSarcasm (5 annotations/sample) and Mawqif (3 or more annotations/sample). For the labels used in each dataset and the proportion of each label, see Table A1. For some datasets, there is a discrepancy between the number of samples listed in the paper and the raw data files with individual labels (See §A of the Appendix). Sentiment Analysis 0.2 0.4 0.6 0.8 ALDi 35 45 55 65 75 % full agree m = -25.01 5427 35802 ASAD ( = -0.99) * 0.2 0.4 0.6 0.8 ALDi 25 35 45 55 65 % full agree m = -6.08 341 5816 ArSarcasm-v1 ( = -0.71) * 0.2 0.4 0.6 0.8 ALDi 5 15 25 35 45 % full agree m = -28.75 57 2288 Mawqif ( = -0.91) * 0.2 0.4 0.6 0.8 ALDi 25 35 45 55 65 % full agree m = -2.05 804 12101 ArSAS ( = -0.25) Sarcasm Detection 0.2 0.4 0.6 0.8 ALDi 30 40 50 60 70 % full agree m = -32.69 58 427 iSarcasm ( = -0.83) * 0.2 0.4 0.6 0.8 ALDi 40 50 60 70 80 % full agree m = -24.59 341 5816 ArSarcasm-v1 ( = -0.92) * 0.2 0.4 0.6 0.8 ALDi 50 60 70 80 90 % full agree m = -40.94 57 2288 Mawqif ( = -0.98) * Speech Act Detection 0.2 0.4 0.6 0.8 ALDi 45 55 65 75 85 % full agree m = 6.79 804 12101 ArSAS ( = 0.78) * Offensive Text Classification and Hate Speech Detection 0.2 0.4 0.6 0.8 ALDi 35 45 55 65 75 % full agree m = -3.21 716 19831 DCD ( = -0.4) 0.2 0.4 0.6 0.8 ALDi 50 60 70 80 90 % full agree m = 10.67 230 1113 MPOLD ( = 0.64) * 0.2 0.4 0.6 0.8 ALDi 45 55 65 75 85 % full agree m = -11.83 679 3273 YTCB ( = -0.92) * Stance Detection 0.2 0.4 0.6 0.8 ALDi 30 40 50 60 70 % full agree m = -22.48 58 2267 Mawqif ( = -0.88) * Arabic Dialect Identification 0.2 0.4 0.6 0.8 ALDi 0 10 20 30 40 % full agree m = 23.27 58 427 iSarcasm ( = 0.69) * 0.2 0.4 0.6 0.8 ALDi 15 25 35 45 55 % full agree m = -1.95 341 5816 ArSarcasm-v1 ( = -0.06) 0.2 0.4 0.6 0.8 ALDi 40 50 60 70 80 % full agree m = 38.45 194 8542 DART ( = 0.97) * Figure 1: Scatter plots showing the relationship between binned ALDi scores (x-axis) and the percentage of samples with full annotator agreement (y-axis). The histogram represents the # of samples per bin (with min and max values for any bin labeled on the right-hand axis). The slope of the best-fitting line (m) is shown, and to enable visual comparison of slopes, all plots have the same y-axis scale (possibly shifted up or down). Note: Statistically significant (p<0.05) correlation coefficients (ρ) are marked with *. in Figure 1 will not be affected if we discard these samples with low ALDi scores and only focus on the rest. For non-DI tasks, ALDi negatively correlates with agreement. Inspecting the trends depicted in Figure 1, strong negative Pearson’s correlation 780 Dataset Sample Translation of the Underlined Cue ALDi Individual Labels YTCB Hatespeech Detection ­rK A§ –y xA At ¯ ¤ Tn§A• M¤ ŠA• rŒm˜Lkrˆ A› you insect 0.98 HateSpeech(3x) A§ ”r`˜ Ty`yJ Tylˆ x¤d ¨˜ rtl˜ ­¤d ¤r rmˆ r A§ rˆ you scabies 0.81 HateSpeech(3x) ¨k§ yl˜ L fˆ ™bl ¯ œZA• P’Až  A•A› ¢hhhhhhhhhhhhhh¡ wnˆ Gulf’s colt 0.98 HateSpeech(3x) ArSarcasm-v1 Sentiment Analysis ¢›®• d` H ¨nb`§ ™b’  A• Ÿ CA› ¨k§C db˜A Šwl§ ‘rq› ¢tys I found him disgusting 0.81 Negative(2x) Positive(1x) xCdm˜A# ŠwtA§ ^\??? šAfV L› ¯¤ š¤ šAfVA# ¢hhhhhhhhhhhhhhhhhhh¡ hahaha 0.94 Positive(2x) Negative(1x) Table 2: Five qualitative samples of high ALDi scores. The underlined segments represent the cues that the annotators might have used to choose a label even if they do not fully understand the sentence. Despite the presence of these cues, the annotators still disagreed on labeling the last two samples. We only provide translations for the underlined cue segments. coefficients exist for 8 out of the 12 datasets for the non-DI tasks (sentiment analysis, sarcasm, hate speech, and stance detection). Both the trends (quantified by the slope m) and the correlation coefficients for most of the tasks indicate that the percentage of samples for which all the annotators assign the same label decreases as the ALDi scores increase, often by a large margin.5 We notice different trends for DI that we elaborate on below. For DI, agreement is lowest for mid-range ALDi scores (if MSA is a possible label) or low ALDi scores (if it is not). By definition, MSA sentences have an ALDi of 0, and normally the ALDi estimation model assigns them very low scores. For the ArSarcasm-v1 and iSarcasm datasets, the set of labels for the DI task includes MSA (i.e., some sentences in these datasets are not dialectal). For both datasets, one notices high percentages of agreement scores for the bin having ALDi scores ∈[0, 0.1] (generally agreeing that the label is MSA). The percentages decrease for the few succeeding bins, before rising again for the bins with high ALDi scores. Sentences of high ALDi scores (e.g., ∈[0.8, 1]) are expected to have multiple dialectal cues, which increases the chance of attributing them to a single dialect. For sentences of intermediate ALDi scores, annotators can agree a sentence is not in MSA, however, they would struggle to determine the dialect of the sentence, which is manifested as having lower percentages of the full agreement for these bins. 5Refer to Appendix §E for a possible explanation of the unexpected trends of the ArSAS dataset. The authors of the DART dataset do not include MSA in the labels since they curated sentences with distinctive dialectal terms. This explains the low percentage of full agreement for the bin of ALDi scores ∈[0, 0.1], unlike the other two DI datasets. However, the pattern of having higher full agreement percentages for bins with higher ALDi scores still holds. 4 Analysis of Trends by Class Label A more nuanced analysis of the non-DI datasets can be done by splitting the samples according to their majority-vote labels (See Appendix §D). The declining trend for agreement as ALDi scores increase is consistently salient for the negative class of the different tasks (i.e., Not Sarcastic, Neutral (sentiment), Non-offensive). One explanation is that agreeing on one of these labels requires fully understanding the sample, which is expected to be harder for non-native speakers of the dialects of sentences with high ALDi scores. Moreover, the general usage of MSA for formal communications, and DA for more personal ones might be biasing the annotators. Further controlled analysis is required to investigate these intuitive explanations. Conversely, the presence of specific words might be a strong cue for the label of the overall sentence, which might be useful for the positive classes (i.e., Sarcastic, Positive/Negative, Obscene/Offensive). Consequently, annotators might be able to agree on a label for a sentence even if it contains some unintelligible segments. Table 2 shows five examples, in which the annotators can use cues to label the 781 whole sentence. Consider the first offensive sentence in the Table. The presence of the MSA insult ­rK A§ (you insect) is enough to guess that the sentence is offensive, even if the remaining segment is not fully intelligible. Lastly, note that agreeing on a label does not imply it is accurate, especially when relying on cues for annotation. 5 Conclusion and Recommendation Factors such as task subjectivity and vague guidelines could cause disagreement between annotators. For Arabic, we demonstrate that the Arabic level of dialectness of a sentence (ALDi), automatically estimated using the Sentence-ALDi model (Keleg et al., 2023), is an additional overlooked factor. Analyzing 15 datasets, we find strong evidence of a negative correlation between ALDi and the full annotator agreement scores for 8 of the 12 nonDialect Identification datasets. Moreover, for the 3 Dialect Identification datasets, we find that annotators have higher agreement scores for samples of higher ALDi scores, which by definition would have more dialectal features. The combination of more dialectal features in a sentence is more probable to be distinctive of a specific dialect. Previous research recommended routing samples to native speakers of the different Arabic dialects for higher annotation quality. Our analysis indicates that a large proportion of 6 datasets are samples with ALDi scores < 0.1, which are expected to be MSA samples that can be routed to speakers of any Arabic dialect. Moreover, the lower agreement scores for samples with high ALDi scores show that extra care should be given to these samples. Dataset creators should prioritize routing high-ALDi samples to native speakers of the dialects of these samples, for which the dialects can be automatically identified at higher accuracy as these samples have more dialectal cues. Limitations The trends we report validate our hypothesis. However, more thorough analyses need to be done to understand how ALDi affects each task given its unique nature. Knowing the demographic information about the annotators might have allowed for revealing deeper insights into how speakers of specific Arabic dialects understand samples from other dialects. However, this would have required running a controlled experiment re-annotating the 15 datasets, which we hope future work will attempt. Another potential extension to this work is to analyze the interannotator disagreement on annotating dialectal data for token-level tasks. To the best of our knowledge, all the publicly available token-level Arabic datasets are built by carefully selecting samples written in specific dialects and recruiting native speakers of each of these dialects to perform the annotation, after closely training them. However, even if a multi-dialect token-level dataset is annotated by randomly assigning the samples to speakers of different dialects, the analysis would require a new model to estimate the level of dialectness on the token level, since the Sentence-ALDi model used here works at the sentence level. Lastly, we acknowledge that there are multiple reasons for the annotators to disagree, which include the task’s subjectivity, the annotators’ background, and their worldviews (Uma et al., 2021). However, these factors would have less impact on the annotators’ disagreement if a sample is not fully intelligible. Acknowledgments This work could not have been done without the help of the datasets’ creators who have kindly agreed to share the individual labels for their datasets’ samples. Thanks, Ibrahim Abu Farha, Nora Alturayeif, and Manal Alshehri. We also thank Hamdy Mubarak, Nuha Albadi, Nedjma Ousidhoum, and Hala Mulki for trying to help with finding the individual annotator labels for some of their datasets. Lastly, we appreciate the efforts of the anonymous ARR reviewers, action editors, and area chairs. Thanks for the insightful discussions and valuable suggestions. This work was supported by the UKRI Centre for Doctoral Training in Natural Language Processing, funded by the UKRI (grant EP/S022481/1) and the University of Edinburgh, School of Informatics. References Muhammad Abdul-Mageed, AbdelRahim Elmadany, Chiyu Zhang, El Moatez Billah Nagoudi, Houda Bouamor, and Nizar Habash. 2023. NADI 2023: The fourth nuanced Arabic dialect identification shared task. In Proceedings of ArabicNLP 2023, pages 600– 613, Singapore (Hybrid). Association for Computational Linguistics. Ibrahim Abu Farha and Walid Magdy. 2020. From Arabic sentiment analysis to sarcasm detection: The ArSarcasm dataset. In Proceedings of the 4th Workshop on Open-Source Arabic Corpora and Process782 ing Tools, with a Shared Task on Offensive Language Detection, pages 32–39, Marseille, France. European Language Resource Association. Ibrahim Abu Farha and Walid Magdy. 2022. The effect of Arabic dialect familiarity on data annotation. In Proceedings of the The Seventh Arabic Natural Language Processing Workshop (WANLP), pages 399– 408, Abu Dhabi, United Arab Emirates (Hybrid). Association for Computational Linguistics. Ibrahim Abu Farha, Silviu Vlad Oprea, Steven Wilson, and Walid Magdy. 2022. SemEval-2022 task 6: iSarcasmEval, intended sarcasm detection in English and Arabic. In Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022), pages 802–814, Seattle, United States. Association for Computational Linguistics. Ibrahim Abu Farha, Wajdi Zaghouani, and Walid Magdy. 2021. Overview of the WANLP 2021 shared task on sarcasm and sentiment detection in Arabic. In Proceedings of the Sixth Arabic Natural Language Processing Workshop, pages 296–305, Kyiv, Ukraine (Virtual). Association for Computational Linguistics. Azalden Alakrot, Liam Murray, and Nikola S. Nikolov. 2018. Dataset construction for the detection of antisocial behaviour in online communication in Arabic. Procedia Computer Science, 142:174–181. Arabic Computational Linguistics. Basma Alharbi, Hind Alamro, Manal Alshehri, Zuhair Khayyat, Manal Kalkatawi, Inji Ibrahim Jaber, and Xiangliang Zhang. 2021. Asad: A twitter-based benchmark Arabic sentiment analysis dataset. Israa Alsarsour, Esraa Mohamed, Reem Suwaileh, and Tamer Elsayed. 2018. DART: A large dataset of dialectal Arabic tweets. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA). Yousef Altaher, Ali Fadel, Mazen Alotaibi, Zaid Alyazidi, et al. 2022. Masader Plus: A new interface for exploring +500 Arabic NLP datasets. arXiv preprint arXiv:2208.00932. Nora Saleh Alturayeif, Hamzah Abdullah Luqman, and Moataz Aly Kamaleldin Ahmed. 2022. Mawqif: A multi-label Arabic dataset for target-specific stance detection. In Proceedings of the The Seventh Arabic Natural Language Processing Workshop (WANLP), pages 174–184, Abu Dhabi, United Arab Emirates (Hybrid). Association for Computational Linguistics. Zaid Alyafeai, Maraim Masoud, Mustafa Ghaleb, and Maged S. Al-shaibani. 2021. Masader: Metadata sourcing for Arabic text and speech data resources. As-Said Muhámmad Badawi. 1973. Mustawayat alarabiyya al-muasira fi Misr. Dar al-maarif. A. Bergman and Mona Diab. 2022. Towards responsible natural language annotation for the varieties of Arabic. In Findings of the Association for Computational Linguistics: ACL 2022, pages 364–371, Dublin, Ireland. Association for Computational Linguistics. Business for Social Responsibility. 2022. Human rights due diligence of meta’s impacts in Israel and Palestine in may 2021. https://about.fb.com/wpcontent/uploads/2022/09/Human-Rights-DueDiligence-of-Metas-Impacts-in-Israel-andPalestine-in-May-2021.pdf. Shammur Absar Chowdhury, Hamdy Mubarak, Ahmed Abdelali, Soon-gyo Jung, Bernard J. Jansen, and Joni Salminen. 2020. A multi-platform Arabic news comment dataset for offensive language detection. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 6203–6212, Marseille, France. European Language Resources Association. AbdelRahim A. Elmadany, Hamdy Mubarak, and Walid Magdy. 2018. An Arabic speech-act and sentiment corpus of tweets. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018). European Language Resources Association (ELRA). The 3rd Workshop on Open-Source Arabic Corpora and Processing Tools, OSACT3 ; Conference date: 08-05-2018. Joseph L. Fleiss. 1971. Measuring nominal scale agreement among many raters. Psychological bulletin, 76(5):378. Nizar Y. Habash. 2010. Introduction to Arabic natural language processing, 1 edition, volume 3 of Synthesis Lectures on Human Language Technologies. Morgan and Claypool Publishers. Amr Keleg, Sharon Goldwater, and Walid Magdy. 2023. ALDi: Quantifying the Arabic level of dialectness of text. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 10597–10611, Singapore. Association for Computational Linguistics. Amr Keleg and Walid Magdy. 2023. Arabic dialect identification under scrutiny: Limitations of single-label classification. In Proceedings of ArabicNLP 2023, pages 385–398, Singapore (Hybrid). Association for Computational Linguistics. Giovanni Molina, Fahad AlGhamdi, Mahmoud Ghoneim, Abdelati Hawwari, Nicolas ReyVillamizar, Mona Diab, and Thamar Solorio. 2016. Overview for the second shared task on language identification in code-switched data. In Proceedings of the Second Workshop on Computational Approaches to Code Switching, pages 40–49, Austin, Texas. Association for Computational Linguistics. Hamdy Mubarak and Kareem Darwish. 2016. Demographic surveys of Arab annotators on CrowdFlower. In Proceedings of ACM WebSci16 Workshop “Weaving Relations of Trust in Crowd Work: Transparency and Reputation across Platforms. 783 Hamdy Mubarak, Kareem Darwish, and Walid Magdy. 2017. Abusive language detection on Arabic social media. In Proceedings of the First Workshop on Abusive Language Online, pages 52–56, Vancouver, BC, Canada. Association for Computational Linguistics. Mahmoud Nabil, Mohamed Aly, and Amir Atiya. 2015. ASTD: Arabic sentiment tweets dataset. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2515–2519, Lisbon, Portugal. Association for Computational Linguistics. Helene Olsen, Samia Touileb, and Erik Velldal. 2023. Arabic dialect identification: An in-depth error analysis on the MADAR parallel corpus. In Proceedings of ArabicNLP 2023, pages 370–384, Singapore (Hybrid). Association for Computational Linguistics. Nedjma Ousidhoum, Zizheng Lin, Hongming Zhang, Yangqiu Song, and Dit-Yan Yeung. 2019. Multilingual and multi-aspect hate speech analysis. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4675– 4684, Hong Kong, China. Association for Computational Linguistics. Sara Rosenthal, Noura Farra, and Preslav Nakov. 2017. SemEval-2017 task 4: Sentiment analysis in Twitter. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 502– 518, Vancouver, Canada. Association for Computational Linguistics. Maarten Sap, Swabha Swayamdipta, Laura Vianna, Xuhui Zhou, Yejin Choi, and Noah A. Smith. 2022. Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5884–5906, Seattle, United States. Association for Computational Linguistics. Alexandra N Uma, Tommaso Fornaciari, Dirk Hovy, Silviu Paun, Barbara Plank, and Massimo Poesio. 2021. Learning from disagreement: A survey. Journal of Artificial Intelligence Research, 72:1385–1470. Howard Wainer, Marc Gessaroli, and Monica Verdi. 2006. Visual revelations. CHANCE, 19(1):49–52. A Detailed Description of the Datasets We noticed some discrepancies between the number of samples reported in the papers and the number of samples in the corresponding raw datasets. Despite following any filtration steps described in the papers, some of the datasets had more samples than the ones in the publicly released version, as indicated in Table A1. Additionally, the ArSarcasmv1, Mawqif (Stance Task), Mawqif (Sentiment/Sarcasm Tasks), and ASAD had 516, 170, 151, 191 samples with less than 3 annotations respectively, that we decided to discard from our analysis. Conversely, we decided to discard the MLMA dataset (Ousidhoum et al., 2019) for which the authors shared with us some of the raw annotations files. The number of samples in these files was too small compared to the number of samples in the public dataset with majority-vote labels. We also discarded another dataset, for which there was a significant discrepancy between the released dataset and the paper’s description of the dataset. B Code-mixing between MSA and DA Researchers distinguish between Modern Standard Arabic (MSA), and Dialectal Arabic (DA) (Habash, 2010). However, MSA and DA do not exist in isolation, and Arabic speakers sometimes code-mix between terms that can be considered to belong to MSA and others considered to be part of a variety of DA. Notably, some terms can be considered to belong to both MSA and a variety of DA, and even using the surrounding context may not be enough for disambiguation (Molina et al., 2016). Badawi (1973) recognizes five levels of Arabic used in Egypt, that can be categorized according to the amount of code-mixing in addition to the dialectness of the terms/phrases used. The SentenceALDi model, developed by Keleg et al. (2023), estimates the level of dialectness of Arabic sentences, which provides an automatic proxy to distinguish between Arabic sentences according to how they diverge from MSA. We used the Sentence-ALDi model to study the relation between the ALDi score and the agreement between the annotators for 15 Arabic datasets. C Discussion about the Analysis As described in §2, each dataset’s samples were split into 10 bins of equal width according to their respective ALDi scores. Afterward, the correlation between each bin’s midpoint ALDi score and the percentage of samples having full agreement % full agree was computed. For each bin, % full agree represents the Maximum Likelihood Estimation (MLE) for the probability that all the annotators agree on the same label for the samples of this bin. Inability to use Interannotator Agreement metrics for some datasets Automated metrics such as Fleiss’ Kappa (Fleiss, 1971) attempt to measure the Interannotator Agreement (IAA) while accounting for the random agreement/disagreement 784 Dataset Task (# labels) Labels Distribution of Majority-vote Labels Dataset/Paper Discrepancy Deleted Comments Dataset (DCD) (Mubarak et al., 2017) Offensive (3) Confidence Offensive (80.31%) Clean (17.76%) Obscene (1.58%) No Majority (0.35%) MPOLD (Chowdhury et al., 2020) Offensive (2) Individual Non-Offensive (83.12%) Offensive (16.88%) YouTube Cyberbullying Offensive (2) Individual Not (61.38%) HateSpeech (38.62%) (YTCB) (Alakrot et al., 2018) ASAD (Alharbi et al., 2021) Sentiment (3) Individual Neutral (67.83%) Negative (15.33%) Positive (15.19%) No Majority (1.65%) The authors shared with us the raw annotation file of which we analyze 100,484 samples with three annotations or more, as opposed to the 95,000 in the released dataset. ArSAS (Elmadany et al., 2018) Sentiment (4) Confidence Negative (35.38%) Neutral (33.45%) Positive (20.51%) No Majority (6.07%) Mixed (4.59%) Speech Act (6) Confidence Expression (55.07%) Assertion (38.63%) Question (3.32%) No Majority (1.81%) Request (0.67%) Recommendation (0.31%) Miscellaneous (0.18%) ArSarcasm-v1 (Abu Farha and Magdy, 2020) Dialect (5) Individual msa (67.56%) egypt (19.37%) No Majority (5.83%) gulf (3.61%) levant (3.46%) magreb (0.18%) The samples in the raw annotation artifact shared by the authors has 10,641 samples, as opposed to the 10,547 samples in the released dataset. Sarcasm (2) Individual False (84.24%) True (15.7%) No Majority (0.06%) Sentiment (3) Individual neutral (49.45%) negative (32.57%) positive (14.58%) No Majority (3.4%) Mawqif (Alturayeif et al., 2022) Sarcasm (2) Individual No (95.97%) Yes (3.78%) No Majority (0.25%) The authors annotated the same samples for sentiment/sarcasm and stance separately. This was done across 8 different annotation jobs (4 each), for which the authors shared the raw annotation files with us. The number of samples in these files is 4,093 for sentiment/sarcasm and 4,079 for stance, of which 3,942 and 3,909 have three or more annotations. The released dataset is reported to have 4,100 samples. Sentiment (3) Individual Positive (41.15%) Negative (31.46%) Neutral (22.68%) No Majority (4.72%) Stance (3) Individual Favor (60.5%) Against (27.65%) None (7.7%) No Majority (4.14%) iSarcasm’s test set (Abu Farha et al., 2022) Dialect (5) Individual msa (32.29%) nile (31.36%) gulf (16.5%) No Majority (15.79%) levant (2.21%) maghreb (1.86%) The dataset having the individual annotator labels is released as an artifact accompanying the following paper (Abu Farha and Magdy, 2022). Sarcasm (2) Individual 0 (82.07%) 1 (17.93%) DART (Alsarsour et al., 2018) Dialect (5) Proportion GLF (24.27%) EGY (21.69%) IRQ (21.64%) LEV (16.22%) MGH (16.18%) Table A1: A detailed description of the distribution of the majority-vote labels and the data/paper discrepancies in the datasets with individual annotator labels included in our study. Note 1: No Majority means that multiple labels have the same majority number of votes for Individual/Proportion labels, and Confidence < 0.5 otherwise. Note 2: Some of the samples of the ASAD, ArSarcasm-v1, Mawqif datasets have more than 3 annotations, despite the fact the former two are supposed to have only three annotations per sample. between annotators. In principle, it might be possible to perform a version of our analysis using Fleiss’ Kappa rather than % full agree as the dependent variable. However, computing Fleiss’ Kappa would require knowledge of the individual annotations for each sample. Such annotations are not available for the ArSAS (Sentiment/Speech Act), DART, and DCD datasets as described in Table A1. Since we wanted to include as many datasets as possible, we used % full agree instead. Logistic regression as an alternative analysis tool Binning the data leads to a loss of analytical information which might impact the results of the analysis, especially if implausible bins’ boundaries are used (Wainer et al., 2006). Logistic regression with binary outcomes is an alternative analysis that alleviates the limitations of binning. Each sample has a continuous ALDi score as the independent variable, and a binary outcome Full Annotator Agreement (Yes/No). After fitting a logistic regression model to predict the binary outcome, the coefficient of the ALDi variable measures the impact of ALDi on the odds of full agreement. If this coefficient is negative, then the odds of full annotator agreement decrease as the ALDi score increases. Figure C1 demonstrates the probability of full agreement of each dataset, in addition to the coefficient of the ALDi score with its 95% confi785 dence interval. For the 8 non-DI datasets with CoefALDi < −0.2, the coefficients can be considered to be statistically significant since the confidence interval does not include zero. Both analysis tools (correlation analysis and logistic regression) achieve similar results. The same 8 non-DI datasets—ASAD, ArSarcasm-v1 (Sentiment/Sarcasm), Mawqif (Sentiment/Sarcasm/Stance), iSarcasm, and YTCB—have significantly strong negative correlation coefficients as in Figure 1, and statistically significant coefficients for the ALDi variable which are less than -0.2. However, binning the data allows for visualizing the % full agreement as a scatter plot, which can reveal whether the relation between ALDi and the agreement is linear or not, in addition to having a visual way for determining how well the best-fitting line models the data. Impact of data skewness MSA samples are overrepresented in some of the considered datasets. However, this is generally unproblematic for the analysis, so we opted not to discard the MSA samples. For the method described in Section 2, the samples of each bin are independently used to estimate the MLE of full agreement between annotators. Therefore, the over-representation of MSA samples in some datasets does not impact our analysis. D Trends by Class Label As mentioned in §4, Figures D2, D3, D4, D5, and D6 visualize the impact of ALDi on the annotator agreement after splitting the samples according to their majority-vote labels. We acknowledge that the number of samples in the bins for some classes is not enough to draw concrete conclusions (e.g., samples with high ALDi scores for the Neutral class of the ArSAS, and Mawqif datasets as per Figure D3). E The Rising Trend of ArSAS The ArSAS dataset stands out as a dataset with a rising trend for the Speech Act Detection task and a falling trend for the Sentiment Analysis task. Samples of ArSAS were jointly annotated for their sentiment and speech act. Despite having 6 different speech acts, which would arguably make speech act detection harder than sentiment analysis, the Assertion and Expression classes represent 95% of the samples. Looking at their respective trends shown in Figure D5, the two acts show two different behaviors. Most of the assertive samples have ALDi scores <0.2 (arguably, all are MSA ones). Moreover, the number of Assestion samples with high ALDi scores is not enough to estimate the % full agree for their respective bins. Conversely, the Expression act shows higher agreement as the ALDi score increases. The creators of ArSAS noticed that most of the Assertion samples were annotated as Neutral, while most of the Expression samples had polarized sentiment (mostly Negative). The annotators might have treated the Assertion class as the act for Objective sentences, while treating Expression as the act for Subjective sentences. This is arguably easier than sentiment analysis which might explain why annotators agree more on the Speech Act label than the Sentiment label for the ArSAS dataset. Further analysis is required to explain the trends of this dataset. 786 Sentiment Analysis 0.2 0.4 0.6 0.8 ALDi 0 0.25 0.50 0.75 1 P (Full Agree) 0.71 0.47 ASAD -1.1 -1.0 -4 2 Coef of ALDi = -1.03 * 0.2 0.4 0.6 0.8 ALDi 0 0.25 0.50 0.75 1 P (Full Agree) 0.53 0.48 ArSarcasm-v1 -0.4 -0.1 -4 2 Coef of ALDi = -0.23 * 0.2 0.4 0.6 0.8 ALDi 0 0.25 0.50 0.75 1 P (Full Agree) 0.49 0.23 Mawqif -1.4 -1.0 -4 2 Coef of ALDi = -1.2 * 0.2 0.4 0.6 0.8 ALDi 0 0.25 0.50 0.75 1 P (Full Agree) 0.55 0.52 ArSAS -0.2 -0.1 -4 2 Coef of ALDi = -0.14 * Sarcasm Detection 0.2 0.4 0.6 0.8 ALDi 0 0.25 0.50 0.75 1 P (Full Agree) 0.74 0.36 iSarcasm -1.9 -1.3 -4 2 Coef of ALDi = -1.63 * 0.2 0.4 0.6 0.8 ALDi 0 0.25 0.50 0.75 1 P (Full Agree) 0.78 0.5 ArSarcasm-v1 -1.4 -1.1 -4 2 Coef of ALDi = -1.24 * 0.2 0.4 0.6 0.8 ALDi 0 0.25 0.50 0.75 1 P (Full Agree) 0.95 0.44 Mawqif -3.4 -2.8 -4 2 Coef of ALDi = -3.1 * Speech Act Detection 0.2 0.4 0.6 0.8 ALDi 0 0.25 0.50 0.75 1 P (Full Agree) 0.68 0.71 ArSAS 0.0 0.2 -4 2 Coef of ALDi = 0.14 * Offensive Text Classification and Hate Speech Detection 0.2 0.4 0.6 0.8 ALDi 0 0.25 0.50 0.75 1 P (Full Agree) 0.6 0.63 DCD 0.1 0.2 -4 2 Coef of ALDi = 0.14 * 0.2 0.4 0.6 0.8 ALDi 0 0.25 0.50 0.75 1 P (Full Agree) 0.76 0.82 MPOLD 0.1 0.6 -4 2 Coef of ALDi = 0.34 * 0.2 0.4 0.6 0.8 ALDi 0 0.25 0.50 0.75 1 P (Full Agree) 0.78 0.66 YTCB -0.7 -0.5 -4 2 Coef of ALDi = -0.61 * Stance Detection 0.2 0.4 0.6 0.8 ALDi 0 0.25 0.50 0.75 1 P (Full Agree) 0.68 0.43 Mawqif -1.3 -0.8 -4 2 Coef of ALDi = -1.04 * Arabic Dialect Identification 0.2 0.4 0.6 0.8 ALDi 0 0.25 0.50 0.75 1 P (Full Agree) 0.12 0.27 iSarcasm 0.6 1.4 -4 2 Coef of ALDi = 0.99 * 0.2 0.4 0.6 0.8 ALDi 0 0.25 0.50 0.75 1 P (Full Agree) 0.56 0.34 ArSarcasm-v1 -1.0 -0.8 -4 2 Coef of ALDi = -0.91 * 0.2 0.4 0.6 0.8 ALDi 0 0.25 0.50 0.75 1 P (Full Agree) 0.48 0.82 DART 1.5 1.8 -4 2 Coef of ALDi = 1.63 * Figure C1: For each dataset, plots show the estimated probability of full agreement according to each dataset’s fitted logistic regression model. Under each plot, the coefficient of ALDi with its 95% confidence interval is visualized. Nearly all datasets (marked with *) have confidence intervals that do not include zero, meaning the effect of ALDi is statistically significant at p < 0.05. Negative coefficients indicate that higher ALDi scores predict lower agreement. 787 0.2 0.4 0.6 0.8 ALDi 0 20 40 60 80 100 % full agree m = -23.9 252 5306 ArSarcasm-v1 (False), = -0.87 * 0.2 0.4 0.6 0.8 ALDi 0 20 40 60 80 100 % full agree m = 16.47 78 506 ArSarcasm-v1 (True), = 0.62 0.2 0.4 0.6 0.8 ALDi 0 20 40 60 80 100 % full agree m = -31.38 42 2274 Mawqif (No), = -0.97 * 0.2 0.4 0.6 0.8 ALDi 0 20 40 60 80 100 % full agree m = -6.71 6 23 Mawqif (Yes), = -0.16 0.2 0.4 0.6 0.8 ALDi 0 20 40 60 80 100 % full agree m = -34.71 47 402 iSarcasm (0), = -0.8 * 0.2 0.4 0.6 0.8 ALDi 0 20 40 60 80 100 % full agree m = 15.25 10 66 iSarcasm (1), = 0.46 Figure D2: The trends for the classes of the Saracasm Detection datasets. Statistically significant correlation coefficients (ρ) are marked with *. 0.2 0.4 0.6 0.8 ALDi 0 20 40 60 80 100 % full agree m = 10.38 663 3385 ASAD (Negative), = 0.91 * 0.20.40.60.8 ALDi 0 20 40 60 80 100 % full agree m = -32.66 3679 29303 ASAD (Neutral), = -0.98 * 0.2 0.4 0.6 0.8 ALDi 0 20 40 60 80 100 % full agree m = 13.57 935 3908 ASAD (Positive), = 0.82 * 0.2 0.4 0.6 0.8 ALDi 0 20 40 60 80 100 % full agree m = -4.59 52 408 ArSAS (Mixed), = -0.35 0.2 0.4 0.6 0.8 ALDi 0 20 40 60 80 100 % full agree m = 7.83 392 2924 ArSAS (Negative), = 0.76 * 0.2 0.4 0.6 0.8 ALDi 0 20 40 60 80 100 % full agree m = -48.66 29 5873 ArSAS (Neutral), = -0.89 * 0.2 0.4 0.6 0.8 ALDi 0 20 40 60 80 100 % full agree m = 2.82 169 2272 ArSAS (Positive), = 0.17 0.2 0.4 0.6 0.8 ALDi 0 20 40 60 80 100 % full agree m = 9.8 133 1618 ArSarcasm-v1 (negative), = 0.65 * 0.2 0.4 0.6 0.8 ALDi 0 20 40 60 80 100 % full agree m = -31.2 97 3344 ArSarcasm-v1 (neutral), = -0.85 * 0.2 0.4 0.6 0.8 ALDi 0 20 40 60 80 100 % full agree m = 3.35 62 702 ArSarcasm-v1 (positive), = 0.16 0.2 0.4 0.6 0.8 ALDi 0 20 40 60 80 100 % full agree m = -9.5 32 458 Mawqif (Negative), = -0.69 * 0.2 0.4 0.6 0.8 ALDi 0 20 40 60 80 100 % full agree m = -17.17 17 500 Mawqif (Neutral), = -0.55 0.2 0.4 0.6 0.8 ALDi 0 20 40 60 80 100 % full agree m = -63.59 2 1244 Mawqif (Positive), = -0.89 * Figure D3: The trends for the classes of the Sentiment Analysis datasets. Statistically significant correlation coefficients (ρ) are marked with *. 788 0.2 0.4 0.6 0.8 ALDi 0 20 40 60 80 100 % full agree m = -13.95 470 1787 YTCB (Not), = -0.93 * 0.2 0.4 0.6 0.8 ALDi 0 20 40 60 80 100 % full agree m = -0.82 209 1486 YTCB (HateSpeech), = -0.13 0.2 0.4 0.6 0.8 ALDi 0 20 40 60 80 100 % full agree m = -10.61 86 4143 DCD (Clean), = -0.83 * 0.2 0.4 0.6 0.8 ALDi 0 20 40 60 80 100 % full agree m = 36.64 29 126 DCD (Obscene), = 0.78 * 0.20.40.60.8 ALDi 0 20 40 60 80 100 % full agree m = -2.42 565 15505 DCD (Offensive), = -0.32 0.2 0.4 0.6 0.8 ALDi 0 20 40 60 80 100 % full agree m = 9.13 188 955 MPOLD (Non-Offensive), = 0.65 * 0.2 0.4 0.6 0.8 ALDi 0 20 40 60 80 100 % full agree m = 6.32 42 158 MPOLD (Offensive), = 0.24 Figure D4: The trends for the classes of the Offensive Text Classification and Hate Speech datasets. Statistically significant correlation coefficients (ρ) are marked with *. 0.2 0.4 0.6 0.8 ALDi 0 20 40 60 80 100 % full agree m = -55.48 8 6868 ArSAS (Assertion), = -0.81 * 0.2 0.4 0.6 0.8 ALDi 0 20 40 60 80 100 % full agree m = 13.93 663 4549 ArSAS (Expression), = 0.89 * Figure D5: The trends for the Assertion and Expression labels of the ArSAS dataset, which represent 95% of the dataset samples. Statistically significant correlation coefficients (ρ) are marked with *. 0.2 0.4 0.6 0.8 ALDi 0 20 40 60 80 100 % full agree m = -7.52 26 383 Mawqif (Against), = -0.35 0.2 0.4 0.6 0.8 ALDi 0 20 40 60 80 100 % full agree m = -18.96 7 158 Mawqif (None), = -0.61 0.2 0.4 0.6 0.8 ALDi 0 20 40 60 80 100 % full agree m = -19.25 22 1652 Mawqif (Favor), = -0.67 * Figure D6: The trends for the classes of Mawqif’s Stance dataset. Statistically significant correlation coefficients (ρ) are marked with *. 789
2024_short
70
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 790–801 August 11-16, 2024 ©2024 Association for Computational Linguistics Linear-time Minimum Bayes Risk Decoding with Reference Aggregation Jannis Vamvas and Rico Sennrich Department of Computational Linguistics, University of Zurich {vamvas,sennrich}@cl.uzh.ch Abstract Minimum Bayes Risk (MBR) decoding is a text generation technique that has been shown to improve the quality of machine translations, but is expensive, even if a sampling-based approximation is used. Besides requiring a large number of sampled sequences, it requires the pairwise calculation of a utility metric, which has quadratic complexity. In this paper, we propose to approximate pairwise metric scores with scores calculated against aggregated reference representations. This changes the complexity of utility estimation from O(n2) to O(n), while empirically preserving most of the quality gains of MBR decoding. We release our source code.1 1 Introduction The idea of generating translations by maximizing a metric of translation quality (Kumar and Byrne, 2004) has recently been revived in the context of neural machine translation. In sampling-based MBR decoding (Eikema and Aziz, 2020), many hypotheses are sampled from the model distribution, and their expected utility is estimated using Monte Carlo (MC) sampling. This approach has been shown to improve translation quality compared to beam search, especially when neural metrics are used for utility estimation (Freitag et al., 2022). Estimating utility through MC sampling has quadratic complexity in the number of samples, which limits practical application. Previous work suggested pruning the number of samples based on a cheaper metric or a smaller number of references (Eikema and Aziz, 2022; Cheng and Vlachos, 2023). In this paper, we propose reference aggregation, an alternative efficiency technique that exploits the fact that most common metrics represent text sequences in averageable form, e.g., as n-gram statistics or as embeddings. Specifically, 1https://github.com/ZurichNLP/mbr we combine representations of the references into an aggregate reference representation, which we then use for utility estimation. Our proposed approximation still relies on MC sampling, but on a lower level: Rather than computing an MC estimate of the expected utility, we compute an MC estimate of the “true” reference representation in the feature space of the given utility metric. Since this estimate only needs to be computed once, our approach has linear complexity in the number of sampled hypotheses and references. We report empirical results for four translation directions and two utility metrics: CHRF (Popovi´c, 2015), which is based on character n-gram overlap, and COMET (Rei et al., 2020), a neural network trained with examples of human translation quality judgments. For CHRF, we find that reference aggregation reduces the time needed for computing the utility of 1024 samples by 99.5%, without affecting translation quality. For COMET, metric accuracy does decrease with aggregation, but to a lesser extent than with simply reducing the number of references. Depending on the COMET model, computation time is reduced by 95–99%, which makes reference aggregation an efficient method for hypothesis pruning with COMET. 2 Background and Related Work Sampling-based MBR (Eikema and Aziz, 2020) selects a translation hyp∗out of a set of translation hypotheses hyp1, . . . , hypn ∈hyps by maximizing (expected) utility: hyp∗= arg max hyp ∈hyps utility(hyp). (1) The set of hypotheses is sampled from the model distribution p(hyp|src). Eikema and Aziz (2020) propose to approximate the utility using MC sampling: sample a set of pseudo-references refs = {ref1, . . . , refm} ∼p(ref|src) from the model and 790 calculate a metric against each sampled reference: utility(hyp) ≈1 m X ref ∈refs metric(hyp, ref). (2) For machine translation, typical such metrics are CHRF (Popovi´c, 2015) and BLEU (Papineni et al., 2002), which are based on n-gram statistics, or neural metrics such as COMET (Rei et al., 2020) and BLEURT (Sellam et al., 2020). A line of research has focused on improving the efficiency of sampling-based MBR. Eikema and Aziz (2022) propose coarse-to-fine MBR, which prunes the hypotheses based on a cheaper metric, and N-by-S MBR, which uses fewer references than hypotheses. Cheng and Vlachos (2023) propose confidence-based pruning, where the number of hypotheses is iteratively reduced based on an increasing number of references. Jinnai and Ariu (2024) interpret sampling-based MBR as an instance of medoid identification and apply an established approximation algorithm to this problem. A line of work uses MBR outputs as a training reward, avoiding the inefficiency of MBR during deployment (Finkelstein et al., 2023; Yang et al., 2023). Finally, alternative reranking approaches that do not require pairwise comparisons have been proposed (Fernandes et al., 2022). Several other works investigate the aggregation of reference representations to develop a faster variant of MBR decoding. DeNero et al. (2009) perform reference aggregation in the context of statistical machine translation (SMT). Since SMT does not afford random sampling of pseudo-references, they aggregate references from translation forests or k-best lists. Our study shows the effectiveness of reference aggregation from sampled pseudoreferences, and for neural metrics such as COMET. Furthermore, concurrent to our work, Deguchi et al. (2024) propose to aggregate the sentence embeddings of COMET, and use k-means to group the references into multiple clusters. 3 Reference Aggregation Our approach is based on the observation that most metrics that are commonly used for MBR make use of feature representations that can be aggregated. For example, the n-gram statistics used by CHRF can be aggregated by averaging the counts of the n-grams across all references; and the sentence embeddings used by COMET can be aggregated by calculating an average sentence embedding. For simplicity, we re-use the above notation, where hyp is a hypothesis and ref is a reference, but we now assume that they are represented in an averageable form. We then combine the set of references refs into an aggregate representation ref: ref = 1 m X ref ∈refs ref. (3) We approximate the expected the utility of a sampled hypothesis by calculating a single metric score against this aggregate representation: utility(hyp) ≈metric(hyp, ref). (4) Like with standard sampling-based MBR, it is possible to interpret this approximation as MC sampling: By averaging over representations of sampled references, we estimate a representation of the “true” reference, which we then use for approximating the expected utility of each sampled hypothesis. Importantly, the computational complexity of our approach is in O(|hyps| + |refs|) rather than O(|hyps| · |refs|); see Appendix D for a discussion. 3.1 Application to chrF Metric CHRF (Popovi´c, 2015) is defined as an F-score over character n-grams: CHRFβ = (1 + β2) · CHRP · CHRR β2 · CHRP + CHRR , (5) where CHRP = |hyp ∩ref| |hyp| and CHRR = |hyp ∩ref| |ref| , and the parameter β controls the relative importance of precision and recall. The representations hyp and ref are bags of n-grams, i.e., objects that map each n-gram to its count in the string. We apply reference aggregation to CHRF by averaging the counts of n-grams across all references: ref = 1 m ] ref ∈refs ref, (6) where U is an operation that sums up the counts of each n-gram. We then approximate the expected utility of a hypothesis by calculating CHRFβ(hyp, ref). Appendix A provides a more formal definition of reference aggregation for CHRF. 791 Accuracy of efficiency methods with CHRF as utility metric n 1 n 2 n 4 n 8 .. . 1 0% 50% 100% Number of effective references (log scale) Top-20 acc. EN–DE Reference aggregation N-by-S n 1 n 2 n 4 n 8 ... 1 DE–EN n 1 n 2 n 4 n 8 ... 1 EN–RU n 1 n 2 n 4 n 8 ... 1 RU–EN Accuracy of efficiency methods with COMET-22 as utility metric n 1 n 2 n 4 n 8 .. . 1 0% 50% 100% Number of eff. references (log scale) Top-20 acc. EN–DE Reference aggregation N-by-S CHRF n 1 n 2 n 4 n 8 ... 1 DE–EN n 1 n 2 n 4 n 8 ... 1 EN–RU n 1 n 2 n 4 n 8 ... 1 RU–EN Figure 1: How accurately do MBR efficiency methods approximate standard MBR? In this validation experiment on newstest21, we gradually increase efficiency by using fewer references for pairwise utility estimation – either by subsampling the references (N-by-S; Eikema and Aziz, 2022) or by aggregating their representations using partial aggregation (Section 3.3). We report top-20 accuracy, which describes how often an efficiency method ranks the correct hypothesis (as selected by standard MBR) among the top 20 hypotheses. An efficiency method with a high top-20 accuracy could be used for pruning the number of hypotheses to 20 before standard MBR is applied. 3.2 Application to COMET Metric COMET (Rei et al., 2020) is a pre-trained Transformer model (Vaswani et al., 2017) that has been fine-tuned to predict human judgments of translation quality. In this paper, we focus on the Estimator model architecture, which directly estimates a quality score given a hypothesis, a reference and the source sequence. COMET separately encodes these three inputs into fixed-size embeddings: hyp, ref, src = emb(hyp), emb(ref), emb(src). The three embeddings are then fed into a feedforward module, which outputs a scalar score: comet(hyp) = score(hyp, ref, src). (7) We apply reference aggregation to COMET by averaging the reference embeddings: ref = 1 m X ref ∈refs emb(ref), (8) calculating a single score per hypothesis: comet(hyp) ≈score(hyp, ref, src). (9) 3.3 Partial Aggregation To better understand the loss of accuracy incurred by aggregation, we experiment with partial aggregation, where we vary the number of references that are combined into an average. Given m references and a desired number of references s that should effectively be used for pairwise utility estimation, we partition the set of references into s subsets and create an aggregate reference for each subset. Appendix B presents a formal description of partial aggregation. 3.4 Aggregate-to-fine MBR Analogously to coarse-to-fine MBR (Eikema and Aziz, 2022), we evaluate an aggregate-to-fine MBR approach. Specifically, we use the aggregate reference to prune the number of hypotheses to 20 in a first step. In a second step, we use standard MBR to select the best hypothesis from the pruned set. A formal description is provided in Appendix C. 4 Experimental Setup Data We use newstest21 (Akhbardeh et al., 2021) as validation data and newstest22 (Kocmi et al., 2022) as test data. Generation Parameters As baselines, we evaluate beam search with a beam size of 5 and epsilon sampling (Hewitt et al., 2022) with ϵ = 0.02. For MBR, we generate 1024 samples per segment using epsilon sampling and re-use the same samples as references. While this approach does not guarantee 792 EN–DE DE–EN EN–RU RU–EN Avg. Time (utility / total) Beam search (size 5) 76.16 72.56 68.50 75.47 73.17 - / 0.2 s Epsilon sampling (ϵ = 0.02) 73.39 69.70 65.79 72.13 70.25 - / 0.2 s MBR with CHRF metric – standard MBR 76.03 72.73 69.52 75.51 73.44 15.0 s / 19.8 s – reference aggregation 75.95 72.79 69.46 75.45 73.41 0.1 s / 4.9 s – aggregate-to-fine MBR 76.02 72.80 69.54 75.47 73.46 0.4 s / 5.2 s MBR with COMET-22 metric – standard MBR 77.64 73.57 72.40 76.11 74.93 23.1 s / 27.9 s – reference aggregation 77.21 73.36 72.05 76.05 74.67 1.1 s / 5.9 s – aggregate-to-fine MBR 77.54 73.52 72.29 76.13 74.87 1.5 s / 6.3 s Table 1: Test results on newstest22, using BLEURT-20 for automatic evaluation. We use 1024 samples/references for MBR. In the last column, we report the average time needed for translating a segment, measuring (a) the time needed for utility estimation only, and (b) the total, end-to-end time needed for translation. Underline: no significant BLEURT difference to standard MBR; bold: significantly better than standard MBR (bootstrap test, p < 0.05). that the estimation of the expected utility is unbiased (Eikema and Aziz, 2022), it has empirically been found to work well (Freitag et al., 2023). Models We use open-source NMT models trained for the EN–DE, DE–EN, EN–RU and RU– EN translation directions (Ng et al., 2019).2 The authors provide an ensemble of four models per direction, but we restrict our experiments to one single model per direction. We use the Fairseq codebase (Ott et al., 2019) for model inference. Metrics For estimating the utilities with CHRF, we use a custom implementation of CHRF3 that is equivalent to SacreBLEU (Post, 2018) with default settings4. As COMET model, we use COMET-22 (Rei et al., 2022a); because this model was not trained on annotations of newstest21 or newstest22, a train–test overlap can be ruled out. We estimate wall-clock time based on a part of the segments, using a system equipped with an NVIDIA GeForce RTX 3090 and an AMD EPYC 7742 64-core processor. 2The models were trained with a label smoothing of ϵ = 0.1 (Szegedy et al., 2016), which is a common choice in NMT. Some previous studies of MBR trained custom models without label smoothing (e.g., Eikema and Aziz, 2020). We argue that this is only necessary if unbiased utility estimates are sought through ancestral sampling, and should be less of a concern with epsilon sampling. 3https://github.com/jvamvas/fastChrF 4chrF2|#:1|case:mixed|eff:yes|nc:6|nw:0|space:no|v:2.0.0 5 Results 5.1 Validation results Figure 1 evaluates how accurately MBR efficiency methods approximate standard MBR. We report top-20 accuracy, motivated by the idea of coarse-to-fine MBR: any method with perfect top-20 accuracy could be used for pruning the hypothesis set to 20 without affecting quality. Results for top-1 accuracy are reported in Appendix I.5 For CHRF, we observe that reference aggregation is Pareto superior to N-by-S, maintaining nearperfect top-20 accuracy even if a single aggregate reference is used. For COMET, reference aggregation causes some loss of accuracy, but outperforms N-by-S if the number of effective references is ≤16, where efficiency is highest. In addition, we find that reference aggregation approximates standard (pairwise) COMET much better than using CHRF as a coarse metric does, providing a clear motivation for aggregate-to-fine MBR as an alternative to coarse-to-fine MBR. 5.2 Test results In Table 1, we report test results for newstest22, focusing on a comparison between fast baseline algorithms (beam search and sampling) and MBR (with or without reference aggregation). We perform an automatic evaluation using BLEURT-20 (Sellam et al., 2020), chosen because it is unrelated to the utility metrics we use for MBR. CHRF and 5Accuracy was proposed by Cheng and Vlachos (2023) as an evaluation metric for MBR efficiency methods. 793 COMET scores are reported in Appendix F. The results show that reference aggregation narrows the efficiency gap between MBR and beam search while preserving most of the quality gain of standard MBR. Reference aggregation speeds up utility estimation by 99.5% for CHRF and 95.1% for COMET-22, reducing the total time needed for translation by 75.5% and 78.8%, respectively. Using an aggregate-to-fine approach has a lower loss of quality and still reduces the total translation time by 73.6–77.4%. Reference aggregation is thus a successful strategy to overcome the quadratic complexity of MBR. However, it is still slower than beam search, as the cost of sampling is now the dominant factor. Future work could focus on sampling efficiency, e.g., by using fewer hypotheses, improved caching, or speculative sampling approaches (Leviathan et al., 2023; Chen et al., 2023). 6 Conclusion We proposed reference aggregation, a technique that boosts the efficiency of MBR decoding by shifting the MC sampling from the utility estimation to the reference representation. Experiments on machine translation showed that reference aggregation speeds up utility estimation by up to 99.5% while minimally affecting translation quality. This reduces the gap to beam search and makes MBR more practical for large-scale applications. Limitations This work has two main limitations: 1. Reference aggregation requires a utility metric based on averageable representations. 2. For trained metrics, the effectiveness of aggregation needs to be evaluated empirically. We have demonstrated that reference aggregation is a viable technique for MBR with CHRF and COMET, leading to a considerable speed-up with minor quality losses. In the case of CHRF, reference aggregation entails a slight modification of the metric definition, but is otherwise exact and not an approximation. We thus expect that reference aggregation could be applied in a straightforward manner to other lexical overlap metrics such as CHRF++ (Popovi´c, 2017) and BLEU (Papineni et al., 2002). For COMET, which is a trained metric, reference aggregation involves the averaging of fixedsize sentence embeddings. We empirically studied the loss of accuracy incurred by this averaging and found that there is a favorable trade-off between speed and accuracy for the COMET models we evaluated. We recommend that future work validates the effectiveness of reference aggregation for other trained metrics. While CHRF and COMET are among the most commonly used metrics for MBR, previous work has also proposed metrics that are not based on averageable reference representations. For example, BLEURT (Sellam et al., 2020), a trained metric that was shown to be effective for MBR (Freitag et al., 2022), is based on a cross-encoder architecture that creates a joint representation for each hypothesis–reference pair. Future work could investigate in what form, if at all, reference aggregation can be applied to cross-encoder architectures. Finally, this work studies MBR decoding with a classical sequence-to-sequence NMT model and in the context of sentence-level MT. While MBR decoding has also been successfully applied to MT with large language models (Farinhas et al., 2023), more research is needed on MBR decoding with large language models, especially on the document level. Acknowledgments We thank Clara Meister and Bryan Eikema for helpful discussions and feedback. This work was funded by the Swiss National Science Foundation (project MUTAMUR; no. 213976). References Farhad Akhbardeh, Arkady Arkhangorodsky, Magdalena Biesialska, Ondˇrej Bojar, Rajen Chatterjee, Vishrav Chaudhary, Marta R. Costa-jussa, Cristina España-Bonet, Angela Fan, Christian Federmann, Markus Freitag, Yvette Graham, Roman Grundkiewicz, Barry Haddow, Leonie Harter, Kenneth Heafield, Christopher Homan, Matthias Huck, Kwabena Amponsah-Kaakyire, Jungo Kasai, Daniel Khashabi, Kevin Knight, Tom Kocmi, Philipp Koehn, Nicholas Lourie, Christof Monz, Makoto Morishita, Masaaki Nagata, Ajay Nagesh, Toshiaki Nakazawa, Matteo Negri, Santanu Pal, Allahsera Auguste Tapo, Marco Turchi, Valentin Vydrin, and Marcos Zampieri. 2021. Findings of the 2021 conference on machine translation (WMT21). In Proceedings of the Sixth Conference on Machine Translation, pages 1–88, Online. Association for Computational Linguistics. 794 Chantal Amrhein and Rico Sennrich. 2022. Identifying weaknesses in machine translation metrics through minimum Bayes risk decoding: A case study for COMET. In Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1125–1141, Online only. Association for Computational Linguistics. Charlie Chen, Sebastian Borgeaud, Geoffrey Irving, Jean-Baptiste Lespiau, Laurent Sifre, and John Jumper. 2023. Accelerating large language model decoding with speculative sampling. Julius Cheng and Andreas Vlachos. 2023. Faster minimum Bayes risk decoding with confidence-based pruning. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 12473–12480, Singapore. Association for Computational Linguistics. Hiroyuki Deguchi, Yusuke Sakai, Hidetaka Kamigaito, Taro Watanabe, Hideki Tanaka, and Masao Utiyama. 2024. Centroid-based efficient minimum bayes risk decoding. In Findings of the Association for Computational Linguistics: ACL 2024, Bangkok, Thailand. Association for Computational Linguistics. John DeNero, David Chiang, and Kevin Knight. 2009. Fast consensus decoding over translation forests. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 567–575, Suntec, Singapore. Association for Computational Linguistics. Bryan Eikema and Wilker Aziz. 2020. Is MAP decoding all you need? the inadequacy of the mode in neural machine translation. In Proceedings of the 28th International Conference on Computational Linguistics, pages 4506–4520, Barcelona, Spain (Online). International Committee on Computational Linguistics. Bryan Eikema and Wilker Aziz. 2022. Sampling-based approximations to minimum Bayes risk decoding for neural machine translation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 10978–10993, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. António Farinhas, José de Souza, and Andre Martins. 2023. An empirical study of translation hypothesis ensembling with large language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 11956–11970, Singapore. Association for Computational Linguistics. Patrick Fernandes, António Farinhas, Ricardo Rei, José G. C. de Souza, Perez Ogayo, Graham Neubig, and Andre Martins. 2022. Quality-aware decoding for neural machine translation. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1396–1412, Seattle, United States. Association for Computational Linguistics. Mara Finkelstein, Subhajit Naskar, Mehdi Mirzazadeh, Apurva Shah, and Markus Freitag. 2023. MBR and QE finetuning: Training-time distillation of the best and most expensive decoding methods. Markus Freitag, Behrooz Ghorbani, and Patrick Fernandes. 2023. Epsilon sampling rocks: Investigating sampling strategies for minimum Bayes risk decoding for machine translation. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 9198–9209, Singapore. Association for Computational Linguistics. Markus Freitag, David Grangier, Qijun Tan, and Bowen Liang. 2022. High quality rather than high model probability: Minimum Bayes risk decoding with neural metrics. Transactions of the Association for Computational Linguistics, 10:811–825. Nuno M. Guerreiro, Ricardo Rei, Daan van Stigt, Luisa Coheur, Pierre Colombo, and André F. T. Martins. 2023. xcomet: Transparent machine translation evaluation through fine-grained error detection. John Hewitt, Christopher Manning, and Percy Liang. 2022. Truncation sampling as language model desmoothing. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 3414– 3427, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Yuu Jinnai and Kaito Ariu. 2024. Hyperparameter-free approach for faster minimum bayes risk decoding. Donald E Knuth. 1997. Art of computer programming, Volume 2: Seminumerical algorithms, 3rd edition. Addison-Wesley. Tom Kocmi, Rachel Bawden, Ondˇrej Bojar, Anton Dvorkovich, Christian Federmann, Mark Fishel, Thamme Gowda, Yvette Graham, Roman Grundkiewicz, Barry Haddow, Rebecca Knowles, Philipp Koehn, Christof Monz, Makoto Morishita, Masaaki Nagata, Toshiaki Nakazawa, Michal Novák, Martin Popel, and Maja Popovi´c. 2022. Findings of the 2022 conference on machine translation (WMT22). In Proceedings of the Seventh Conference on Machine Translation (WMT), pages 1–45, Abu Dhabi, United Arab Emirates (Hybrid). Association for Computational Linguistics. Shankar Kumar and William Byrne. 2004. Minimum Bayes-risk decoding for statistical machine translation. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics: HLT-NAACL 2004, pages 169–176, Boston, Massachusetts, USA. Association for Computational Linguistics. 795 Yaniv Leviathan, Matan Kalman, and Yossi Matias. 2023. Fast inference from transformers via speculative decoding. In Proceedings of the 40th International Conference on Machine Learning, volume 202 of Proceedings of Machine Learning Research, pages 19274–19286. PMLR. Nathan Ng, Kyra Yee, Alexei Baevski, Myle Ott, Michael Auli, and Sergey Edunov. 2019. Facebook FAIR’s WMT19 news translation task submission. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 314–319, Florence, Italy. Association for Computational Linguistics. Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 48–53, Minneapolis, Minnesota. Association for Computational Linguistics. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Maja Popovi´c. 2015. chrF: character n-gram F-score for automatic MT evaluation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 392–395, Lisbon, Portugal. Association for Computational Linguistics. Maja Popovi´c. 2017. chrF++: words helping character n-grams. In Proceedings of the Second Conference on Machine Translation, pages 612–618, Copenhagen, Denmark. Association for Computational Linguistics. Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186– 191, Brussels, Belgium. Association for Computational Linguistics. Ricardo Rei, José G. C. de Souza, Duarte Alves, Chrysoula Zerva, Ana C Farinha, Taisiya Glushkova, Alon Lavie, Luisa Coheur, and André F. T. Martins. 2022a. COMET-22: Unbabel-IST 2022 submission for the metrics shared task. In Proceedings of the Seventh Conference on Machine Translation (WMT), pages 578–585, Abu Dhabi, United Arab Emirates (Hybrid). Association for Computational Linguistics. Ricardo Rei, Ana C Farinha, José G.C. de Souza, Pedro G. Ramos, André F.T. Martins, Luisa Coheur, and Alon Lavie. 2022b. Searching for COMETINHO: The little metric that could. In Proceedings of the 23rd Annual Conference of the European Association for Machine Translation, pages 61–70, Ghent, Belgium. European Association for Machine Translation. Ricardo Rei, Craig Stewart, Ana C Farinha, and Alon Lavie. 2020. COMET: A neural framework for MT evaluation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2685–2702, Online. Association for Computational Linguistics. Thibault Sellam, Dipanjan Das, and Ankur Parikh. 2020. BLEURT: Learning robust metrics for text generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7881–7892, Online. Association for Computational Linguistics. Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2818–2826. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc. Guangyu Yang, Jinghong Chen, Weizhe Lin, and Bill Byrne. 2023. Direct preference optimization for neural machine translation with minimum bayes risk decoding. 796 A Formal Definition of Reference Aggregation for ChrF The CHRF metric (Popovi´c, 2015) is a harmonic mean of precision and recall scores: CHRFβ = (1 + β2) · CHRP · CHRR β2 · CHRP + CHRR . (10) Internally, CHRF converts hypotheses and references into bags of character n-grams. Such bags can be represented as multisets (Knuth, 1997, Section 4.6.3) or as (sparse) vectors. We will use vector notation in this formal definition, which allows us to define reference aggregation with standard vector operations. Let hyp ∈R|V| and ref ∈R|V| be bags representing a hypothesis and a reference, where V is the vocabulary of all character n-grams up to maximum order n, and the entries hypj and ref j are the counts of n-gram j ∈V in the hypothesis and reference, respectively. For a given n-gram order i ∈{1, . . . , n}, precision and recall are defined as: CHRPi(hyp, ref) = P j∈Vi min(hypj, ref j) P j∈Vi hypj , (11) CHRRi(hyp, ref) = P j∈Vi min(hypj, ref j) P j∈Vi ref j , (12) where Vi is the set of all character n-grams of order i. Overall precision and recall are calculated as the arithmetic mean of the precision and recall scores for each n-gram order: CHRP(hyp, ref) = 1 n n X i=1 CHRPi(hyp, ref), (13) CHRR(hyp, ref) = 1 n n X i=1 CHRRi(hyp, ref). (14) When CHRF is used as a utility metric in a standard MBR setting, the expected utility of a hypothesis is estimated based on a set {ref (1), . . . , ref (m)} of m references: utility(hyp) = 1 m m X k=1 CHRFβ(hyp, ref (k)). (15) In contrast, reference aggregation first calculates the arithmetic mean of the reference bags: ref = [ 1 m m X k=1 ref (k) 1 , . . . , 1 m m X k=1 ref (k) |V|], (16) and estimates the utility as: utilityagg(hyp) = CHRFβ(hyp, ref). (17) Note that the only mathematical difference between pairwise calculation of chrF and using the aggregate reference is that the F-score is averaged across sentences in the pairwise calculation, and computed over the global precision and recall with reference aggregation. B Formal Definition of Partial Aggregation We conceptualize partial aggregation as follows: 1. The set of individual references contains m references. 2. We randomly partition the set of references into s groups of equal size. 3. Each group is combined into an average reference representation, resulting in s aggregate references ref (i), . . . , ref (s). The expected utility of each sampled hypothesis is then approximated as the average metric score over all aggregate references: utility(hyp) ≈1 s s X i=1 metric(hyp, ref (i)). (18) Like with N-by-S MBR, the parameter s can be seen as the number of effective references that determines the computational complexity of the utility estimation. The case s = m corresponds to standard MBR, where each sampled hypothesis is compared to each reference in a pairwise fashion. The case s = 1 corresponds to the full aggregation approach, where a single aggregate reference is created from all references. 797 C Formal Definition of Aggregate-to-fine MBR Aggregate-to-fine MBR is a special case of coarseto-fine MBR (Eikema and Aziz, 2022), which uses a cheap proxy utility function to prune the number of hypotheses. In the case of aggregate-to-fine MBR, the proxy utility function is based on an aggregate reference representation. The general definition of coarse-to-fine MBR is as follows: Given the original set of sampled hypotheses ¯H(x) and a proxy utility function uproxy, coarse-to-fine MBR selects a subset of T hypotheses: ¯HT (x) := top-T hyp∈¯ H(x) uproxy(hyp). (19) In the second step, the utility of each hypothesis in the pruned set is estimated using the fine-grained utility function utarget: yC2F := arg max hyp∈¯ HT (x) utarget(hyp). (20) When experimenting with aggregate-to-fine MBR, we re-use the same utility metric for both steps, but first with an aggregate reference and then with the full set of references: uproxy(hyp) = metric(hyp, ref), (21) utarget(hyp) = 1 m X ref∈refs metric(hyp, ref). (22) Note that using the same metric in both steps is not strictly necessary, but has the advantage that the features (e.g., embeddings) only need to be computed once. D Complexity Analysis Generally, reference aggregation reduces the complexity of utility estimation from O(nm) to O(n + m), where n is the number of hypotheses and m is the number of references. The exact complexity depends on the specifics of the utility metric. Here, we provide a more detailed analysis for CHRF and COMET. Above, we stated that utility estimation with these metrics usually has two stages: feature extraction and scoring. The feature extraction stage is not affected by reference aggregation, and previous work has already remarked that reference features can be extracted once and re-used for all hypotheses (Amrhein and Sennrich, 2022). If the reference set is identical to the set of hypotheses, the feature extraction stage is in O(n), otherwise O(n + m). The scoring stage of CHRF is dominated by the element-wise minimum function in Eqs. 11 and 12 (or, if the bags of n-grams are represented as multisets, by the intersection operation hyp ∩ref). Because this operation is performed separately for each hypothesis–reference pair, the complexity is in O(nm). Reference aggregation reduces the complexity to O(n + m), given that the aggregate reference can be computed once and then re-used for all hypotheses.6 The same analysis applies to COMET. With standard MBR, Eq. 7 is evaluated for each hypothesis–reference pair; with reference aggregation, it is only evaluated once for each hypothesis. The aggregate reference embeddings can be computed once and re-used for all hypotheses. In practice, the runtime of utility estimation is affected by additional factors. There may be duplicates among the samples, so the number of scores that effectively need to be computed can vary. In addition, most aspects of utility estimation can be computed in parallel, which makes the effective runtime highly implementation-dependent. 6For CHRF, reference aggregation can result in an aggregate bag of n-grams that is larger that the bags of the individual references; in the theoretical worst case, where all the references are disjoint, even in an aggregate bag that is m times larger. However, this is a highly unlikely scenario in practice, since different translations of the same source will have substantial overlap, and even if |ref| ≫|ref|, the cost of intersection only depends on |hyp|, assuming that a constanttime hash table is used to check whether each item in hyp is contained in ref. 798 E Data Statistics # Segments # Samples per segment # Unique samples per segment newstest21 EN–DE 1002 1024 874.2 newstest21 DE–EN 1000 1024 716.9 newstest21 EN–RU 1002 1024 896.7 newstest21 RU–EN 1000 1024 727.3 newstest22 EN–DE 2037 1024 697.5 newstest22 DE–EN 1984 1024 671.4 newstest22 EN–RU 2037 1024 750.2 newstest22 RU–EN 2016 1024 726.3 Table 2: Statistics for the datasets used in this paper. We sample 1024 hypotheses per source segment using epsilon sampling and find that most of the samples are unique. F Extended Test Results CHRF Cometinho COMET-22 XCOMET-XL BLEURT-20 Beam search (size 5) 58.6 56.0 84.3 92.2 73.2 Epsilon sampling (ϵ = 0.02) 52.6 45.3 81.9 89.4 70.3 MBR with CHRF metric – standard MBR 59.8 58.3 84.5 91.8 73.4 – reference aggregation 59.8 58.2 84.5 91.7 73.4 – aggregate-to-fine MBR 59.8 58.3 84.5 91.8 73.5 MBR with Cometinho metric – standard MBR 57.5 65.1 85.1 92.5 74.0 – reference aggregation 57.8 64.5 85.0 92.4 73.9 – aggregate-to-fine MBR 57.5 65.0 85.1 92.5 74.0 MBR with COMET-22 metric – standard MBR 57.3 60.8 87.1 93.7 74.9 – reference aggregation 57.7 60.8 86.8 93.4 74.7 – aggregate-to-fine MBR 57.4 60.8 87.0 93.7 74.9 Coarse-to-fine MBR – standard CHRF to COMET-22 59.3 60.1 85.8 93.0 74.4 – aggregate CHRF to COMET-22 59.4 60.2 85.8 93.0 74.4 Table 3: Extended results on newstest22 with 1024 samples/references for MBR. In this table, we include Cometinho (Rei et al., 2022b) as utility metric, which is a distilled COMET model. Furthermore, as an additional evaluation metric, we report XCOMET-XL (Guerreiro et al., 2023). We average the evaluation scores across the four translation directions. Underline: no significant difference to standard MBR; bold: significantly better than standard MBR (bootstrap test, p < 0.05). 799 G Test Results with 256 Samples EN–DE DE–EN EN–RU RU–EN Avg. Time (utility / total) Beam search (size 5) 76.16 72.56 68.50 75.47 73.17 - / 0.2 s Epsilon sampling (ϵ = 0.02) 73.39 69.70 65.79 72.13 70.25 - / 0.2 s MBR with CHRF metric – standard MBR 75.90 72.66 69.27 75.60 73.36 0.8 s / 2.1 s – reference aggregation 75.83 72.69 69.19 75.53 73.31 < 0.1 s / 1.3 s – aggregate-to-fine MBR 75.90 72.67 69.29 75.58 73.36 0.1 s / 1.4 s MBR with COMET-22 metric – standard MBR 77.44 73.38 72.15 76.07 74.76 1.6 s / 2.9 s – reference aggregation 77.18 73.24 71.85 75.98 74.56 0.3 s / 1.6 s – aggregate-to-fine MBR 77.42 73.36 71.98 76.05 74.70 0.4 s / 1.7 s Table 4: Version of Table 1 that uses 256 samples/references for MBR. H Effect of Larger Beam Size Beam size EN–DE DE–EN EN–RU RU–EN Avg. 5 76.16 72.56 68.50 75.47 73.17 10 76.20 72.57 67.92 75.51 73.05 15 76.19 72.53 68.10 75.48 73.08 20 76.18 72.54 67.84 75.49 73.01 25 76.19 72.50 67.82 75.46 72.99 Table 5: Increasing the beam size to values larger than 5 does not tend to improve translation quality of beam search on newstest22 in terms of BLEURT-20. 800 I Top-1 Accuracy of Efficiency Methods Utility metric: CHRF n 1 n 2 n 4 n 8 .. . 1 0% 50% 100% Number of effective references (log scale) Top-1 accuracy EN–DE Reference aggregation N-by-S n 1 n 2 n 4 n 8 ... 1 DE–EN n 1 n 2 n 4 n 8 ... 1 EN–RU n 1 n 2 n 4 n 8 ... 1 RU–EN Utility metric: COMET-22 n 1 n 2 n 4 n 8 .. . 1 0% 50% 100% Number of eff. references (log scale) Top-1 accuracy EN–DE Reference aggregation N-by-S CHRF n 1 n 2 n 4 n 8 ... 1 DE–EN n 1 n 2 n 4 n 8 ... 1 EN–RU n 1 n 2 n 4 n 8 ... 1 RU–EN Figure 2: Top-1 accuracy of MBR efficiency methods on newstest21, analogous to Figure 1. J Validation Results for Cometinho Top-20 accuracy n 1 n 2 n 4 n 8 .. . 1 0% 50% 100% Number of eff. references (log scale) Top-20 accuracy EN–DE Reference aggregation N-by-S CHRF n 1 n 2 n 4 n 8 ... 1 DE–EN n 1 n 2 n 4 n 8 ... 1 EN–RU n 1 n 2 n 4 n 8 ... 1 RU–EN Top-1 accuracy n 1 n 2 n 4 n 8 .. . 1 0% 50% 100% Number of eff. references (log scale) Top-1 accuracy EN–DE Reference aggregation N-by-S CHRF n 1 n 2 n 4 n 8 ... 1 DE–EN n 1 n 2 n 4 n 8 ... 1 EN–RU n 1 n 2 n 4 n 8 ... 1 RU–EN Figure 3: Accuracy of MBR efficiency methods on newstest21 when using the Cometinho model (Rei et al., 2022b) as utility metric. 801
2024_short
71
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 802–812 August 11-16, 2024 ©2024 Association for Computational Linguistics Cleaner Pretraining Corpus Curation with Neural Web Scraping Zhipeng Xu1, Zhenghao Liu1*, Yukun Yan2, Zhiyuan Liu2, Ge Yu1 and Chenyan Xiong3 1Department of Computer Science and Technology, Northeastern University, China 2Department of Computer Science and Technology, Institute for AI, Tsinghua University, China Beijing National Research Center for Information Science and Technology, China 3Language Technologies Institute, Carnegie Mellon University, United States Abstract The web contains large-scale, diverse, and abundant information to satisfy the informationseeking needs of humans. Through meticulous data collection, preprocessing, and curation, webpages can be used as a fundamental data resource for language model pretraining. However, when confronted with the progressively revolutionized and intricate nature of webpages, rule-based/feature-based web scrapers are becoming increasingly inadequate. This paper presents a simple, fast, and effective Neural web Scraper (NeuScraper) to help extract primary and clean text contents from webpages. Experimental results show that NeuScraper surpasses the baseline scrapers by achieving more than a 20% improvement, demonstrating its potential in extracting higher-quality data to facilitate the language model pretraining. All of the code is available at https: //github.com/OpenMatch/NeuScraper. 1 Introduction Large Language Models (LLMs) have shown impressive performance in various NLP tasks as the size of models scaling up (Chowdhery et al., 2023; Touvron et al., 2023; Achiam et al., 2023; Zhao et al., 2023). However, recent findings in scaling laws indicate that both model size and training data should be scaled proportionally (Hoffmann et al., 2022), posing a significant challenge in acquiring sufficiently large pretraining datasets or even raising concerns about data scarcity (Penedo et al., 2024; Villalobos et al., 2022). To curate more data for pretraining, researchers pay more attention to collecting more valuable data from the Web. The web-crawled datasets, such as CommonCrawl, have been widely used for pretraining, facilitating the development of language models (Wenzek et al., 2020; Radford et al., 2019; * indicates corresponding author. Raffel et al., 2020; Penedo et al., 2024). Nevertheless, prior research has demonstrated that, even after aggressive cleaning, the quality of pre-extracted text provided by CommonCrawl still fails to reach the expected (Raffel et al., 2020; Gao et al., 2021; Penedo et al., 2024). The reason lies in that advertisements, banners, hyperlinks, and other harmful content are usually mixed within the primary content of the page, thereby only extracting these primary contents brings lots of noise to pretraining (Gibson et al., 2005; Vogels et al., 2018). Web scrapers provide opportunities to extract valuable content from the raw HTML pages (Barbaresi, 2021). However, rule-based and heuristic scrapers have notable limitations. On the one hand, web pages are becoming increasingly sophisticated, requiring more intricate underlying code to deal with the page layout (Butkiewicz et al., 2011). In this case, maintaining the scraper rules is timeconsuming and requires much human effort. On the other hand, HTML functions as a markup language, enabling web designers to customize web pages according to individual preferences. Consequently, web pages frequently lack complete standardization, which may mislead the rule-based web scrapers (Hantke and Stock, 2022). In this paper, we present a simple, fast, and effective Neural Web Scraper (NeuScraper) designed to extract primary content from webpages. NeuScraper employs a shallow neural architecture and integrates layout information for efficient scraping. Our experiments demonstrate that NeuScraper surpasses baseline scrapers, achieving a 20% improvement in performance and generating a higher-quality corpus for language model pretraining. Notably, NeuScraper shows the potential of high processing speeds when utilized on GPU. The easy-to-use and open-source tool, NeuScraper, can facilitate the creation of largescale corpora for pretraining. 802 <html> <head> <title> Macintosh-Wikipedia <body> <div> <span> <h1> Macintosh-128K <span> From Wikipedia … <li> <div> 27 Languages <span> <span> Article 1 2 3 4 5 Textual Sequence Modeling of Webpages Macinto… 1 27 Lang… 2 Article 3 Macinto… 4 From… 5 … … n Web Scraping with Neural Method NeuScraper … 1 1 0 2 0 3 1 4 1 5 … n … Figure 1: The Pipeline of Primary Content Extraction Using NeuScraper (Neural Web Scraper). 2 Related Work Leveraging web scrapers for extraction provides a promising way to extract high-quality content from webpages. Such a web scraping task is usually defined as text extraction, boilerplate removal, template removal, or generic web extraction in different webpage processing pipelines (Finn et al., 2001; Rahman et al., 2001; Vieira et al., 2006), which is distinguished from the web information extraction task that extracts the entities from webpages (Li et al., 2022; Wang et al., 2022). The web scrapers can be divided into rule-based and feature-based methods. Rule-based web scrapers start from web wrappers, which often need manual designs or a wrapper induction system for producing (Muslea et al., 1999; Crescenzi et al., 2001). The web wrappers usually need to be tailored for each webpage, which is not feasible to process large-scale webpages (Guo et al., 2010). A more conventional approach is to create a Document Object Model (DOM) tree, which assists in building a rule-based scraper (Gupta et al., 2003; Guo et al., 2010) or help the comparison of webpages (Yi et al., 2003). Additionally, the work also incorporates tag cumulative distributions (Finn et al., 2001), text density (Sun et al., 2011), and tag ratios (Weninger et al., 2010) to benefit the content extraction from webpages. Except for these rule-based methods, some scrapers use feature-based approaches to better extract the primary contents from webpages. Specifically, they divide the webpage into several blocks using rules built based on the HTML tags or DOM tree. Then they extract dozens to hundreds of handcrafted features from these blocks, such as markup, text/document features (Spousta et al., 2008), linguistic, structural & visual features (Bauer et al., 2007) and DOM tree-based features (Vogels et al., 2018). These features can be fed into SVM (Bauer et al., 2007; Kohlschütter et al., 2010), conditional random fields (Spousta et al., 2008), logistic regressions (Peters and Lecocq, 2013) or convolutional neural network (Vogels et al., 2018) to classify whether the texts in the block are the primary content of the webpages. 3 Neural Web Scraper This section introduces our Neural Web Scraper (NeuScraper) to extract primary contents from webpages. We first introduce the sequence modeling method of webpages (Sec. 3.1) and then describe our neural-based web scraper (Sec. 3.2). 3.1 Textual Sequence Modeling of Webpages As shown in Figure 1, the primary content extraction task aims to extract the content from the highlighted areas, which consists of clean text and represents the main information of the webpage. To facilitate the web scraping with NeuScraper, we convert the HTML code into textual sequences. Previous work (Bauer et al., 2007) has demonstrated the effectiveness of both structural and visual features in helping to identify primary contents. Thus, to preserve webpage layout information, we rely on the DOM tree structure to transform webpages into textual sequences. Specifically, we employ the BeautifulSoup41 toolkit to build the DOM tree for each webpage, conduct the depthfirst traversal on the tree and regard the visited order as additional location information to represent the nodes. During this process, only the nodes that contain plant texts, table nodes (tagged with <table>), and list nodes (tagged with <ol>, <ul> or <dl>) are reserved to produce the final textual sequences X = {x1, x2, ..., xn}, where n denotes the number of the reserved DOM nodes. After processing, the web scraping task primarily involves determining whether the node xi contains the primary content of the webpage for evaluation. 1https://pypi.org/project/beautifulsoup4/ 803 3.2 Web Scraping with the Neural Method In this subsection, we introduce our neural modeling method to build the web scraper. To process the textual sequences X = {x1, x2, ..., xn}, we build a hierarchical architecture for node-level prediction. Specifically, to guarantee the efficiency of NeuScraper, we use the first layer of the XLMRoberta (Conneau et al., 2020) model to encode the text representation xi of the i-th DOM node as the 768-dimensional node representation hi: hi = XLMRoberta-Layer1(xi), (1) where hi is the representation of the “[CLS]” token. Then we feed these node representations H = {h1, h2, ..., hn} into a 3-layer transformer model (Vaswani et al., 2017) with 8 attention heads to get the encoded node representations: ˆhi = Transformer(Linear(hi)), (2) where the linear layer projects hi to 256dimensional embeddings for efficient modeling. Following previous work (Overwijk et al., 2022), the DOM nodes can be categorized into six kinds of labels yk, including primary content, heading, title, paragraph, table, and list. Then we calculate the label prediction probability P(yk i = 1|xi) of the k-th category label yk i of the i-th node: P(yk i = 1|xi) = Sigmoid(MLP(ˆhi)) (3) Finally, NeuScraper is trained using the loss L: L = 6 X k=1 n X i=1 CrossEntropy(P(yk i |xi), Yk i ), (4) where Yk i is the ground truth label. Yk i is a binary label and Yk i = 1 indicates that the i-th DOM node belongs to the k-th label category. During inference, we only consider the primary content label to extract the texts from webpages. 4 Experimental Methodology In this section, we describe the datasets, baselines, evaluation metrics and implementation details. Dataset. We use ClueWeb22 (Overwijk et al., 2022) dataset in experiments. The content extraction labels of ClueWeb22 were generated from the production system of a commercial search engine. The labels are not available for general web scraping tools, because they are annotated with more expensive signals of page rendering and visualization. More details are shown in Appendix A.2. Method Evaluation Metrics Latency Acc. Prec. Rec. F1 (ms) htmlparser 40.73 40.65 98.95 57.63 19.01 bs4 41.29 40.96 99.94 58.10 12.65 html2text 40.44 39.35 85.40 53.88 15.85 boilerpipe 66.48 66.79 35.27 46.16 11.05 jusText 62.58 72.62 13.08 22.17 10.91 lxml 64.62 61.48 35.22 44.78 10.96 inscriptis 45.35 42.48 96.43 58.98 14.99 readability 68.47 72.84 36.04 48.22 12.36 trafilatura 70.70 66.57 56.42 61.08 11.95 NeuScraper 86.35 80.77 87.29 83.90 11.39 Table 1: Overall Performance. We use ClueWeb22 to evaluate the content extraction effectiveness of different web scrapers. More details are shown in Appendix A.2. Baseline. The scraping baselines consist of nine open-sourced web scrapers, including basic HTML manipulators (html2text and inscriptis (Weichselbraun, 2021)), generic webpage parsers (beautifulsoup4, lxml and htmlparser), rule-based scrapers (jusText and readability) and machine learning-based scraper (boilerpipe (Kohlschütter et al., 2010)). trafilatura (Barbaresi, 2021) is our main baseline, which combines different rules and heuristic methods. Evaluation Metrics. The accuracy, precision, recall, and F1 score, are used to evaluate the effectiveness in extracting primary contents. Furthermore, we use different scrapers to produce the web corpus and pretrain language models. The quality of scraping can be demonstrated by the results of standard downstream tasks. Implementation Details. NeuScraper is trained for 30 epochs using the AdamW optimizer with a batch size of 1024. Learning rate adjustments followed the cosine decay schedule, with a warm-up phase spanning the initial 5% of iterations and the peak rate fixed at 6e-4. To accommodate memory and computational speed limitations, the maximum length of node sequences was chunked to 384. 5 Evaluation Result In this section, we first show the effectiveness of different scrapers in extracting primary contents from the raw webpages. Subsequently, we evaluate the quality of the extracted data and utilize it to pretrain language models of varying scales. 804 Size Method BLIMP ARC-e ARC-c SWAG WinoG SciQ Lambada LogiQA AVG ClueWeb22 160M htmlparser 70.87 41.16 17.23 32.24 49.88 66.10 16.96 22.58 39.63 trafilatura 73.46 42.46 18.25 34.08 48.61 69.20 18.10 22.11 40.78 NeuScraper 74.01 42.84 18.43 34.14 51.46 69.00 17.58 21.50 41.12 410M htmlparser 74.24 42.63 18.77 34.45 49.80 70.80 22.35 22.42 41.93 trafilatura 77.84 45.28 20.56 37.29 52.32 72.90 23.77 21.96 43.99 NeuScraper 76.71 47.34 20.47 37.00 50.74 74.40 26.76 24.42 44.73 CommonCrawl 160M htmlparser 58.38 29.71 18.77 28.85 50.27 38.60 5.16 19.66 31.17 trafilatura 69.72 34.72 18.51 32.04 49.56 56.90 11.70 23.96 37.13 NeuScraper 69.27 36.15 18.43 32.61 51.77 60.50 15.48 20.73 38.12 410M htmlparser 61.30 28.28 17.23 29.36 50.35 41.00 6.50 20.73 31.84 trafilatura 72.66 36.74 20.13 33.91 51.30 55.40 16.08 21.35 38.44 NeuScraper 74.42 39.30 18.60 34.77 50.03 61.40 20.66 21.81 40.12 Table 2: Effectiveness of Pythia Pretraining Using the Extracted Data from Different Scrapers. We pretrained Pythia models of different sizes on ClueWeb22 and CommonCrawl respectively. More details are shown in Appendix A.3. 5.1 Overall Performance The effectiveness of baseline scrapers and our NeuScraper in extracting primary contents from the raw webpages is shown in Table 1. Among all baseline scrapers, the trafilatura exhibits the highest performance, showcasing its effectiveness in content extraction through its cascade of rulebased filters and content heuristic methods. Our NeuScraper surpasses all traditional web scrapers and achieves over a 20% improvement. It illustrates the effectiveness of our NeuScraper in learning the schemes of the primary contents, generalizing its advances to handle various layouts of pages and extracting high-quality texts from them. Notably, with the GPU support and distributed computation, NeuScraper achieves competitive scraping latency. 5.2 Effectiveness of the Cleaned Web Data in Language Model Pretraining This part evaluates the effectiveness of language models pretrained on the web data. As shown in Table 2, we utilize different scrapers to handle the webpages sourced from ClueWeb22 and CommonCrawl, and leverage the extracted data to pretrain Pythia models (Biderman et al., 2023). The evaluation results demonstrate that employing the NeuScraper for webpage processing enhances the performance of language models in downstream tasks. It is noteworthy that the NeuScraper represents a data-driven scraping approach, circumventing the need for building sophisticated rules and conducting intricate feature engineering to deal with the continuously evolving HTML layouts. Wikitext Lambada Dataset 0 50 100 150 200 250 Perplexity 108.0 143.5 85.0 90.6 84.3 75.8 htmlparser trafilatura NeuScraper (a) ClueWeb22. Wikitext Lambada Dataset 0 200 400 600 800 1000 1200 1400 Perplexity 571.1 859.6 188.5 346.4 117.8 181.1 htmlparser trafilatura NeuScraper (b) CommonCrawl. Figure 2: The Effectiveness of Language Models Trained on Web Data to Reproduce the Target Corpora. Lower perplexity indicates more proficiency in language models for reproducing. 5.3 Evaluation on the Quality of Extracted Data Using NeuScraper In this subsection, we aim to estimate the quality of extracted data using NeuScraper. The evaluation results are shown in Figure 2. It is apparent that if two corpora are of comparable quality, their n-gram distributions should exhibit similarity. Thus, we use the language models pretrained on web data (the same as Sec. 5.2) to ask these language models to reproduce the target corpora, such as Wikitext (Merity et al., 2017) and Lambada (Radford et al., 2019). The perplexity is used to evaluate the effectiveness of the language models pretrained on web data in replicating the target corpora. The lower perplexity indicates the language model is more proficient to the target corpora, showing the pretrained data and target data have more overlaps and are more similar. The evaluation results reveal that the utilization 805 Method Evaluation Metrics Latency Acc. Prec. Rec. F1 (ms) CPU 86.35 80.77 87.29 83.90 55.25 + qint8 86.37 80.70 87.48 83.95 42.22 + quint8 86.39 80.68 87.56 83.98 41.48 GPU 86.35 80.77 87.29 83.90 11.39 Table 3: Quantization Performance of NeuScraper on ClueWeb22. We further quantized NeuScraper to accelerate its inference on the CPU. of extracted content from some simple scrapers, such as htmlparser, significantly impacts the effectiveness of language models, which causes an increase of more than 20 points in perplexity due to the noise derived from webpages. Compared with the trafilatura, NeuScraper decreases the perplexity by over ten points, showing its capability to yield higher-quality data for pretraining through learning to extract primary content. 5.4 Model Quantization for NeuScraper In this subsection, we quantize the model of NeuScraper via onnxruntime2 to evaluate its efficiency in resource-constrained scenarios. As shown in Table 3, we utilize qint8 and quint8 to quantize our NeuScraper. The qint8 quantizes model parameters or layer outputs to signed 8-bit integers, while quint8 quantizes them to unsigned 8-bit integers, reducing model size and improving computational efficiency. Benefiting from quantization, NeuScraper accelerates by 25% with no loss of performance compared to the original model. While processing is still 4-5x slower compared to GPUs, it also provides a potential way to scrap in low-resource scenarios via NeuScraper. 6 Conlusion This paper proposes NeuScraper, which employs a shallow neural architecture to clean the webpages. The experimental results show the effectiveness of NeuScraper. The open-sourced and easy-used web scraper may facilitate the research on language model pretraining. Limitation To guarantee efficiency, NeuScraper needs the powerful parallelism of GPUs to achieve highspeed web scraping. In addition, for large-scale pretraining corpus processing, a high throughput 2https://onnxruntime.ai storage medium is required to ensure inference efficiency due to the frequent data swapping between the storage medium and GPU. Acknowledgments This work is partly supported by the Natural Science Foundation of China under Grant (No. 62206042, No. 62137001, and No. 62272093), the Joint Funds of Natural Science Foundation of Liaoning Province (No. 2023-MSBA-081), and the Fundamental Research Funds for the Central Universities under Grant (No. N2416012). References Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. Gpt-4 technical report. ArXiv preprint, abs/2303.08774. Adrien Barbaresi. 2021. Trafilatura: A web scraping library and command-line tool for text discovery and extraction. In Proceedings of ACL, pages 122–131, Online. Daniel Bauer, Judith Degen, Xiaoye Deng, Priska Herger, Jan Gasthaus, Eugenie Giesbrecht, Lina Jansen, Christin Kalina, Thorben Kräger, Robert Märtin, et al. 2007. Fiasco: Filtering the internet by automatic subtree classification, osnabruck. In Proceedings of WAC3, pages 111–121. Stella Biderman, Hailey Schoelkopf, Quentin Gregory Anthony, Herbie Bradley, Kyle O’Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, et al. 2023. Pythia: A suite for analyzing large language models across training and scaling. In Proceedings of ICML, pages 2397–2430. Michael Butkiewicz, Harsha V Madhyastha, and Vyas Sekar. 2011. Understanding website complexity: measurements, metrics, and implications. In Proceedings of IMC, pages 313–328. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2023. Palm: Scaling language modeling with pathways. JMLR, (240):1–113. Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of ACL, pages 8440–8451, Online. Valter Crescenzi, Giansalvatore Mecca, Paolo Merialdo, et al. 2001. Roadrunner: Towards automatic data 806 extraction from large web sites. In Proceedings of VLDB, pages 109–118. Aidan Finn, Nicholas Kushmerick, and Barry Smyth. 2001. Fact or fiction: Content classification for digital libraries. In Proceedings of DELOS. Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, et al. 2021. The pile: An 800gb dataset of diverse text for language modeling. ArXiv preprint, abs/2101.00027. David Gibson, Kunal Punera, and Andrew Tomkins. 2005. The volume and evolution of web page templates. In Proceedings of WWW, pages 830–839. Yan Guo, Huifeng Tang, Linhai Song, Yu Wang, and Guodong Ding. 2010. Econ: an approach to extract content from web news page. In Proceedings of APWEB, pages 314–320. Suhit Gupta, Gail E. Kaiser, David Neistadt, and Peter Grimm. 2003. Dom-based content extraction of HTML documents. In Proceedings of WWW, pages 207–214. Florian Hantke and Ben Stock. 2022. Html violations and where to find them: a longitudinal analysis of specification violations in html. In Proceedings of IMC, pages 358–373. Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. 2022. Training compute-optimal large language models. ArXiv preprint, abs/2203.15556. Christian Kohlschütter, Peter Fankhauser, and Wolfgang Nejdl. 2010. Boilerplate detection using shallow text features. In Proceedings of WSDM, pages 441–450. Junlong Li, Yiheng Xu, Lei Cui, and Furu Wei. 2022. MarkupLM: Pre-training of text and markup language for visually rich document understanding. In Proceedings of ACL, pages 6078–6087, Dublin, Ireland. Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2017. Pointer sentinel mixture models. In Proceedings of ICLR. Ion Muslea, Steve Minton, and Craig Knoblock. 1999. A hierarchical approach to wrapper induction. In Proceedings of AGENTS, pages 190–197. Arnold Overwijk, Chenyan Xiong, Xiao Liu, Cameron VandenBerg, and Jamie Callan. 2022. Clueweb22: 10 billion web documents with visual and semantic information. ArXiv preprint, abs/2211.15848. Guilherme Penedo, Quentin Malartic, Daniel Hesslow, Ruxandra Cojocaru, Hamza Alobeidli, Alessandro Cappelli, Baptiste Pannier, Ebtesam Almazrouei, and Julien Launay. 2024. The refinedweb dataset for falcon llm: Outperforming curated corpora with web data only. In Proceedings of NeurIPS. Matthew E Peters and Dan Lecocq. 2013. Content extraction using diverse feature sets. In Proceedings of WWW, pages 89–90. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. JMLR, 21:140:1–140:67. AFR Rahman, H Alam, R Hartono, et al. 2001. Content extraction from html documents. In Proceedings of WDA, pages 1–4. Miroslav Spousta, Michal Marek, and Pavel Pecina. 2008. Victor: the web-page cleaning tool. In Proceedings of LREC, pages 12–17. Fei Sun, Dandan Song, and Lejian Liao. 2011. DOM based content extraction via text density. In Proceedings of SIGIR, pages 245–254. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. Llama: Open and efficient foundation language models. ArXiv preprint, abs/2302.13971. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of NeurIPS, pages 5998– 6008. Karane Vieira, Altigran S Da Silva, Nick Pinto, Edleno S De Moura, Joao MB Cavalcanti, and Juliana Freire. 2006. A fast and robust method for web page template detection and removal. In Proceedings of CIKM, pages 258–267. Pablo Villalobos, Jaime Sevilla, Lennart Heim, Tamay Besiroglu, Marius Hobbhahn, and Anson Ho. 2022. Will we run out of data? an analysis of the limits of scaling datasets in machine learning. ArXiv preprint, abs/2211.04325. Thijs Vogels, Octavian-Eugen Ganea, and Carsten Eickhoff. 2018. Web2text: Deep structured boilerplate removal. In Proceedings of ECIR, pages 167–179. Qifan Wang, Yi Fang, Anirudh Ravula, Fuli Feng, Xiaojun Quan, and Dongfang Liu. 2022. Webformer: The web-page transformer for structure information extraction. In Proceedings of WWW, pages 3124–3133. Alex Warstadt, Alicia Parrish, Haokun Liu, Anhad Mohananey, Wei Peng, Sheng-Fu Wang, and Samuel R. Bowman. 2020. BLiMP: The benchmark of linguistic minimal pairs for English. TACL, 8:377–392. 807 Albert Weichselbraun. 2021. Inscriptis–a python-based html to text conversion library optimized for knowledge extraction from the web. ArXiv preprint, abs/2108.01454. Tim Weninger, William H. Hsu, and Jiawei Han. 2010. CETR: content extraction via tag ratios. In Proceedings of WWW, pages 971–980. Guillaume Wenzek, Marie-Anne Lachaux, Alexis Conneau, Vishrav Chaudhary, Francisco Guzmán, Armand Joulin, and Edouard Grave. 2020. CCNet: Extracting high quality monolingual datasets from web crawl data. In Proceedings of LREC, pages 4003– 4012, Marseille, France. Lan Yi, Bing Liu, and Xiaoli Li. 2003. Eliminating noisy information in web pages for data mining. In Proceedings of SIGKDD, pages 296–305. Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. 2018. SWAG: A large-scale adversarial dataset for grounded commonsense inference. In Proceedings of EMNLP, pages 93–104, Brussels, Belgium. Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, et al. 2023. A survey of large language models. ArXiv preprint, abs/2303.18223. 808 A Appendix A.1 License The terms of use for ClueWeb22 can be found on the Lemur Project website3, while CommonCrawl provides its terms of use on its official website4. All of these licenses and agreements allow their data for academic use. A.2 More Experimental Details of Overall Evaluation In this subsection, we describe further details of the implementation of overall evaluation. Dataset. We randomly selected about 8.28 million webpages from ClueWeb22-B English subset as the training set. To evaluate content extraction performance, we utilized a snapshot extracted from ClueWeb22-B, identified as en0001-01. This particular snapshot comprises 19,013 English webpages along with respective annotations. Notably, it’s imperative to highlight that en0001-01 was excluded from both the training, and validation datasets. Metrics. In our experiments, we convert the web scanning task into a binary classification problem, so we can compute relevant metrics at the node level. However, some previous web scrapers would directly return the primary content without node information. Therefore, we directly check whether the reserved plain text contains the text spans of DOM tree nodes, which are annotated as ground truths in the benchmark. Computing Platform. We conducted the training of NeuScraper on a server equipped with 8× NVIDIA A100-40G GPUs, with the training process spanning approximately 40 hours. For the evaluation of baseline scrapers, we utilized a setup comprising 2× Intel Xeon [email protected] CPUs with multiprocessing. In contrast, the evaluation of NeuScraper was carried out using 8× NVIDIA A100-40 GB GPUs, employing an inference batch size of 256 per GPU. A.3 More Experimental Details on Using Cleaned Web Data for Language Model Pretraining In this subsection, we describe additional details of the evaluation of the effectiveness of the cleaned web data in language model pretraining. 3https://lemurproject.org/clueweb22 4https://commoncrawl.org/terms-of-use Pretraining Corpus. We utilize ClueWeb22B and CommonCrawl CC-MAIN-2023-50 as the source corpus for our pretraining endeavors. For ClueWeb22 , we employ various scrapers to acquire the corpus while ensuring an equivalent number of tokens, thereby pretraining the language model to mirror the performance of each scraper. For CommonCrawl, we used the pipeline from PileCC (Gao et al., 2021), but removed the language model filtering. For various sizes of Pythia models, the corpus from ClueWeb22 consistently contains 13 billion tokens, while the corpus from Common Crawl is fixed at 2.8 billion tokens. Pretraining Details. Our pretraining framework extends from the Lit-GPT5 and we evaluate the performance of pretrained models using the standard lm-evaluation-harness toolkit6. Specifically, for all Pythia models, we employed the AdamW optimizer with a peak learning rate in line with Biderman et al. (2023). The total batch size was set to 480 (with the batch size of 12 per GPU and gradient accumulation being set to 10). For ClueWeb22, the model undergoes training for just one epoch. For CommonCrawl, it is trained across three epochs due to the size of the corpus. All of the models were trained on 4× NVIDIA A100-40G GPUs. Datasets for Evaluation. We choose 8 standard datasets to evaluate the performance of pretrained language models. Some of them are from the Pythia standard benchmark (Biderman et al., 2023), supplemented by SWAG (Zellers et al., 2018) and BLIMP (Warstadt et al., 2020). Baselines. In this experiments, we chose to use htmlparser7 and trafilatura (Barbaresi, 2021) as the main baselines for comparison. htmlparser serves as the text pre-extraction tool for CommonCrawl WET file, while trafilatura has become the state-of-the-art web scraper. A.4 Performance on Multilingual Webpages Thanks to the careful planning of ClueWeb22, which allows us to evaluate the performance of scrapers in different languages. Specifically, we tested on snapshots coded 0001-01 for each language, the results are shown in Table 4. Among all the baseline scrapers, NeuScraper demonstrated excellent performance, even though it was trained only on English data. 5https://github.com/Lightning-AI/lit-gpt 6https://github.com/EleutherAI/ lm-evaluation-harness 7https://htmlparser.sourceforge.net 809 English German Spanish French Italian Acc F1 Acc F1 Acc F1 Acc F1 Acc F1 bs4 41.29 58.10 40.49 57.23 39.34 56.18 40.05 56.84 38.92 55.72 html2text 40.44 53.88 38.91 52.51 37.19 50.28 38.65 51.72 37.57 50.20 boilerpipe 66.48 46.16 66.38 43.63 70.04 51.74 67.83 46.56 69.85 50.56 jusText 62.58 22.17 65.84 42.98 61.25 2.13 60.79 2.63 61.56 0.53 lxml 64.62 44.78 63.47 43.07 67.45 48.82 65.32 45.44 67.12 48.61 inscriptis 45.35 58.98 43.82 57.27 42.74 56.30 42.99 56.19 43.42 56.49 readability 68.47 48.22 70.16 50.17 72.08 54.38 71.10 52.21 72.69 54.85 trafilatura 70.70 61.08 73.84 62.43 73.93 62.14 73.60 62.20 74.49 62.87 NeuScraper 86.35 83.90 79.10 73.02 78.89 71.90 76.58 68.12 78.76 71.33 Chinese Japanese Dutch Portuguese Polish Acc F1 Acc F1 Acc F1 Acc F1 Acc F1 bs4 49.10 65.33 49.95 65.75 36.86 53.51 40.39 57.24 36.95 53.60 html2text 48.29 63.94 50.00 64.74 35.44 48.96 38.57 52.09 36.16 49.26 boilerpipe 61.31 42.44 57.33 30.26 70.01 44.82 67.93 49.14 66.96 36.91 jusText 51.38 0.75 51.26 0.49 65.11 12.84 60.33 3.03 63.60 0.76 lxml 62.22 52.79 60.38 50.16 66.16 41.36 66.59 48.73 65.72 40.01 inscriptis 53.09 66.35 53.76 66.57 40.11 53.69 44.01 57.65 40.51 53.15 readability 67.45 56.61 64.64 50.14 71.54 47.03 70.60 53.26 66.81 42.38 trafilatura 68.57 63.29 71.82 67.08 74.06 59.88 72.64 61.67 71.58 53.02 NeuScraper 74.76 73.99 74.01 73.80 77.70 68.13 77.48 71.28 75.84 64.61 Table 4: Scarping Performance in Different Languages. We tested it on ClueWeb22 in different languages and NeuScraper showed significant improvements over the baseline scrapers. A.5 Case Study In this subsection, we show additional case studies of NeuScraper and trafilatura, our neural web scraper and a previously state-of-the-art web scraper. We first analyze the case in Figure 3, where we use red boxes to indicate the content extracted by the scrapers. This is a college course page that contains some expertise in electrical engineering. When scraping this page, trafilatura loses a lot of textual content compared to our NeuScraper. By checking the raw HTML code, we found that there is an error caused by insufficient standardization of web pages: the paragraph tag “<p>” is used for headings on this page instead of the standard “<h>” tag. This page is readable for humans, but the HTML tag conveys an error that seriously affects the extraction performance of trafilatura. In contrast, our NeuScraper shows great adaptability. It not only extracts most of the paragraph content, but also removes useless information such as phone numbers, e-mails, dates, and so on. Another typical case is interleaved boilerplate and body text, as shown in Figure 4. We use blue boxes to indicate the content extracted by the scraper. In this case, the boilerplate and body text are written in the same way. The boilerplate also uses “<h>” to identify headings and “<p>” for paragraphs, instead of the list surrounded by “<li>” in most cases. Recognizing it is difficult for trafilatura. NeuScraper leverages its ability to recognize latent semantic information to remove the boilerplate in such pages successfully. 810 (a) Trafilatura. (b) NeuScraper. Figure 3: Case#1 of the Primary Content Extraction Results Using Different Scrapers. The extracted parts are highlighted with red boxes. 811 (a) Trafilatura. (b) NeuScraper. Figure 4: Case#2 of the Primary Content Extraction Results Using Different Scrapers. The extracted parts are highlighted with blue boxes. 812
2024_short
72
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 813–822 August 11-16, 2024 ©2024 Association for Computational Linguistics Greed is All You Need: An Evaluation of Tokenizer Inference Methods Omri Uzanβ Craig W. Schmidtκ Chris Tannerκµ Yuval Pinterβ β Department of Computer Science κ Kensho Technologies Ben-Gurion University of the Negev µ Massachusetts Institute of Technology Beer Sheva, Israel Cambridge, MA, USA {omriuz@post, uvp@cs}.bgu.ac.il {craig.schmidt, chris.tanner}@kensho.com Abstract While subword tokenizers such as BPE and WordPiece are typically used to build vocabularies for NLP models, the method of decoding text into a sequence of tokens from these vocabularies is often left unspecified, or ill-suited to the method in which they were constructed. We provide a controlled analysis of seven tokenizer inference methods across four different algorithms and three vocabulary sizes, performed on a novel intrinsic evaluation suite we curated for English, combining measures rooted in morphology, cognition, and information theory. We show that for the most commonly used tokenizers, greedy inference performs surprisingly well; and that SaGe, a recently-introduced contextually-informed tokenizer, outperforms all others on morphological alignment. 1 Introduction Modern NLP systems, including large language models (LLMs), typically involve an initial step of mapping raw input text into sequences of subword tokens. These tokens are selected from a large vocabulary of candidates that were produced from algorithms such as Byte-Pair Encoding (BPE; Sennrich et al., 2016), WordPiece (Schuster and Nakajima, 2012), or UnigramLM (Kudo, 2018). This process, which we refer to as the inference method of tokenization, is critical as it determines how all text is represented and subsequently modeled. Each inference method offers distinct mappings, and we assert that it is not well-understood how these methods differ in performance. Furthermore, popular implementation packages such as Huggingface Tokenizers,1 SentencePiece,2 and SubwordNMT3 often obfuscate or even restrict the choice of inference methods, making it unclear if 1https://huggingface.co/docs/tokenizers 2https://pypi.org/project/sentencepiece 3https://github.com/rsennrich/subword-nmt Tokenizerinference mode Segmentation BPEmerges Ul tr am od ern BPElongest prefix Ultra modern UnigramLMlikelihood U nprecedented UnigramLMlongest prefix Un precedent ed SaGelongest prefix Inc once iva ble SaGelikelihood In conceiv able Table 1: Examples of words being segmented differently by various tokenizers (vocab size 32,000) using different inference modes on the same vocabulary. Each tokenizer’s default mode is provided on top. inference-time decoding is compatible with the algorithm used to learn the tokenizer’s vocabulary. Moreover, it is yet to be determined whether such a match is ideal, or even necessary. In Table 1 we present examples demonstrating how the prescribed inference methods of BPE, UnigramLM, and SaGe (Yehezkel and Pinter, 2023) do not necessarily provide the best segmentation for complex English words, even when good segments are available in the vocabulary. BPE’s outof-the-box algorithm merges the cross-morphemic am sequence at an early stage, preventing the consideration of ultra and modern and condemning the downstream model to work with a representation learned for the first-person present form of ‘to be’. UnigramLM’s ablative algorithm enabled nprecedented (which crosses morpheme boundaries) to remain in its final vocabulary of tokens, while SaGe’s greedy algorithm masks the boundaries of both the prefix In and the suffix able. In all cases, an alternative inference method provides a more morphologically-aligned segmentation over the same vocabulary. Previous work regarding subword tokenization mostly concerns developing vocabulary construction algorithms (Sennrich et al., 2016; Schuster and Nakajima, 2012; Kudo, 2018; Mielke et al., 2021; Yehezkel and Pinter, 2023), finding the optimal 813 vocabulary size (Gowda and May, 2020; GutierrezVasques et al., 2021), building multilingual vocabularies (Liang et al., 2023), and using space positioning in the vocabulary tokens (Gow-Smith et al., 2022; Jacobs and Pinter, 2022). Others analyze the effects of vocabularies, finding intricate relations between algorithm or vocabulary and downstream performance (Bostrom and Durrett, 2020; Cognetta et al., 2024a), information theory (Zouhar et al., 2023; Cognetta et al., 2024b), cognitive plausibility (Beinborn and Pinter, 2023), impact on society (Ovalle et al., 2024), or morphological alignment (Klein and Tsarfaty, 2020; Hofmann et al., 2021, 2022; Gow-Smith et al., 2024; Batsuren et al., 2024). Research concerning inference methods has been more scarce, and includes examination of random effects on BPE merges (Provilkov et al., 2020; Saleva and Lignos, 2023) and application of sophisticated search algorithms (He et al., 2020). As far as we know, there exists no comprehensive study comparing inference methods across a variety of vocabularies and sizes using diverse metrics. In this work, we conduct a controlled experiment isolating the effects of inference methods over four tokenizers, introducing an evaluation suite aggregating intrinsic benchmarks from various theoretical realms.4 We find that greedy inference methods work surprisingly well for all four vocabularies across morphological and information-theoretic metrics. Furthermore, we demonstrate that SaGe yields state-of-the-art performance according to morphological metrics, and that inference methods that minimize token count perform strongest by cognitive metrics. 2 Inference Methods Let V denote a vocabulary of subword tokens and w denote a word (or ‘pretoken’), the output of a pretokenizer. We define s(V, w) := (t1, ..., tk) as a segmentation of w into k subword tokens such that ∀i, ti ∈V and that the concatenation of t1, ..., tk results in w. We use the term segmentation to denote the application of an inference method on a text given a token vocabulary, as well as its result. Current widely-employed tokenization schedules couple together the tokenizer vocabulary with the inference method. However, we advocate for decoupling them, as they are independent pro4We release our code and data at https://github.com/ MeLeLBGU/tokenizers_intrinsic_benchmark. cesses. Specifically, given a fixed token vocabulary produced from pre-training data, one could subsequently use any applicable inference method for the task at hand. Thus, in our experiments, we use various intrinsic metrics to analyze the impact and performance of the several classes of inference methods: Greedy inference methods only consider and produce one token at each step. We test three greedy approaches: Longest prefix, which WordPiece uses by default (Wu et al., 2016), selects the longest token in V that is a prefix of w, and then continues to iteratively segment the remaining text. Longest suffix selects the longest token that is a suffix of w and continues iteratively (Jacobs and Pinter, 2022; Bauwens, 2023). Since this strategy diverges from English Morphology, we consider it an intriguing baseline for assessing the impact of linguistic structure on the inference method. Longest token selects the longest token that is contained in w, adds it to the generated segmentation, and then iteratively segments each remaining character sequence. This was proposed by Hofmann et al. (2022) to approximate words by their k longest tokens. They showed that it preserves morphological structure of words and leads to performance gains on some downstream tasks. Merge rules-based inference methods begin with a word’s character sequence and iteratively apply token-forming merge rules learnt by the tokenizer at the vocabulary creation phase, until none can be applied. This is BPE’s default inference mode.5 In our experiments we test two variants for BPE: The deterministic merge strategy recursively applies the first applicable BPE merge rule by its order in the trained merge list. Dropout (Provilkov et al., 2020) applies each valid merge rule with probability p, leading to a regularization effect where rare tokens surface more often and their embeddings can be better trained. It has been shown to improve machine translation performance. Likelihood-based inference methods use individual likelihood values assigned to tokens in order to find a segmentation for w where the total likelihood is maximized (Kudo, 2018; He et al., 2020). Default uses likelihood values learned during vocabulary construction and considers the likelihood 5While ostensibly also compatible with WordPiece, we found no implementation of the model that provides an ordered list of its merges. 814 Resource Type Size Reference License LADEC Morphological 7,804 Gagné et al. (2019) CC BY-NC 4.0 DEED MorphoLex Morphological 12,029 Sánchez-Gutiérrez et al. (2018) CC BY-NC-SA 4.0 DEED MorphyNet Morphological 219,410 Batsuren et al. (2021) CC BY-SA 3.0 DEED DagoBert Morphological 279,443 Hofmann et al. (2020) Not specified—citation based UniMorph Morphological 143,454 Batsuren et al. (2022) CC BY 4.0 DEED UnBlend Morphological 312 Pinter et al. (2020) GPL-3.0 CompoundPiece Morphological 22,896 Minixhofer et al. (2023) Not specified—citation based Cognitive data Cognitive 55,867 Beinborn and Pinter (2023) MIT tokenization-scorer Information Theory — Zouhar et al. (2023) Not specified—citation based Table 2: Size, Reference and License details of the resources in our benchmark. of a segmentation to be the product of individual likelihoods (from which UnigramLM gets its name). Least tokens assigns a constant likelihood value to all tokens, effectively selecting a segmentation where the number of tokens is minimized. While not suggested so far as a standalone inference method, this obecjtive is proposed for both vocabulary training and inference in the PathPiece algorithm (Schmidt et al., 2024). 3 Intrinsic Benchmark Some analyses of tokenizers rely on training language models or translation models and evaluating their performance on downstream tasks. Using this process to isolate effects of tokenization hyperparameters, such as inference method, is both time- and resource-consuming, as well as unstable due to the introduction of multiple sources of randomness throughout the LM/TM pre-training and fine-tuning phases. Few measures have been introduced that are intrinsic to vocabularies and their direct application to corpora, and fewer still avoid conflating the measures with the objectives used in the vocabulary construction process itself. As a result, the body of work focused on improving tokenization schemes is still relatively small. We create and release a benchmark made to intrinsically evaluate subword tokenizers. We collected word-level datasets and information measures which have been shown, or hypothesized, to correlate with the performance of language models on various downstream tasks. Details on these resources are provided in Table 2. At present, the benchmark is focused on the English language, although corresponding datasets exist for others as well. Morphological alignment It is commonly assumed that, for a given tokenizer, alignment of word segments to morphological gold-standard segmentations is a predictor of the ability of a language model that uses the given tokenizer to represent words, especially ‘complex’ ones that are made up of several roots or contain multiple morphological affixes (Schick and Schütze, 2019; Nayak et al., 2020; Hofmann et al., 2021; Gow-Smith et al., 2022). We follow Gow-Smith et al. (2022) and evaluate our tokenizers’s alignment with morphological annotations found in LADEC (Gagné et al., 2019), MorphoLex (Sánchez-Gutiérrez et al., 2018), MorphyNet (Batsuren et al., 2021), and DagoBert (Hofmann et al., 2020). We augment these datasets with morpheme segmentation data (Batsuren et al., 2022), novel blend structure detection data (Pinter et al., 2020), and compound separation data (Minixhofer et al., 2023). The number of words in each resource can be found in Table 2. We compare the segmentations generated by the tokenizers with each inference method to gold-standard morphological segmentations using the metric introduced by Creutz and Linden (2004), and report the macroaveraged F1 score over the different resources. Cognitive Plausibility We use the benchmark and data from Beinborn and Pinter (2023) to measure the correlation of a tokenizer’s output with the response time and accuracy of human participants in a lexical decision task, predicated on the hypothesis that a good tokenizer struggles with character sequences that humans find difficult, and vice versa. We report the average of the absolute value correlation scores across the four linguistic setups (word/nonword × accuracy/response time). Tokens distribution statistics We report the Rényi efficiency of different segmentations across a corpus (Zouhar et al., 2023). This measure penalizes token distributions dominated by either very high- and/or very low-frequency tokens, and was shown to correlate strongly with BLEU scores for machine translation systems trained on the respective tokenizers. Recent work (Cognetta et al., 815 Vocab Inference Morphological Cognitive Rényi Tokens Decoding method alignment plausibility efficiency per word diff BPE longest prefix .8584 .3266 .4482 1.4273 .0502 longest suffix .6467 .3170 .4482 1.4286 .0417 longest token .8738 .3302 .4474 1.4261 .0484 least tokens .7544 .3321 .4476 1.4237 .0382 det. merges .6309 .3355 .4482 1.4308 — dropout merge .6081 .2925 .4537 1.5793 .1313 WordPiece longest prefix .8488 .3307 .4507 1.4430 — longest suffix .6288 .3198 .4502 1.4435 .0656 longest token .8466 .3332 .4500 1.4411 .0216 least tokens .7342 .3306 .4401 1.4319 .0682 UnigramLM longest prefix .9222 .2858 .3400 1.7577 .1187 longest suffix .7520 .2690 .2897 1.7624 .0516 longest token .8845 .2948 .3040 1.7353 .0406 least tokens .8982 .2953 .2969 1.7219 .0328 likelihood .9149 .2937 .2919 1.7314 — SaGe longest prefix .9606 .2581 .3217 1.9445 — longest suffix .7370 .2471 .2832 1.9615 .1704 longest token .9236 .2671 .3027 1.9236 .0887 least tokens .9125 .2674 .2944 1.8895 .1318 likelihood† .9515 .2664 .2937 1.9156 .1168 Table 3: Intrinsic Benchmark results on a vocab size of 40k. ‘Default’ decoding algorithms (used in vocabulary construction) in italics. Not all methods are applicable to all tokenizers. Decoding diff presents the share of pretokens in the MiniPile test set that are differently tokenized using the method, compared with the default. We present correlation scores for performance over the various metric families in Appendix C. †For SaGe, likelihood is only based on unigram scores obtained before further vocabulary ablation. 2024b) reveals a misalignment between Rényi efficiency and downstream performance in certain cases, reinforcing the necessity of an evaluation suite grounded in diverse domains and disciplines, as advocated in this work. We also measure the average number of tokens per word over a corpus, as a proxy for compression quality (Gallé, 2019). We omit the popular measure of character-length distribution of the tokens in the vocabulary, as it does not vary with segmentation strategy. Lastly, we report the proportion of pretokens that are segmented different from the default across our reference corpus. 4 Experiments We evaluate inference methods for the following tokenizer vocabularies: BPE, UnigramLM, WordPiece and SaGe. We use the train split of the MiniPile (Kaddour, 2023) dataset to construct the tokenizer vocabularies. We train vocabularies of sizes 32,768, 40,960, and 49,152, using the HuggingFace Tokenizers library, with identical pre-tokenization, representing the text at byte level. UnigramLM and SaGe require an initial vocabulary for their top-down algorithms; for the former, we used the default implementation of one million top n-grams, while SaGe was initialized with a 262K-size UnigramLM vocabulary. This initial vocabulary also provided us with token likelihood scores for inference, although a more exact implementation would also incorporate the contextual SaGe objective. Token distribution statistics measurements and decoding diff rates were computed over the test split of the MiniPile dataset. We measure the Rényi efficiency using the tokenization-scorer package6 with α = 2.5. For each tokenizer, all experiments ran within several minutes on a personal laptop computer, highlighting the usefulness of our benchmark as an efficient tool for in-loop hyperparamter tuning. We present the results on our benchmark for the 40K vocabularies in Table 3. Results for other sizes are presented in Appendix A. A breakdown of individual evaluation subsets is provided in Appendix B. Inference methods Within each tokenizer, we find that the default (‘intended’) strategy is often outperformed by others on some measures. We observe a significant difference in morphological alignment when using merge rules-based inference methods. Qualitative analysis showed the findings 6https://github.com/zouharvi/tokenization-scorer 816 illustrated in Table 1, where early merge rules such as ‘i-n’, ‘a-m’, or ‘o-n’ cross morphological boundaries. We notice a similar trend for likelihoodbased inference, where frequently-used tokens possess very high likelihood values, sometimes exceeding those of the gold-standard segments. We find that the least tokens strategy fares well not only on the token count metric, which is mostly by-design, but also on cognitive measures, suggesting an effect of human preference to minimal word segmentation. Finally, we observe that likelihood-based inference performs poorly in terms of Rényi efficieny, contrary to its stated purpose. Dropout, on the other hand, performs well on this measure, in line with its goal. longest suffix performs poorly across the board, possibly due to the suffixing nature of the English language, which has complementarily been shown to affect character-level sequential modeling (Pinter et al., 2019). Notably, all our key observations are consistent across vocabulary sizes, as shown in Appendix A. Inter-tokenizer results Our results align with Bostrom and Durrett (2020)’s finding that BPE is inferior to UnigramLM on morphology alignment. However, we show that some of this gap can be attributed not to the vocabulary but to the inference method. In addition, we find that SaGe is most aligned to morphology by a substantial margin, indicating that its contextualized objective succeeds in retaining meaningful tokens in the vocabulary during ablation. It is important to note that our evaluation is limited to English, a language with relatively low morphological complexity. Previous studies have identified significant tokenization challenges in non-English languages (Mager et al., 2022). Therefore, any definitive conclusions regarding the effectiveness of tokenization methods should ideally encompass a diverse array of languages. BPE and WordPiece, optimized for compression, unsurprisingly perform well above the likelihood-based vocabularies on the information measures. However, we note that this carries over to the cognitive benchmark as well, supporting Beinborn and Pinter (2023)’s findings. Finally, we note that the two likelihood-based vocabularies follow the exact same within-vocab trends, and those for the two information-based vocabularies are also very close. This highlights the consistency and robustness of our benchmark, although some results are relatively close to each other, which can be expected considering that some inference methods do not change much of the token sequences (see rightmost column of Table 3). 5 Conclusion In this work, we curated an aggregated benchmark for intrinsic evaluation of subword tokenizers and used it to show the importance of selecting an inference method suited for a vocabulary given a task. Given its computational efficiency, we hope the benchmark can be used in LM training efforts as a fruitful first step to improve tokenization schemes, or to select inference methods on-line. Concretely, our findings suggest that greedy inference is a good choice, especially for morphologically-motivated tasks, even for tokenizers trained on other objectives. Considering its ease of implementation and faster inference, this is an encouraging finding. In the future, we plan to examine the correlation between our benchmark and various downstream tasks, as well as expand our experimentation to other languages and new algorithms. Limitations Our paper contains evaluation of models in the English language. This was done mostly in order to focus this short paper’s contribution, and to be able to control for as many possibly-confounding variables such as training data. Nevertheless, a more complete followup would have to include attempts to replicate our findings on other languages, aiming for a set as diverse as possible mostly in terms of typology and script. Our evaluation is limited to intrinsic measures. While this makes development of tokenizers easier, we acknowledge that the body of work correlating success on these measures with performance of downstream models on end-tasks is incomplete. Ethical Considerations Details for human annotation for the cognitive benchmark are documented in the source benchmark’s paper (Beinborn and Pinter, 2023), from which we took the data as-is. Acknowledgments We would like to thank Charlie Lovering, Varshini Reddy, and Haoran Zhang for comments on early drafts of this paper. We thank the anonymous reviewers for their comments on our submission. This research was supported in part by the Israel Science Foundation (grant No. 1166/23) and by 817 a Google gift intended for work on Meaningful Subword Text Tokenization. References Khuyagbaatar Batsuren, Gábor Bella, and Fausto Giunchiglia. 2021. MorphyNet: a large multilingual database of derivational and inflectional morphology. In Proceedings of the 18th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 39–48, Online. Association for Computational Linguistics. Khuyagbaatar Batsuren, Omer Goldman, Salam Khalifa, Nizar Habash, Witold Kiera´s, Gábor Bella, Brian Leonard, Garrett Nicolai, Kyle Gorman, Yustinus Ghanggo Ate, Maria Ryskina, Sabrina Mielke, Elena Budianskaya, Charbel El-Khaissi, Tiago Pimentel, Michael Gasser, William Abbott Lane, Mohit Raj, Matt Coler, Jaime Rafael Montoya Samame, Delio Siticonatzi Camaiteri, Esaú Zumaeta Rojas, Didier López Francis, Arturo Oncevay, Juan López Bautista, Gema Celeste Silva Villegas, Lucas Torroba Hennigen, Adam Ek, David Guriel, Peter Dirix, Jean-Philippe Bernardy, Andrey Scherbakov, Aziyana Bayyr-ool, Antonios Anastasopoulos, Roberto Zariquiey, Karina Sheifer, Sofya Ganieva, Hilaria Cruz, Ritván Karahóˇga, Stella Markantonatou, George Pavlidis, Matvey Plugaryov, Elena Klyachko, Ali Salehi, Candy Angulo, Jatayu Baxi, Andrew Krizhanovsky, Natalia Krizhanovskaya, Elizabeth Salesky, Clara Vania, Sardana Ivanova, Jennifer White, Rowan Hall Maudslay, Josef Valvoda, Ran Zmigrod, Paula Czarnowska, Irene Nikkarinen, Aelita Salchak, Brijesh Bhatt, Christopher Straughn, Zoey Liu, Jonathan North Washington, Yuval Pinter, Duygu Ataman, Marcin Wolinski, Totok Suhardijanto, Anna Yablonskaya, Niklas Stoehr, Hossep Dolatian, Zahroh Nuriah, Shyam Ratan, Francis M. Tyers, Edoardo M. Ponti, Grant Aiton, Aryaman Arora, Richard J. Hatcher, Ritesh Kumar, Jeremiah Young, Daria Rodionova, Anastasia Yemelina, Taras Andrushko, Igor Marchenko, Polina Mashkovtseva, Alexandra Serova, Emily Prud’hommeaux, Maria Nepomniashchaya, Fausto Giunchiglia, Eleanor Chodroff, Mans Hulden, Miikka Silfverberg, Arya D. McCarthy, David Yarowsky, Ryan Cotterell, Reut Tsarfaty, and Ekaterina Vylomova. 2022. UniMorph 4.0: Universal Morphology. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 840–855, Marseille, France. European Language Resources Association. Khuyagbaatar Batsuren, Ekaterina Vylomova, Verna Dankers, Tsetsuukhei Delgerbaatar, Omri Uzan, Yuval Pinter, and Gábor Bella. 2024. Evaluating subword tokenization: Alien subword composition and oov generalization challenge. Thomas Bauwens. 2023. BPE-knockout: Systematic review of BPE tokenisers and their flaws with application in Dutch morphology. Master’s thesis, KU Leuven. Lisa Beinborn and Yuval Pinter. 2023. Analyzing cognitive plausibility of subword tokenization. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 4478–4486, Singapore. Association for Computational Linguistics. Kaj Bostrom and Greg Durrett. 2020. Byte pair encoding is suboptimal for language model pretraining. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 4617–4624, Online. Association for Computational Linguistics. Marco Cognetta, Tatsuya Hiraoka, Naoaki Okazaki, Rico Sennrich, and Yuval Pinter. 2024a. An analysis of bpe vocabulary trimming in neural machine translation. Marco Cognetta, Vilém Zouhar, Sangwhan Moon, and Naoaki Okazaki. 2024b. Two counterexamples to tokenization and the noiseless channel. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 16897–16906, Torino, Italia. ELRA and ICCL. Mathias Creutz and Bo Krister Johan Linden. 2004. Morpheme segmentation gold standards for finnish and english. Christina L. Gagné, Thomas L. Spalding, and Daniel Schmidtke. 2019. Ladec: The large database of english compounds. Behavior Research Methods, 51:2152 – 2179. Matthias Gallé. 2019. Investigating the effectiveness of BPE: The power of shorter sequences. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1375–1381, Hong Kong, China. Association for Computational Linguistics. Edward Gow-Smith, Dylan Phelps, Harish Tayyar Madabushi, Carolina Scarton, and Aline Villavicencio. 2024. Word boundary information isn’t useful for encoder language models. Edward Gow-Smith, Harish Tayyar Madabushi, Carolina Scarton, and Aline Villavicencio. 2022. Improving tokenisation by alternative treatment of spaces. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 11430–11443, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Thamme Gowda and Jonathan May. 2020. Finding the optimal vocabulary size for neural machine translation. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 3955–3964, Online. Association for Computational Linguistics. Ximena Gutierrez-Vasques, Christian Bentz, Olga Sozinova, and Tanja Samardzic. 2021. From characters 818 to words: the turning point of BPE merges. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 3454–3468, Online. Association for Computational Linguistics. Xuanli He, Gholamreza Haffari, and Mohammad Norouzi. 2020. Dynamic programming encoding for subword segmentation in neural machine translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3042–3051, Online. Association for Computational Linguistics. Valentin Hofmann, Janet Pierrehumbert, and Hinrich Schütze. 2020. DagoBERT: Generating derivational morphology with a pretrained language model. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3848–3861, Online. Association for Computational Linguistics. Valentin Hofmann, Janet Pierrehumbert, and Hinrich Schütze. 2021. Superbizarre is not superb: Derivational morphology improves BERT’s interpretation of complex words. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3594–3608, Online. Association for Computational Linguistics. Valentin Hofmann, Hinrich Schuetze, and Janet Pierrehumbert. 2022. An embarrassingly simple method to mitigate undesirable properties of pretrained language model tokenizers. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 385–393, Dublin, Ireland. Association for Computational Linguistics. Cassandra L Jacobs and Yuval Pinter. 2022. Lost in space marking. arXiv preprint arXiv:2208.01561. Jean Kaddour. 2023. The minipile challenge for data-efficient language models. arXiv preprint arXiv:2304.08442. Stav Klein and Reut Tsarfaty. 2020. Getting the ##life out of living: How adequate are word-pieces for modelling complex morphology? In Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 204–209, Online. Association for Computational Linguistics. Taku Kudo. 2018. Subword regularization: Improving neural network translation models with multiple subword candidates. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 66–75, Melbourne, Australia. Association for Computational Linguistics. Davis Liang, Hila Gonen, Yuning Mao, Rui Hou, Naman Goyal, Marjan Ghazvininejad, Luke Zettlemoyer, and Madian Khabsa. 2023. XLM-V: Overcoming the vocabulary bottleneck in multilingual masked language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 13142–13152, Singapore. Association for Computational Linguistics. Manuel Mager, Arturo Oncevay, Elisabeth Mager, Katharina Kann, and Thang Vu. 2022. BPE vs. morphological segmentation: A case study on machine translation of four polysynthetic languages. In Findings of the Association for Computational Linguistics: ACL 2022, pages 961–971, Dublin, Ireland. Association for Computational Linguistics. Sabrina J Mielke, Zaid Alyafeai, Elizabeth Salesky, Colin Raffel, Manan Dey, Matthias Gallé, Arun Raja, Chenglei Si, Wilson Y Lee, Benoît Sagot, et al. 2021. Between words and characters: a brief history of open-vocabulary modeling and tokenization in nlp. arXiv preprint arXiv:2112.10508. Benjamin Minixhofer, Jonas Pfeiffer, and Ivan Vuli´c. 2023. CompoundPiece: Evaluating and improving decompounding performance of language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 343–359, Singapore. Association for Computational Linguistics. Anmol Nayak, Hariprasad Timmapathini, Karthikeyan Ponnalagu, and Vijendran Gopalan Venkoparao. 2020. Domain adaptation challenges of BERT in tokenization and sub-word representations of outof-vocabulary words. In Proceedings of the First Workshop on Insights from Negative Results in NLP, pages 1–5, Online. Association for Computational Linguistics. Anaelia Ovalle, Ninareh Mehrabi, Palash Goyal, Jwala Dhamala, Kai-Wei Chang, Richard Zemel, Aram Galstyan, Yuval Pinter, and Rahul Gupta. 2024. Tokenization matters: Navigating data-scarce tokenization for gender inclusive language technologies. Yuval Pinter, Cassandra L. Jacobs, and Jacob Eisenstein. 2020. Will it unblend? In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1525–1535, Online. Association for Computational Linguistics. Yuval Pinter, Marc Marone, and Jacob Eisenstein. 2019. Character eyes: Seeing language through characterlevel taggers. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 95–102, Florence, Italy. Association for Computational Linguistics. Ivan Provilkov, Dmitrii Emelianenko, and Elena Voita. 2020. BPE-dropout: Simple and effective subword regularization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1882–1892, Online. Association for Computational Linguistics. 819 Jonne Saleva and Constantine Lignos. 2023. What changes when you randomly choose BPE merge operations? not much. In Proceedings of the Fourth Workshop on Insights from Negative Results in NLP, pages 59–66, Dubrovnik, Croatia. Association for Computational Linguistics. Claudia H. Sánchez-Gutiérrez, Hugo Mailhot, S. Hélène Deacon, and Maximiliano A. Wilson. 2018. Morpholex: A derivational morphological database for 70,000 english words. Behavior Research Methods, 50:1568–1580. Timo Schick and Hinrich Schütze. 2019. Rare words: A major problem for contextualized embeddings and how to fix it by attentive mimicking. Craig W. Schmidt, Varshini Reddy, Haoran Zhang, Alec Alameddine, Omri Uzan, Yuval Pinter, and Chris Tanner. 2024. Tokenization is more than compression. Mike Schuster and Kaisuke Nakajima. 2012. Japanese and korean voice search. In 2012 IEEE international conference on acoustics, speech and signal processing (ICASSP), pages 5149–5152. IEEE. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715–1725, Berlin, Germany. Association for Computational Linguistics. Yonghui Wu, Mike Schuster, Z. Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason R. Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Gregory S. Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. ArXiv, abs/1609.08144. Shaked Yehezkel and Yuval Pinter. 2023. Incorporating context into subword vocabularies. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 623–635, Dubrovnik, Croatia. Association for Computational Linguistics. Vilém Zouhar, Clara Meister, Juan Gastaldi, Li Du, Mrinmaya Sachan, and Ryan Cotterell. 2023. Tokenization and the noiseless channel. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5184–5207, Toronto, Canada. Association for Computational Linguistics. A Results on Different Vocabulary Sizes Table 4 presents benchmark results on 32K-sized and 49K-sized vocabularies. B Detailed Results Table 5 breaks down the results (for 40K) on individual morphological datasets composing our benchmark. Table 6 Provides the same for individual cognitive measures. C Inter-Metric Correlations Table 7 presents the Pearson correlation coefficients between the various intrinsic metrics used in the benchmark. These correlations are calculated based on the aggregated results across all vocabulary sizes. 820 Vocab Inference Morphological Cognitive Rényi Tokens Decoding method alignment plausibility efficiency per word diff BPE-32K longest prefix .8727 .3122 .4600 1.4511 .0581 longest suffix .6496 .3018 .4602 1.4530 .0469 longest token .8883 .3152 .4592 1.4498 .0558 least tokens .7607 .3174 .4595 1.4469 .0426 det. merges .6409 .3201 .4603 1.4551 — dropout merge .6149 .2795 .4656 1.6041 .1316 WordPiece-32K longest prefix .7819 .3185 .4630 1.4689 — longest suffix .5084 .3089 .4626 1.4698 .0744 longest token .7764 .3212 .4622 1.4667 .0243 least tokens .7394 .3185 .4508 1.4565 .0769 UnigramLM-32K longest prefix .9278 .2855 .3574 1.7803 .1171 longest suffix .7610 .2679 .2961 1.7838 .0516 longest token .8926 .2930 .3103 1.7534 .0395 least tokens .9077 .2937 .3028 1.7418 .0303 likelihood .9206 .2931 .2985 1.7501 — SaGe-32K longest prefix .9613 .2610 .3454 1.9502 — longest suffix .7449 .2473 .2914 1.9736 .1653 longest token .9348 .2685 .3113 1.9319 .0822 least tokens .9212 .2691 .3035 1.9084 .1247 likelihood .9579 .2679 .3026 1.9246 .1098 BPE-49K longest prefix .8440 .3371 .4391 1.4104 .0444 longest suffix .6438 .3279 .4390 1.4112 .0379 longest token .8637 .3404 .4384 1.4094 .0430 least tokens .7464 .3421 .4385 1.4072 .0351 det. merges .6208 .3461 .4390 1.4137 — dropout merge .5967 .2996 .4446 1.5610 .1310 WordPiece-49K longest prefix .7600 .3398 .4413 1.4245 — longest suffix .5133 .3309 .4407 1.4247 .0589 longest token .7598 .3421 .4406 1.4228 .0194 least tokens .7261 .3401 .4319 1.4145 .0615 UnigramLM-49K longest prefix .9157 .2818 .3467 1.7432 .1190 longest suffix .7449 .2669 .2849 1.7486 .0516 longest token .8750 .2915 .2994 1.7245 .0416 least tokens .8908 .2926 .2924 1.7098 .0345 likelihood .9095 .2911 .2871 1.7201 — SaGe-49K longest prefix .9606 .2566 .3361 1.9414 — longest suffix .7355 .2466 .2783 1.9562 .1735 longest token .9200 .2662 .2975 1.9192 .0912 least tokens .9053 .2662 .2893 1.8947 .1353 likelihood .9455 .2651 .2887 1.9111 .1194 Table 4: Aggregated results on 32K and 49K vocabularies. 821 Vocab Inference Ladec MorphoMorphyDagoUniUnBlend CompoundLex Net Bert Morph Piece BPE longest prefix .9210 .8091 .8511 .8013 .9956 .7404 .8904 longest suffix .9497 .6222 .6524 .7116 .0316 .6095 .9502 longest token .9147 .8125 .8953 .8618 .9705 .7711 .8905 least tokens .9775 .7401 .8303 .8539 .2573 .6489 .9731 det. merges .8160 .6781 .6132 .6195 .3233 .6097 .7568 dropout merge .7666 .6557 .5871 .5953 .3128 .6213 .7178 WordPiece longest prefix .9333 .7625 .9114 .8659 .9963 .5569 .9153 longest suffix .9447 .6005 .6289 .6844 .1059 .4838 .9535 longest token .9275 .7568 .9124 .8765 .9666 .5749 .9112 least tokens .9706 .7132 .8253 .8032 .2670 .5897 .9704 UnigramLM longest prefix .9551 .8800 .9291 .9087 .9973 .8553 .9299 longest suffix .9248 .6387 .8206 .8407 .2777 .8076 .9536 longest token .8855 .7534 .9313 .9378 .9135 .8571 .9130 least tokens .9660 .8015 .9511 .9593 .7218 .9073 .9801 likelihood .9341 .7903 .9645 .9782 .8423 .9205 .9743 SaGe longest prefix .9734 .9422 .9673 .9600 .9973 .9213 .9626 longest suffix .9519 .5996 .7819 .8091 .2403 .8216 .9549 longest token .9420 .8390 .9365 .9418 .9711 .8889 .9457 least tokens .9856 .8394 .9533 .9632 .7269 .9318 .9877 likelihood .9709 .8813 .9809 .9879 .9014 .9492 .9890 Table 5: Results on individual morphological resources. Vocab Inference Words-RT Words-ACC nonwords-RT nonwords-ACC BPE longest prefix −.3136 .4035 .4111 −.1784 longest suffix −.3102 .3890 .3987 −.1699 longest token −.3164 .4086 .4130 −.1828 least tokens −.3146 .4083 .4226 −.1828 det. merges −.3285 .4138 .4163 −.1835 dropout merge −.2562 .3505 .3908 −.1726 WordPiece longest prefix −.3198 .4029 .4119 −.1882 longest suffix −.3132 .3863 .4028 −.1770 longest token −.3226 .4067 .4134 −.1902 least tokens −.3146 .4036 .4201 −.1842 UnigramLM longest prefix −.2292 .3391 .3920 −.1827 longest suffix −.2308 .3235 .3645 −.1572 longest token −.2493 .3590 .3904 −.1804 least tokens −.2394 .3582 .3978 −.1860 likelihood −.2424 .3577 .3926 −.1822 SaGe longest prefix −.1924 .2896 .3752 −.1754 longest suffix −.1895 .2801 .3602 −.1585 longest token −.2079 .3047 .3790 −.1767 least tokens −.1978 .3034 .3864 −.1821 likelihood −.2035 .3043 .3797 −.1780 Table 6: A breakdown of cognitive correlation results across vocabularies and inference methods. Morphological Cognitive Rényi Tokens alignment plausibility efficiency per word Morphological alignment 1 −.5009 −.4799 .5726 Cognitive plausibility — 1 .6470 −.9588 Rényi efficiency — — 1 −.6400 Tokens per word — — — 1 Table 7: Correlations between the different intrinsic metrics. 822
2024_short
73
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 823–841 August 11-16, 2024 ©2024 Association for Computational Linguistics What Do Dialect Speakers Want? A Survey of Attitudes Towards Language Technology for German Dialects Verena BlaschkeÆ U Christoph Purschke× Hinrich SchützeÆ U Barbara PlankÆ☼ U U Center for Information and Language Processing (CIS), LMU Munich, Germany Æ Munich Center for Machine Learning (MCML), Munich, Germany × Department of Humanities, University of Luxembourg, Luxembourg ☼Department of Computer Science, IT University of Copenhagen, Denmark {verena.blaschke, b.plank}@lmu.de Abstract Natural language processing (NLP) has largely focused on modelling standardized languages. More recently, attention has increasingly shifted to local, non-standardized languages and dialects. However, the relevant speaker populations’ needs and wishes with respect to NLP tools are largely unknown. In this paper, we focus on dialects and regional languages related to German – a group of varieties that is heterogeneous in terms of prestige and standardization. We survey speakers of these varieties (N=327) and present their opinions on hypothetical language technologies for their dialects. Although attitudes vary among subgroups of our respondents, we find that respondents are especially in favour of potential NLP tools that work with dialectal input (especially audio input) such as virtual assistants, and less so for applications that produce dialectal output such as machine translation or spellcheckers. 1 Introduction Most natural language processing (NLP) research focuses on languages with many speakers, high degrees of standardization and large amounts of available data (Joshi et al., 2020). Only recently, more NLP projects have started to include local, non-standardized languages and dialects. However, different speakers and cultures have different needs. As recently echoed by multiple researchers, the creation of language technologies (LTs) should take into account what the relevant speaker community finds useful (Bird, 2020, 2022; Liu et al., 2022; Mukhija et al., 2021), and communities can differ from one another in that regard (Lent et al., 2022). In this work, we focus on dialects and regional languages1 closely related to German (for the sake of simplicity, we use ‘dialects’ to refer to these varieties in this paper). With dialect competence generally being in decline in the German-speaking area, 1Our survey also includes responses by speakers of Low German, which is officially recognized as a regional language. Figure 1: Countries and German states in which the respondents’ dialects are spoken, with the number of respective respondents, and the overall age distribution. today, dialect speakers usually also speak Standard German, while dialects often are replaced by regiolects – intermediate varieties between standard and dialect (Kehrein, 2019). Speaker attitudes towards dialects vary greatly (Gärtig et al., 2010, pp. 155–167). Although these dialects are predominantly spoken and only few of them have traditional orthographies, many of them are also used in written, digital contexts (Androutsopoulos, 2003). Accordingly, some NLP datasets based (primarily) on such digital data exist, and a small number is also annotated for NLP tasks (Blaschke et al., 2023). Several recent publications feature LTs for German dialects, such as machine translation (Haddow et al., 2013; Honnet et al., 2018; Lambrecht et al., 2022; Aepli et al., 2023a; Her and Kruschwitz, 2024), speech-to-text (Herms et al., 2016; Nigmatulina et al., 2020; Gerlach et al., 2022) and text-to-speech systems (Gutscher et al., 2023), and slot and intent detection for conversational assistants (van der Goot et al., 2021; Aepli et al., 2023b; Winkler et al., 2024; Abboud and Oz, 2024). To investigate what speaker communities are interested in, we survey dialect speakers from different German-speaking areas (Figure 1). We gather 823 a snapshot of their current attitudes towards LTs to answer the following questions: Q1) Which dialect technologies do respondents find especially useful (§4.2)? Q2) Does this depend on whether the in- or output is dialectal, and on whether the LT works with speech or text data (§4.3)? Q3) How does this reflect relevant sociolinguistic factors (§4.4)? 2 Related Work The closest survey to ours on investigating attitudes of speakers of non-standard language varieties towards LTs is by Lent et al. (2022). They conducted a survey on the actual and desired LT use by speakers of different creoles (N=37). They find that the needs vary from speaker community to speaker community, and that speakers who are also highly proficient in the local high-prestige language are less interested in creole LTs. Of the technologies included in the survey, speech-related technologies (transcription and synthesis) are the most popular; machine translation (MT) and question answering software are also desired by multiple communities, while spellcheckers are controversial. Soria et al. (2018) surveyed speakers of four regional European languages2 about whether and why they use (or do not use) their languages in digital contexts. When asked about the desirability of currently unavailable spellcheckers and MT systems, more respondents judged both as desirable than not, although the exact proportions vary across communities. Millour (2019; 2020, pp. 230, 239) found similar results in surveys of Mauritian Creole (N=144) and Alsatian speakers (N=1,224). Conversely, Way et al. (2022) investigate actual LT use by speakers of different European national languages (91–922 respondents per country). The most commonly used LTs are MT, search engines and spell- or grammar checkers. When respondents do not use specific LTs, this can simply be due to the absence of such technologies for certain languages, but also due to a lack of interest in specific language–LT combinations. Recently, several surveys have also investigated speaker community perspectives regarding LTs for many different indigenous language communities (Mager et al., 2023; Cooper et al., 2024; Dolinska et al., 2024; Tonja et al., 2024). However, these surveys focus on languages with very different sociolinguistic contexts than the ones in our survey and 2Karelian (N=156, Salonen, 2017), Breton (N=202, Hicks, 2017), Basque (N=427, Gurrutxaga Hernaiz and Ceberio Berger, 2017), Sardinian (N=516, Russo and Soria, 2017). that are unrelated to their respective local highresource languages. 3 Questionnaire Our questionnaire is aimed at speakers of German dialects and related regional languages and consists of two main parts: We ask our participants about their dialect, and we ask about their opinions on hypothetical LTs for their dialect. Several of the questions regarding dialect use are inspired by Soria et al. (2018) and Millour (2020), and we choose a similar selection of LTs as Way et al. (2022) (§4.2). For each technology, we provide a brief definition to make the survey accessible to a broad audience (e.g., ‘Speech-to-text systems transcribe spoken language. They are for instance used for automatically generating subtitles or in the context of dictation software.’). We then ask participants to rate on a 5-point Likert scale how useful they would find such a tool for their dialect. We allow respondents to elaborate on their answers in comment fields. The full questionnaire is in Appendix §A. The questionnaire was written in German, and was estimated to take between 10–15 minutes for completion.3 It was online for three weeks in September and October 2023 and got disseminated via word of mouth, social media, mailing lists and by contacting dialect and heritage societies. Our results are hence based on a convenience sample. 4 Results We reached 441 people, 327 of whom are selfreported dialect speakers and finished the entire questionnaire – their responses are presented in the following. Detailed answer distributions are in Appendix §A; correlations are in §B. 4.1 Dialect Background and Attitudes Most of our respondents answer that they have a very good command of their dialect (68%) and acquired it as a mother tongue (71%). Figure 1 shows where the respondents’ dialects are spoken (and their age distribution): mostly in Germany (72%), followed by Switzerland (14%) and Austria (6%).4 Nearly a quarter (24%) each are in 380% of our respondents took <15 min to fill out the entire questionnaire, and 60% even less than 10 min. 4The other varieties are spoken in areas with minority speaker communities: Italy (Bavarian), France (Alemannic), the Netherlands (Low German), and Romania. The geographic distribution of our respondents is not representative of the overall dialect speaker population. 824 their twenties and thirties, almost all others are older. When rating how traditional their dialect is on a scale from 1 (traditional dialect that people from other regions have trouble understanding) to 5 (regionally marked German easily understood by outsiders), the largest group of respondents (35%) indicated a 2 (µ=2.6, σ=1.1). Just over half of our respondents (52%) speak their dialect on a daily basis, and 43% indicate that they would like to use their dialect in all areas of life. Most respondents (70%) value the diversity of their dialect. Nearly two thirds (65%) are opposed to having a standardized orthography for their dialect. Just over half of the respondents (53%) say that their dialect is only spoken and not well-suited for written communication – nevertheless, two thirds (66%) also write their dialect, even if rarely. Many (63%) find it easy to read dialectal texts written by others. Written dialect is commonly used for communicating with others – the most common writing scenarios are text messages (57%, multiple responses possible), followed by letters/emails (26%), social media posts and comments (19% each) – but also for notes to oneself including diary entries (19%). About a third (35%) indicate that they are actively engaged in dialect preservation pursuits (multiple responses possible): 13% as members of dialect preservation societies, 4% as teachers, and 22% in other ways. Write-in comments by the last group point out other language-related professions, but also include speaking the dialect in public or with children as a means of active dialect preservation.5 We compare the opinions of respondents with and without such dialect engagement in §4.4. 14% of our respondents are familiar with at least one LT that already caters to their dialect. Just over half of the respondents (54%) indicate that their dialect being represented by more LTs would make it more attractive to younger generations, and a smaller group (31%) says they would use their dialect more often given suitable LTs. 5The write-in answers contain 13 mentions of speaking the dialect with family members (especially children and grandchildren), 18 mentions of simply speaking the dialect (in public), 14 mentions of carrying out dialect-related research (as a job or a hobby), and 10 mentions of using the dialect in the context of literature or music, with slight overlap between these groups. None of these subgroups are concentrated in any specific area, but instead include respondents from areas where dialects and regional languages have very different statuses (cf. §4.4): Low German speakers as well as other German respondents, Swiss respondents, respondents from countries where German is a minority language, and so on. 0 20 40 60 80 100 % Spellcheckers Search engines MT oth→dial MT deu→dial MT dial→oth MT dial→deu TTS STT dial STT deu Chatbot output Assistant output Chatbot input Assistant input 10 17 9 15 14 19 17 27 25 12 19 23 33 15 26 17 22 22 33 30 31 36 22 29 30 38 13 13 12 10 13 12 14 9 11 18 15 16 8 3 2 2 2 2 1 4 2 2 2 1 2 1 23 19 27 21 21 16 17 16 12 22 19 14 11 36 23 33 30 28 19 18 16 14 24 17 16 9 Useless Cannot judge Rather useful Rather useless Neither/nor Useful 100 80 60 40 20 0 % Figure 2: Opinions on potential language technologies for dialects. STT=speech-to-text, TTS=text-tospeech, dial=dialect, deu=German, oth=other languages, MT=machine translation, cannot judge=skip question. 4.2 Which dialect LTs are deemed useful? Figure 2 shows our respondents’ opinions on LTs (Q1), and Appendix §C presents the average scores per LT when responses are mapped to a numerical scale. While there are diverging opinions on every LT – there is no single technology that (nearly) all respondents consider useful or useless for their dialect – some trends emerge, as we discuss next.6 Overall, the responses are generally correlated with each other: respondents who think positively/negatively of one technology tend to think similarly about others. Nevertheless, some LTs are overall more popular, and some less so: Virtual assistants and chatbots The most clearly favoured LT by dialect speakers in our survey are virtual assistants (such as Siri or Alexa) that can respond to dialectal input (71% in favour, 20% against). Chatbots that can handle dialectal input are less popular, but still deemed useful by a slight majority (52%). Systems that could output dialectal responses are less popular: 48% would 6Of the technologies we included, MT, search engines and spell checkers are the most used LTs in the EU (Way et al., 2022, p. 26). We assume that the tools that people use a lot are also tools they generally deem useful, yet those tools are all ranked relatively low in our results – suggesting that our results reveal attitudes on dialect LTs rather than LTs in general. 825 find virtual assistants that answer in dialect useful, and 34% think so about chatbots. Speech-to-text and text-to-speech When asked about speech-to-text (STT) software, a majority (61%) is in favour of systems that transcribe spoken dialect into written Standard German, and a slightly smaller majority (58%) is in favour of written dialectal output. When it comes to text-to-speech (TTS) systems that synthesize dialect text into a spoken form, the respondents are even more split, with 47% in favour and 35% against. Machine translation We ask for opinions on four different configurations regarding automatic translation of written texts: each possible combination for translation into vs. out of the dialect and from/into Standard German vs. a foreign language. All options are to some degree controversial among the respondents, with translation from the dialect into Standard German being the most popular (52% in favour) and from the dialect into a foreign language the least popular (25% in favour). Search engines Search engines that could deal with dialectal input are controversial, with 43% each in favour of and against this LT, although the negative group holds stronger opinions. Some write-in comments question whether (monolingual) information retrieval would produce useful results or mention finding it easier to write in Standard German rather than in a dialect, but others voice a desire to be able to find results for queries including dialectal terms with no direct German equivalent. Spellcheckers Most respondents (59%) are opposed to spell- or grammar checkers for their dialect, although a quarter (25%) is in favour. Several respondents mention opposition to spellcheckers since they want their dialectal writing to exactly reflect the pronunciation and word choices of their local dialect and would be bothered if a spellchecker changed them to an arbitrary standardized version of the dialect. 4.3 Are there differences for dialect input vs. output and text vs. speech? As seen in the previous section, there is a general tendency to prefer versions of LTs that process dialectal input rather than produce dialectal output (Q2). Several write-in comments voice worries about dialectal output not modelling their dialect accurately enough. Additionally, technologies dealing with spoken language tend to be rated more positively than those focusing on text only. Correlation with attitudes towards orthography Being in favour of a standardized dialect orthography is positively, albeit not very strongly, correlated with being in favour of any technology involving a written version of the dialect and/or (written or spoken) dialectal output (Spearman’s ρ values between 0.14 and 0.47 per LT with p-values <0.001). 4.4 Do attitudes reflect sociolinguistic factors? To address Q3 and the heterogeneity of our respondent group, we compare answers between larger subgroups. We summarize the results of t-tests with p-values <0.05. Appendix §D provides more details, together with two additional comparisons that only have small effect sizes (speaker age and dialect traditionality). Language activists Since language activists might have overly enthusiastic attitudes compared to the speaker population at large (Soria et al., 2018), we compare those who report involvement in dialect preservation (‘activists’, N=115) to those who do not (N=212). Activists are generally more in favour of LTs for dialects, with statistically significant differences for (any kind of) machine translation, TTS software, spellcheckers, and search engines, as well as for written dialect output options for STT, chatbots and virtual assistants. Removing the activists’ responses from our analysis only barely changes the order of preferred LTs (§C). Region Additionally, we compare three large regional subgroups with different sociolinguistic realities. In Germany and Austria, traditional dialects have been partially replaced by more standardlike regiolects, while dialects have high prestige in Switzerland where Standard German is often reserved for writing (Kehrein, 2019; Ender and Kaiser, 2009). Low German, traditionally spoken in parts of Northern Germany and the Eastern Netherlands, is officially recognized as a language and is more distantly related to Standard German than the other varieties our participants speak. Its speaker numbers are in decline, but many Northern Germans think Low German should receive more support in, e.g., public schools (Adler et al., 2016). We compare the opinions of Swiss (N=46) and Low German (N=58) respondents to German and Austrian non-Low-German speakers (N=191).7 7We identify the Low German respondents based on the 826 Our Low German respondents are more in favour of a standardized orthography and of spellcheckers than our other German and Austrian respondents, the Swiss respondents less so. This is unsurprising in that several orthographies have been proposed for Low German, whereas (typically spoken) dialects and (typically written) Standard German exist in a diglossic state in Switzerland. Nevertheless, both groups are more in favour of STT software with dialectal output. The Low German respondents are more in favour of chatbots with dialectal answers, TTS, (any kind of) MT and search engines. Swiss Germans rate virtual assistants that can handle dialectal input as more desirable (87% in favour), and are more in favour of STT software with Standard German output. 5 Discussion and Conclusion We surveyed speakers of dialect varieties on their attitudes towards LTs. Generally, the survey participants prefer LTs working with dialect input rather than output. They also tend to prefer tools that process speech over those for text (Q2). This is consistent with Chrupała’s (2023) argument that NLP should focus more on spoken language to better represent actual language use. It also reflects the complex, often conflicting attitudes speakers of multiple varieties have towards competing linguistic and social norms. Consequently, the most popular potential dialect LTs (Q1) in our survey process spoken dialectal input: virtual assistants with dialect input and speech-to-text systems. However, like Lent et al. (2022), we find that different speaker communities vary in their attitudes towards LTs (Q3). For instance, opinions on the standardization of a dialect are a relevant factor regarding the desirability of written LTs. Nevertheless, the acceptance and rejection of LTs is related to individual factors beyond just attitudes, e.g., experience with and trust in digital technology. We hope that our study inspires other NLP researchers to actively consider the wants and needs of the relevant speaker communities. Based on the results of this study, we also encourage the dialect NLP community to pursue more work on spoken language processing. dialect they indicated speaking (§A2), combined with region information for respondents who supplied ambiguous dialect names. For the (other) German, Austrian, and Swiss respondents, we used region information. Ethical Considerations We only collected responses from participants who consented to having their data saved and analyzed for research purposes. We did not ask for personally identifying information. We store responses on a local server and only share results based on aggregate analyses. Appendix §A contains the full questionnaire including the introduction where we describe the purpose of the study and explain what data we collect and how we use the data. Participation was completely anonymous, voluntary and with no external reward. We do not see any particular risks associated with this work. Limitations Our results are based on a convenience sample; neither the geographic or age distribution are representative of the population at large (dialect-speaking or not). Language activists are over-represented (hence our additional analysis in §4.4 and Appendices §C and §D), and participating in the survey may have been especially of interest to people who feel (in one way or another) strongly about the topic of dialects and technology. Even so, our respondents are not a monolith in their opinions and we can see meaningful differences between the relative popularity of different technologies. We aimed to keep participation effort low and therefore limited the number of questions we included. We considered asking “Would you use X if it existed?” in addition to “Would you find X useful?” to explicitly disentangle the participants’ own needs from what are possibly the perceived needs of the community. We decided against this in order to keep the questionnaire as short as possible and because we were unsure how accurate such assessment would be. The scale of our answer possibilities uses “not useful” as the opposite of “useful.” However, it would be interesting to instead use a scale from “harmful” to “useful” in future surveys, in order to get a better impression of whether respondents who deem an LT useless in our version of the survey find it actively harmful or merely uninteresting. To minimize the total time needed to fill out the questionnaire and to guarantee the privacy of the respondents after asking respondents about what specific dialect they speak (used later to identify the Low German speakers), we intentionally kept additional demographic questions at a minimum and did not ask about education, income, gender, 827 or similar variables. As this survey is based on self-reporting, we expect discrepancies between reported and actual opinions and behaviour. Since participation was anonymous and entirely voluntary with no external reward, we think it unlikely for participants to lie about their opinions. It is likely, though, that (especially younger) participants overstate their dialect competence or the traditionality of their dialect, in line with overall dialect dynamics in German (Purschke, 2011). Acknowledgements We thank everybody who participated in or shared the survey. We also thank Yves Scherrer and Frauke Kreuter for their advice, and the MaiNLP and CIS members as well as the anonymous reviewers for their feedback. This research is supported by the ERC Consolidator Grant DIALECT 101043235. We also gratefully acknowledge partial funding by the European Research Council (ERC #740516). References Khadige Abboud and Gokmen Oz. 2024. Towards equitable natural language understanding systems for dialectal cohorts: Debiasing training data. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 16487– 16499, Torino, Italia. ELRA and ICCL. Astrid Adler, Christiane Ehlers, Reinhard Goltz, Andrea Kleene, and Albrecht Plewnia. 2016. Status und Gebrauch des Niederdeutschen 2016. Erste Ergebnisse einer repräsentativen Erhebung. Technical report, Institut für Deutsche Sprache, Mannheim. Noëmi Aepli, Chantal Amrhein, Florian Schottmann, and Rico Sennrich. 2023a. A benchmark for evaluating machine translation metrics on dialects without standard orthography. In Proceedings of the Eighth Conference on Machine Translation, pages 1045– 1065, Singapore. Association for Computational Linguistics. Noëmi Aepli, Ça˘grı Çöltekin, Rob Van Der Goot, Tommi Jauhiainen, Mourhaf Kazzaz, Nikola Ljubeši´c, Kai North, Barbara Plank, Yves Scherrer, and Marcos Zampieri. 2023b. Findings of the VarDial evaluation campaign 2023. In Tenth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial 2023), pages 251–261, Dubrovnik, Croatia. Association for Computational Linguistics. Jannis Androutsopoulos. 2003. Online-Gemeinschaften und Sprachvariation. Soziolinguistische Perspektiven auf Sprache im Internet. Zeitschrift für germanistische Linguistik, 31(2). Steven Bird. 2020. Decolonising speech and language technology. In Proceedings of the 28th International Conference on Computational Linguistics, pages 3504–3519, Barcelona, Spain (Online). International Committee on Computational Linguistics. Steven Bird. 2022. Local languages, third spaces, and other high-resource scenarios. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7817–7829, Dublin, Ireland. Association for Computational Linguistics. Verena Blaschke, Hinrich Schütze, and Barbara Plank. 2023. A survey of corpora for Germanic lowresource languages and dialects. In Proceedings of the 24th Nordic Conference on Computational Linguistics (NoDaLiDa), pages 392–414, Tórshavn, Faroe Islands. University of Tartu Library. Grzegorz Chrupała. 2023. Putting natural in Natural Language Processing. In Findings of the Association for Computational Linguistics: ACL 2023, pages 7820–7827, Toronto, Canada. Association for Computational Linguistics. Ned Cooper, Courtney Heldreth, and Ben Hutchinson. 2024. ”it’s how you do things that matters”: Attending to process to better serve indigenous communities with language technologies. In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 2: Short Papers), pages 204–211, St. Julian’s, Malta. Association for Computational Linguistics. Joanna Dolinska, Shekhar Nayak, and Sumittra Suraratdecha. 2024. Akha, dara-ang, karen, khamu, Mlabri and urak lawoi’ language minorities’ subjective perception of their languages and the outlook for development of digital tools. In Proceedings of the Seventh Workshop on the Use of Computational Methods in the Study of Endangered Languages, pages 94–99, St. Julians, Malta. Association for Computational Linguistics. Andreas Ender and Irmgard Kaiser. 2009. Zum Stellenwert von Dialekt und Standard im österreichischen und Schweizer Alltag: Ergebnisse einer Umfrage. Zeitschrift für germanistische Linguistik, 37(2):266– 295. Anne-Kathrin Gärtig, Albrecht Plewnia, and Astrid Rothe. 2010. Wie Menschen in Deutschland über Sprache denken. Ergebnisse einer bundesweiten Repräsentativerhebung zu aktuellen Spracheinstellungen. Institut für Deutsche Sprache, Mannheim. Johanna Gerlach, Jonathan Mutal, and Bouillon Pierrette. 2022. Producing Standard German subtitles for Swiss German TV content. In Ninth Workshop on Speech and Language Processing for Assistive Technologies (SLPAT-2022), pages 37–43, Dublin, Ireland. Association for Computational Linguistics. 828 Antton Gurrutxaga Hernaiz and Klara Ceberio Berger. 2017. Basque – a digital language? Technical report. Reports on Digital Language Diversity in Europe. Lorentz Gutscher, Michael Pucher, and Víctor Garcia. 2023. Neural speech synthesis for Austrian dialects with standard German grapheme-to-phoneme conversion and dialect embeddings. In Proceedings of the 2nd Annual Meeting of the Special Interest Group on Under-resourced Languages (SIGUL 2023). Barry Haddow, Adolfo Hernández, Friedrich Neubarth, and Harald Trost. 2013. Corpus development for machine translation between standard and dialectal varieties. In Proceedings of the Workshop on Adaptation of Language Resources and Tools for Closely Related Languages and Language Variants, pages 7– 14, Hissar, Bulgaria. INCOMA Ltd. Shoumen, BULGARIA. Wan-Hua Her and Udo Kruschwitz. 2024. Investigating neural machine translation for low-resource languages: Using Bavarian as a case study. In Proceedings of the 3rd Annual Meeting of the Special Interest Group on Under-resourced Languages. Robert Herms, Laura Seelig, Stefanie Münch, and Maximilian Eibl. 2016. A corpus of read and spontaneous Upper Saxon German speech for ASR evaluation. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC’16), pages 4648–4651, Portorož, Slovenia. European Language Resources Association (ELRA). Davyth Hicks. 2017. Breton – a digital language? Technical report. Reports on Digital Language Diversity in Europe. Pierre-Edouard Honnet, Andrei Popescu-Belis, Claudiu Musat, and Michael Baeriswyl. 2018. Machine translation of low-resource spoken dialects: Strategies for normalizing Swiss German. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA). Pratik Joshi, Sebastin Santy, Amar Budhiraja, Kalika Bali, and Monojit Choudhury. 2020. The state and fate of linguistic diversity and inclusion in the NLP world. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6282–6293, Online. Association for Computational Linguistics. Roland Kehrein. 2019. Vertical language change in Germany: Dialects, regiolects, and standard German. In Stanley D. Brunn and Roland Kehrein, editors, Handbook of the Changing World Language Map. Springer International Publishing. Louisa Lambrecht, Felix Schneider, and Alexander Waibel. 2022. Machine translation from standard German to Alemannic dialects. In Proceedings of the 1st Annual Meeting of the ELRA/ISCA Special Interest Group on Under-Resourced Languages, pages 129–136, Marseille, France. European Language Resources Association. Heather Lent, Kelechi Ogueji, Miryam de Lhoneux, Orevaoghene Ahia, and Anders Søgaard. 2022. What a creole wants, what a creole needs. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 6439–6449, Marseille, France. European Language Resources Association. Zoey Liu, Crystal Richardson, Richard Hatcher, and Emily Prud’hommeaux. 2022. Not always about you: Prioritizing community needs when developing endangered language technology. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3933–3944, Dublin, Ireland. Association for Computational Linguistics. Manuel Mager, Elisabeth Mager, Katharina Kann, and Ngoc Thang Vu. 2023. Ethical considerations for machine translation of indigenous languages: Giving a voice to the speakers. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4871–4897, Toronto, Canada. Association for Computational Linguistics. Alice Millour. 2019. Getting to know the speakers: a survey of a non-standardized language digital use. In 9th Language & Technology Conference: Human Language Technologies as a Challenge for Computer Science and Linguistics, Pozna´n, Poland. Alice Millour. 2020. Myriadisation de ressources linguistiques pour le traitement automatique de langues non standardisées. Ph.D. thesis, Sorbonne Université. Namrata Mukhija, Monojit Choudhury, and Kalika Bali. 2021. Designing language technologies for social good: The road not taken. Computing Research Repository, 2110.07444. Iuliia Nigmatulina, Tannon Kew, and Tanja Samardzic. 2020. ASR for non-standardised languages with dialectal variation: the case of Swiss German. In Proceedings of the 7th Workshop on NLP for Similar Languages, Varieties and Dialects, pages 15–24, Barcelona, Spain (Online). International Committee on Computational Linguistics (ICCL). Christoph Purschke. 2011. Regionalsprache und Hörerurteil. Grundzüge einer perzeptiven Variationslinguistik. Steiner, Stuttgart. Irene Russo and Claudia Soria. 2017. Sardinian – a digital language? Technical report. Reports on Digital Language Diversity in Europe. Tuomo Salonen. 2017. Karelian – a digital language? Technical report. Reports on Digital Language Diversity in Europe. 829 Jürgen Erich Schmidt, Joachim Herrgen, Roland Kehrein, Alfred Lameli, and Hanna Fischer. 2020–. Regionalsprache.de (REDE III. Forschungsplattform zu den modernen Regionalsprachen des Deutschen. Edited by Robert Engsterhold, Heiko Girnth, Simon Kasper, Juliane Limper, Georg Oberdorfer, Tillmann Pistor, Anna Wola´nska. Assisted by Dennis Beitel, Milena Gropp, Maria Luisa Krapp, Vanessa Lang, Salome Lipfert, Jeffrey Pheiff, Bernd Vielsmeier. Claudia Soria, Valeria Quochi, and Irene Russo. 2018. The DLDP survey on digital use and usability of EU regional and minority languages. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA). Atnafu Lambebo Tonja, Fazlourrahman Balouchzahi, Sabur Butt, Olga Kolesnikova, Hector Ceballos, Alexander Gelbukh, and Thamar Solorio. 2024. NLP progress in Indigenous Latin American languages. Rob van der Goot, Ibrahim Sharaf, Aizhan Imankulova, Ahmet Üstün, Marija Stepanovi´c, Alan Ramponi, Siti Oryza Khairunnisa, Mamoru Komachi, and Barbara Plank. 2021. From masked language modeling to translation: Non-English auxiliary tasks improve zero-shot spoken language understanding. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2479–2497, Online. Association for Computational Linguistics. Andy Way, Georg Rehm, Jane Dunne, Jan Hajiˇc, Teresa Lynn, Maria Giagkou, Natalia Resende, Tereza Vojtˇechová, Stelios Piperidis, Andrejs Vasiljevs, Aivars Berzins, Gerhard Backfried, Marcin Skowron, Jose Manuel Gomez-Perez, Andres Garcia-Silva, Martin Kaltenböck, and Artem Revenko. 2022. Report on all external consultations and surveys. Technical report, European Language Equality. Miriam Winkler, Virginija Juozapaityte, Rob van der Goot, and Barbara Plank. 2024. Slot and intent detection resources for Bavarian and Lithuanian: Assessing translations vs natural queries to digital assistants. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 14898–14915, Torino, Italy. ELRA and ICCL. A Questionnaire In this section, we reproduce the questions and answers from our survey, in the original wording as well as in translation. Translations are in grey italics, remarks about the questionnaire are in black italics. Answer options that end with a colon (:) came with an optional text input field in the questionnaire. All questions except for the first two could be skipped without answering. Herzlich willkommen, servus, grüezi & moin! Sprachtechnologie ist momentan allgegenwärtig, ob bei Übersetzungsprogrammen, Chatbots oder anderen Anwendungen. Hauptsächlich unterstützen diese Anwendungen lediglich Standardsprachen – was nicht unbedingt dem entspricht, wie wir im Alltag Sprache verwenden. Daher möchten wir herausfinden, wie Sie als Sprecher*innen von Dialekten und Regionalsprachen möglicher Sprachtechnologie für Ihre Sprachform gegenüberstehen: welche Anwendungen halten Sie für wünschenswert bzw. unnötig? Welcome and hello [in different dialects]! Language technology is currently omnipresent – be it in the context of translation software, chatbots or other applications. Such applications primarily support standard languages – which is not necessarily how we use language in our everyday lives. Because of this we would like to find out what you as speakers of dialects and regional languages think of potential technologies for your language variety: which applications do you find desirable or useless? Das Ausfüllen des Fragebogens dauert etwa 10–15 Minuten. Wir behandeln Ihre Antworten vertraulich und veröffentlichen diese nur in anonymisierter Form und ohne dass Rückschlüsse auf Ihre Person gezogen werden können. Genauere Details: Ziel der Befragung ist es zum einen, herauszufinden, ob es Unterschiede zwischen den Arten von Sprachtechnologien gibt, die Dialektsprecher*innen tendenziell als nützlich bzw. nutzlos bewerten. Zum anderen möchten wir herausfinden, ob ein statistischer Zusammenhang zwischen diesen Antworten und dem Dialekthintergrund und -gebrauch der Befragten besteht. Die Antworten werden auf einem Server der LMU in München gespeichert. Wir speichern dabei nur Ihre Antworten und den Antwortzeitpunkt (um die typische Ausfülldauer besser einzuschätzen), nicht aber Ihre IP-Adresse. Wir geben die Daten nicht an Dritte weiter, sondern veröffentlichen lediglich Ergebnisse, die auf Aggregatdaten und statistischen Analysen beruhen. Zudem zitieren wir gegebenenfalls aus (optional gegebenen) Kommentarfeld-Antworten. 830 Kontaktmöglichkeit bei Fragen oder Kommentaren zu dieser Umfrage: [Contact data of first author]. Vielen Dank für Ihre Teilnahme! This questionnaire takes about 10–15 minutes to fill out. We treat your answers as confidential and only share them as anonymized data that do not allow drawing any inferences about your identity. More detailed information: The goal of this survey is firstly to determine whether there are differences between the types of language technologies that dialect speakers tend to find useful or useless. Additionally, we would like to find out whether there is a statistical correlation between the answers and the dialect background of the participants. We store the answers on an LMU server in Munich. This only includes storing your answers and the time of the questions are answered (to better estimate the typical response duration), but not your IP address. We do not share your data with third parties, but only share results based on aggregated data and statistical analyses. Additionally, we might cite (optional) write-in answers from comment fields. Contact person in case of questions about or comments on this study: [Contact data of first author]. Thank you very much for participating! □Ich stimme zu, dass meine Antworten wie oben beschrieben zu Forschungszwecken gespeichert und ausgewertet werden. I consent to my answers being stored and analyzed for research purposes as outlined above. The survey only progresses if this box is checked. In dieser Umfrage untersuchen wir Sprachen und Sprachformen, die sich deutlich vom Hochdeutschen unterscheiden. Damit meinen wir mit dem Deutschen verwandte Regionalsprachen sowie Dialekte, Mundarten und Platt-Varianten, die meist für eine kleine Region typisch sind und von Außenstehenden nicht ohne Weiteres verstanden werden können. Ein paar Beispiele dafür sind das Eifeler Platt, Allgäuerisch, Bairisch oder Nordfriesisch. Der Einfachheit halber verwenden wir im Folgenden „Dialekt“ als Sammelbegriff für all diese Sprachformen. In this survey, we focus on languages and language varieties that are clearly distinct from Standard German. To be precise, we are interested in regional languages related to German as well as dialects8 that usually are typical for a small region and cannot easily be understood by outsiders. Some examples are Eifelplatt, Allgäu dialects, Bavarian and North Frisian. For the sake of simplicity, we will use “dialect” as umbrella term for all of these language varieties in the following. This introduction is partially based on the one from the REDE project surveys (Schmidt et al., 2020–). 1. Können Sie einen deutschen Dialekt sprechen? Can you speak a German dialect? Ja, sehr sicher Yes, very well 223 Ja, gut Yes, well 71 Ein wenig A bit 33 Nein No 16 The 16 respondents who answered ‘no’ are excluded from the analysis. The survey automatically ended for them, showing the message: “Alle weiteren Fragen richten sich nur an SprecherInnen eines deutschen Dialekts bzw. einer mit dem Deutschen nahe verwandten Regionalsprache. Vielen Dank für Ihre Teilnahme!” “All further questions are only for speakers of a German dialect or a regional language closely related to German. Thank you for participating!” 2. Um welchen Dialekt handelt es sich? Which dialect specifically? 327 write-in answers. 3. Wann haben Sie diesen Dialekt gelernt? When did you learn this dialect? Als Muttersprache As mother tongue 231 Kindheit/Jugend Childhood/youth 84 Später Later 12 4. In welchem Land befindet sich der Ort, an dem Ihr Dialekt gesprochen wird (z.B. Ihr Heimatort)? In which country is the location where your dialect is spoken (e.g., your hometown)? See Figure 1. 8In the German version, we include different terms that all translate to “dialect” but are used in different regions. 831 5. In welchem Bundesland befindet sich dieser Ort? In which German state is this location? Only asked if the previous answer is ‘Germany’. See Figure 1. 6. Wie sehr entspricht Ihr Dialekt dem traditionellen Dialekt des Ortes? How much does your dialect resemble the traditional dialect of this location? 1 – Mein Dialekt ist sehr traditionell...* 57 2 114 3 93 4 38 5 – Mein Dialekt ist eher regional gefärbtes...** 25 *1 – Mein Dialekt ist sehr traditionell und für Außenstehende aus anderen Regionen sehr schwer zu verstehen. 1 – My dialect is very traditional and very hard to understand for outsiders from other regions. **5 – Mein Dialekt ist eher regional gefärbtes Deutsch, das auch von Außenstehenden recht einfach verstanden wird. 5 – My dialect is more like regionally marked German that is relatively easily understood by outsiders. 7. Wie häufig sprechen Sie Ihren Dialekt? How often do you speak your dialect? The answer options are based on those in the surveys summarized by Soria et al. (2018). Täglich Daily 170 Mehrmals pro Woche Multiple times per week 48 Min. einmal pro Woche At least once a week 32 Min. einmal pro Monat At least once a month 29 Seltener More rarely 46 Nie Never 2 8. Schreiben Sie manchmal Ihren Dialekt? Do you ever write your dialect? This question and the next one are modelled after questions by Millour (2020, pp. 228, 237–238). Ja (egal ob häufig o. selten) Yes (whether often or rarely) 217 Nein, ich weiß nicht wie No, I don’t know how 14 Nein, ich habe dazu keine Gelegenheit* 26 Nein, mein Dialekt ist eine gesprochene Sprachform...** 62 Nein, aus anderen Gründen: No, for other reasons: 8 *No, I don’t have any opportunity for this **Nein, mein Dialekt ist eine gesprochene Sprachform und ich möchte ihn nicht schreiben No, my dialect is a spoken form of language and I don’t want to write it 9. Was schreiben Sie in Ihrem Dialekt? (Mehrfachantworten möglich) What do you write in your dialect? (Multiple answers possible) Only asked if previous is ‘yes’. 217 participants responded: Nachrichten in Chatprogrammen...* 187 Briefe, Emails Letters, emails 86 Einträge auf sozialen Medien Social media posts 63 Kommentare auf soz. Med. Social media comments 61 Sachtexte, z.B. als Blogposts...** 19 Prosa, Poesie Prose, poetry 43 Witze Jokes 24 Rezepte Recipes 6 Notizen an mich selbst, Tagebucheinträge*** 63 Andere/weitere Sachen: Other/additional things: 22 *Nachrichten in Chatprogrammen, Messengern (wie WhatsApp), SMS Texts in messaging apps (like WhatsApp), text messages **Sachtexte, z.B. als Blogposts oder auf Wikipedia Non-fiction texts, e.g., blog posts or Wikipedia articles ***Notes to myself, diary entries 10. Setzen Sie sich aktiv für den Erhalt Ihres Dialekts ein? (Mehrfachantworten möglich) Are you actively involved in preserving your dialect? (Multiple answers possible) This question is based on questions in the surveys by Soria et al. (2018) and Millour (2020, pp. 227, 235). 323 respondents answered: Ja, in einem Verein zur Mundartpflege* 43 Ja, als Lehrer*in Yes, as a teacher 14 Ja, anderweitig: Yes, in another way: 73 Nein No: 212 *Yes, in a dialect preservation society 11. Wie alt sind Sie? How old are you? 19 Jahre oder jünger 19 years or younger 7 20–29 Jahre 20–29 years 79 30–39 Jahre 30–39 years 78 40–49 Jahre 40–49 years 54 50–59 Jahre 50–59 years 39 60–69 Jahre 60–69 years 38 70–79 Jahre 70–79 years 25 80 Jahre oder älter 80 years or older 7 832 12. Weitere Kommentare zu Ihrem Dialekt oder zu den vorherigen Fragen: (Optional) Additional comments on your dialect or the preceding questions: (Optional) 61 write-in answers. In diesem Abschnitt fragen wir Sie zu Ihrer Meinung zu verschiedenen dialektbezogenen Themen. Dabei gibt es keine richtigen/falschen oder erwünschten/unerwünschten Antworten, sondern wir sind an Ihrer persönlichen Meinung interessiert. In this section we ask you about your opinion on different dialect-related topics. There are no right/wrong or desired/undesired answers, we are simply interested in your personal opinion. 13. Stimmen Sie den folgenden Aussagen zu? Do you agree with the following statements? Statements presented in a randomized order: • Die Vielfalt der unterschiedlichen Ausprägungen meines Dialekts ist eine Stärke. The diversity of the different varieties of my dialect is a strength. • Mein Dialekt ist in erster Linie eine gesprochene Sprachform und nicht für die schriftliche Kommunikation geeignet. My dialect is primarily a spoken form of language and not suited for written communication. This question is based on an answer option in the survey by Millour (2020, pp. 228, 237) (see also question 8 in this appendix). • Ich möchte meinen Dialekt in allen Lebensbereichen verwenden. I’d like to be able to use my dialect in any aspect of life. This question is based on a question by Soria et al. (2018) and Millour (2020, pp. 229, 239). • Wenn ich einen Text lese, den jemand anderes in meinem Dialekt verfasst hat, fällt es mir schwer, ihn zu verstehen. When I read text that someone else wrote in my dialect, I have trouble understanding it. • Es sollte eine standardisierte Rechtschreibung für meinen Dialekt geben. There should be a standardized orthography for my dialect. Answer options: • Ja, auf jeden Fall Yes, absolutely • Eher ja Rather yes • Weder noch Neither/nor • Eher nein Rather no • Nein, gar nicht Absolutely not • Keine Angabe Prefer not to say The answer distributions (in %) are as follows: 0 20 40 60 80 100 % Orthography Reading=hard All aspects Spoken only Diversity 9 5 23 21 43 12 20 21 31 28 12 12 18 15 15 2 1 1 1 6 30 33 21 17 6 35 30 16 14 3 Disagree N/A A. somewhat Dis. somewhat Neither/nor Agree 100 80 60 40 20 0 % 14. Weitere Kommentare zu diesem Abschnitt: (Optional) Additional comments on this section: (Optional) 48 write-in answers. In diesem Abschnitt fragen wir Sie zu Ihrer Meinung zu verschiedenen Sprachtechnologien. Dabei gibt es keine richtigen/falschen oder erwünschten/unerwünschten Antworten, sondern wir sind an Ihrer persönlichen Meinung interessiert. In this section we ask you about your opinion on different language technologies. There are no right/wrong or desired/undesired answers, we are simply interested in your personal opinion. Übersetzungsprogramme erstellen eine automatische Übersetzung von Text aus einer Sprache in eine andere Sprache. Beispiele dafür sind DeepL oder Google Translate. Machine translation software automatically translate text from one language into another. Examples are DeepL or Google Translate. 833 15. Stimmen Sie den folgenden Aussagen zu? Es sollte Übersetzungsprogramme geben, ... Do you agree with the following statements? There should be translation software... • ...die hochdeutsche Texte in meinen Dialekt übersetzen. ...that translates Standard German texts into my dialect. • ...die Texte aus anderen Sprachen in meinen Dialekt übersetzen. ...that translates texts from other languages into my dialect. • ...die Texte aus meinem Dialekt ins Hochdeutsche übersetzen. ...that translates texts from my dialect into Standard German. • ...die Texte aus meinem Dialekt in andere Sprachen übersetzen. ...that translates texts from my dialect into other languages. Answer options: • Ja, unbedingt Yes, absolutely • Eher ja Rather yes • Weder noch Neither/nor • Eher nein Rather no • Nein, das halte ich nicht für sinnvoll No, I don’t think this is useful • Das kann ich nicht bewerten I cannot judge this See Figure 2 for answer distributions. 16. Welcher Aussage stimmen Sie mehr zu? Wenn ich einen Text in meinen Dialekt übersetzen lasse, ... With which statement do you agree more? When a text is translated into my dialect, ... 1 – ...soll die Übersetzung sprachlich meiner...* 52 2 61 3 67 4 32 5 – ...ist es mir egal, welcher (geschriebenen)...** 43 Ich möchte keinen geschriebenen Dialekt-Output.*** 72 *1 – ...soll die Übersetzung sprachlich meiner (geschriebenen) Version des Dialekts voll und ganz entsprechen. 1 – ...the translation should fully correspond to my own (written) version of the dialect. **5 – ...ist es mir egal, welcher (geschriebenen) Form meines Dialekts die Übersetzung sprachlich entspricht. 5 – ...I do not care which (written) version of my dialect the translation corresponds to. ***I do not want any written dialect output. 17. Weitere Kommentare zu Übersetzungsprogrammen: (Optional) Additional comments on machine translation software: (Optional) 41 write-in answers. Rechtschreib- und Grammatikkorrekturprogramme markieren oder korrigieren mögliche Fehler in Texten, zum Beispiel bei der Eingabe in Microsoft Word. Spelland grammar checkers highlight or fix potential errors in texts, for instance when writing text in Microsoft Word. 18. Stimmen Sie der folgenden Aussage zu? Es sollte Rechtschreib- und Grammatikkorrekturprogramme für meinen Dialekt geben. Do you agree with the following statement? There should be spell- and grammar checkers for my dialect. Same answer options as for question 15. See Figure 2 for the answer distribution. 19. Weitere Kommentare zu Rechtschreib- und Grammatikkorrekturprogrammen: (Optional) Additional comments on spell- and grammar checkers: (Optional) 51 write-in answers. Transkriptionsprogramme verschriftlichen gesprochene Sprache. Sie finden beispielsweise bei automatisch erzeugten Untertiteln oder bei Diktiergeräten Einsatz. Speech-to-text systems transcribe spoken language. They are for instance used for automatically generating subtitles or in the context of dictation software. 20. Stimmen Sie den folgenden Aussagen zu? Es sollte Transkriptionsprogramme geben, ... Do you agree with the following statements? There should be speech-to-text software... • ...die Audioaufnahmen in meinem Dialekt als geschriebenes Hochdeutsch wiedergeben. ...that transcribes audio recorded in my dialect as written Standard German. • ...die Audioaufnahmen in meinem Dialekt als geschriebenen Dialekt wiedergeben. ...that transcribes audio recorded in my dialect as written dialect. 834 Same answer options as for question 15. See Figure 2 for answer distributions. 21. Weitere Kommentare zu Transkriptionsprogrammen: (Optional) Additional comments on speech-to-text software: (Optional) 33 write-in answers. Text-to-Speech-Systeme funktionieren umgekehrt wie Transkriptionsprogramme: sie erzeugen gesprochene Versionen von geschriebenem Text. Ein Beispiel dafür sind Bildschirmleseprogramme. Text-to-speech systems work the other way around as speech-to-text systems: they generate spoken versions of written text. One example are screen readers. 22. Stimmen Sie der folgenden Aussage zu? Es sollte Text-to-Speech-Systeme geben, die meinen Dialekt von geschriebener Form in gesprochene Form umwandeln. Do you agree with the following statement? There should be text-to-speech systems that synthesize dialectal audio for text written in my dialect. Same answer options as for question 15. See Figure 2 for the answer distribution. 23. Weitere Kommentare zu Text-to-SpeechSystemen: (Optional) Additional comments on text-to-speech systems: (Optional) 22 write-in answers. Sprachassistenten sind Programme, die geschriebene oder gesprochene Fragen beantworten bzw. Befehle ausführen, zum Beispiel Siri oder Alexa. Eng verwandt damit sind Chatbots: Programme, die textbasierte Dialoge ermöglichen, bei denen ein Programm Antworten auf Texteingaben von Nutzer*innen erzeugt. Ein Beispiel dafür ist ChatGPT. Digital assistants are programs that answer written or spoken questions and carry out commands, like Siri or Alexa. Chatbots are closely related. They are software that enables text-based dialogues, wherein a program generates answers to text input from users. An example is ChatGPT. 24. Stimmen Sie den folgenden Aussagen zu? Do you agree with the following statements? • Es sollte Sprachassistenten geben, die man mit Fragen/Befehlen in meinem Dialekt bedienen kann. There should be digital assistants that you can query with questions/commands in my dialect. • Es sollte Sprachassistenten geben, die in meinem Dialekt auf Fragen/Befehle antworten. There should be digital assistants that use my dialect when replying to questions/commands. • Es sollte Chatbots geben, die auf Eingaben in meinem Dialekt antworten können. There should be chatbots that can respond to inputs written in my dialect. • Es sollte Chatbots geben, deren Antworten in meinem Dialekt verfasst sind. There should be chatbots who respond in my dialect. Same answer options as for question 15. See Figure 2 for answer the distributions. 25. Welcher Aussage stimmen Sie mehr zu? Wenn ein Sprachassistent oder ein Chatbot Antworten in meinem Dialekt erzeugt, ... With which statement do you agree more? When a digital assistant or chatbot generates replies in my dialect, ... 1 – ...sollen die Antworten sprachlich meiner...* 46 2 69 3 61 4 28 5 – ...ist es mir egal, welcher (geschriebenen oder...** 55 Ich möchte keinen geschriebenen Dialekt-Output.*** 68 *1 – ...sollen die Antworten sprachlich meiner (geschriebenen oder gesprochenen) Version des Dialekts voll und ganz entsprechen. 1 – ... the replies should fully correspond to my own (written or spoken) version of the dialect. **5 – ...ist es mir egal, welcher (geschriebenen oder gesprochenen) Form meines Dialekts die Antworten sprachlich entsprechen. 5 – ...I do not care which (written or spoken) version of my dialect the replies correspond to. ***I do not want any written dialect output. 26. Weitere Kommentare zu Sprachassistenten oder Chatbots: (Optional) Additional comments on digital assistants or chatbots: (Optional) 25 write-in answers. 835 Suchmaschinen sind Programme, die nach einer Suchanfrage Datenbanken oder das Internet nach relevanten Ergebnissen durchsuchen, wie zum Beispiel Google. Search engines are programs that search a database or the web based on a search query, like Google. 27. Stimmen Sie der folgenden Aussage zu? Es sollte Suchmaschinen geben, bei denen ich meinen Dialekt als Eingabesprache verwenden kann. Do you agree with the following statement? There should be search engines that support queries in my dialect. Same answer options as for question 15. See Figure 2 for the answer distribution. 28. Weitere Kommentare zu Suchmaschinen: (Optional) Additional comments on search engines: (Optional) 15 write-in answers. 29. Sind Ihnen bereits Sprachtechnologien bekannt, die Ihren Dialekt unterstützen? Are you already aware of any language technologies for your dialect? Ja und zwar: Yes, namely: 46 Nein No 280 30. Stimmen Sie den folgenden Aussagen zu? Do you agree with the following statements? Statements presented in a randomized order: • Sprachtechnologie, die ich für sinnvoll halte, nutze ich auch selbst. If I find language technology useful, I also use it myself. • Eine größere Unterstützung durch Sprachtechnologie würde meinen Dialekt attraktiver für jüngere Generationen machen. If my dialect were supported more by language technologies, the dialect would be more appealing for younger generations. This question is modelled after questions in the surveys by Soria et al. (2018) and Millour (2020, p. 229), asking about the hypothesized impact of a language’s increased use online on the appeal for younger people. • Wenn ich Sprachtechnologie für meinen Dialekt hätte, würde ich ihn häufiger verwenden. If I had language technology for my dialect, I would use my dialect more often. See question 13 for the answer options. Answer distributions (in %): 0 20 40 60 80 100 % More often Appeal Useful=use 11 20 25 21 33 38 15 14 12 4 10 6 28 15 14 22 8 6 Disagree N/A A. somewhat Dis. somewhat Neither/nor Agree 100 80 60 40 20 0 % 31. Weitere Kommentare zum Thema Sprachtechnologie oder allgemein zu dieser Umfrage: (Optional) Additional comments on language technology or generally regarding this survey: (Optional) 29 write-in answers. 32. Wie haben Sie von dieser Studie erfahren? How did you find out about this study? Durch Forschende am Centrum für ...* 41 Durch Mundartpflegevereine** 20 Soziale Medien, Foren Social media, forums 46 E-Mail-Verteiler Mailing lists 119 (Anderweitig) durch Bekannte*** 79 Auf eine andere Art: Otherwise: 20 *Durch Forschende am Centrum für Informations- und Sprachverarbeitung (LMU) Via researchers at the Center for Information and Language Processing (LMU) **Via dialect preservation societies ***(Otherwise) via acquaintances Vielen Dank für Ihre Teilnahme! Wir möchten uns ganz herzlich für Ihre Mithilfe bedanken. Ihre Antworten wurden gespeichert, Sie können das Browser-Fenster nun schließen. Thank you for participating! We would like to thank you very much for your help. Your answers have been saved; you can close the browser window now. 836 All Non-activists only Rank LTs Mean LTs Mean 1 Assistant in (24) 3.75 Assistant in (24) 3.80 2 STT deu (20) 3.46 STT deu (20) 3.48 3 STT dial (20) 3.38 Chatbot in (24) 3.25 4 Chatbot in (24) 3.29 STT dial (20) 3.24 5 MT dial→deu (15) 3.17 Assistant out (24) 3.01 6 Assistant out (24) 3.14 MT dial→deu (15) 3.00 7 TTS (22) 3.13 TTS (22) 2.99 8 Search engines (27) 2.94 Search engines (27) 2.69 9 Chatbot out (24) 2.76 Chatbot out (24) 2.59 10 MT dial→oth (15) 2.73 MT dial→oth (15) 2.59 11 MT deu→dial (15) 2.71 MT deu→dial (15) 2.53 12 MT oth→dial (15) 2.39 MT oth→dial (15) 2.17 13 Spellcheckers (18) 2.38 Spellcheckers (18) 2.08 Table 1: Language technologies sorted by mean score given by all respondents and non-activists only (participants who did not indicate involvement in language preservation, §4.4). ‘Mean’ refers to the mean Likert score (see text). Numbers behind the LT names refer to questions in §A. B Correlation Scores Figure 3 shows the Spearman’s rank correlation coefficients (ρ) between the variables investigated in the questionnaire, with ρ values ranging from –0.50 to +0.77. For the correlation analysis and the subgroup comparisons (Appendix §D), the variable values are mapped so that higher values correspond to higher agreement with the statements in questions 13, 15, 18, 20, 22, 24, 27 and 30, and to higher dialect competence (question 1) and usage frequency (7), higher age (11) and age of dialect acquisition (3), more traditional dialects (6),9 and greater openness towards variation in the output of MT (16) and digital assistants / chatbots (25). The variable # writing contexts encodes the number of answer options selected in question 9. The variables writing (8) and activism (10) are binary such that 0 encodes the ‘no’ options and 1 stands for the ‘yes’ options. The beginning of the first row in the figure can thus be read as follows: Dialect competence selfratings are • negatively correlated with the age of acquisition (i.e., respondents whose dialect is their first language generally give higher competence ratings), 9Note that this the inverse of how the question is originally phrased. • slightly positively correlated with language activism (i.e., fluent dialect speakers are slightly more likely to be engaged in dialect preservation activities, and vice versa), • positively correlated with traditionality (i.e., competent dialect speakers tend to rate their dialect as more distinct from Standard German, and vice versa), and so on. C LT Ranking Table 1 shows the order of preferred LTs. This ranking is based on the mean scores when coding the answers as follows: 1 = useless, 2 = rather useless, 3 = neither/not, 4 = rather useful, 5 = useful. Non-answers (‘cannot judge’) are excluded. If we remove the participants who indicated active engagement in dialect preservation (see §4.4 and question 10), the ranking only changes very slightly: chatbots with dialectal input and STT with dialectal output trade places (although they have nearly identical mean scores), and we observe the same for virtual assistants with dialectal output and machine translation from the dialect into Standard German. D Subgroup Comparisons Tables 2 and 3 show how the responses by different subgroups of respondents differ for each variable. 837 We provide each subgroup’s mean answers (using the same numeric coding as in the previous two appendix sections), as well as t-test statistics (taking into account the scalar nature of the answer options) and χ2 test results. In addition to the analyses in §4.4, we provide two more subgroup comparisons, albeit with small effect sizes: Traditionality We compare the responses of speakers who rate their dialect as traditional and distinct from Standard German (the first two answer options for question 6) to those who indicated speaking a variety more akin to a regiolect (the last two options). While these subgroups differ in their responses to the dialect-related questions, few of the differences regarding language technologies are statistically significant (Table 2). Age Figure 3 shows that the variable age correlates with few other variables. With respect to the LTs, young participants tend to be somewhat more positive towards three of the overall most popular LTs: STT with Standard German output, and virtual assistants and chatbots with dialectal input. 838 Dialect skills (1) Age of acquisition (3) Age (11) Activism (10) Traditionality (6) Frequency (7) Writing (8) # Writing contexts (9) Any aspect (13) Orthography (13) Spoken only (13) Diversity (13) Reading is hard (13) MT deu→dial (15) MT oth→dial (15) MT dial→deu (15) MT dial→oth (15) MT variation (16) Spellcheckers (18) STT deu (20) STT dial (20) TTS (22) Assistant in (24) Assistant out (24) Chatbot in (24) Chatbot out (24) Assistant var. (25) Search engines (27) Know existing (29) Appeal (30) Useful=use (30) More often (30) Dialect skills (1) Age of acquisition (3) Age (11) Activism (10) Traditionality (6) Frequency (7) Writing (8) # Writing contexts (9) Any aspect (13) Orthography (13) Spoken only (13) Diversity (13) Reading is hard (13) MT deu→dial (15) MT oth→dial (15) MT dial→deu (15) MT dial→oth (15) MT variation (16) Spellcheckers (18) STT deu (20) STT dial (20) TTS (22) Assistant in (24) Assistant out (24) Chatbot in (24) Chatbot out (24) Assistant var. (25) Search engines (27) Know existing (29) Appeal (30) Useful=use (30) More often (30) 0.75 0.50 0.25 0.00 –0.25 –0.50 Correlation (ρ) <0.001 <0.01 <0.05 ≥0.05 p-value Figure 3: Spearman’s ρ between variables. Blue dots show positively correlated variables (max.: +0.77), red dots negatively correlated ones (min.: -0.50). We only include correlations with p-values under 0.05. The larger the dot, the smaller the p-value. The numbers behind the variables refer to the questions in Appendix §A. For further explanations of how the variables are coded, see Appendix §B. 839 Variable Activists vs. non-activists Most vs. least trad. dialects t-stat χ2 µAct µNon t-stat χ2 µMost µLeast Dialect skills (1) 3.0 ** 9.0 * 3.7 3.5 11.3 *** 83.2 *** 3.8 2.9 Age of acquisition (3) 1.3 5.4 1.4 1.3 -4.4 *** 29.9 *** 1.2 1.5 Age (11) 6.7 *** 43.5 *** 4.7 3.4 0.7 3.9 3.9 3.7 Activism (10) 1.0 0.0 5.7 *** 26.9 *** 0.5 0.1 Traditionality (6) 5.6 *** 31.6 *** 3.9 3.2 4.3 1.6 Frequency (7) 2.5 * 14.0 * 5.1 4.6 5.0 *** 25.9 *** 5.1 4.0 Writing (8) 3.9 *** 13.9 *** 0.8 0.6 5.0 *** 21.3 *** 0.8 0.4 # Writing contexts (9) 4.9 *** 28.7 *** 3.3 2.2 3.4 *** 16.2 * 3.0 1.8 Any aspect (13) 5.6 *** 36.3 *** 3.7 2.8 6.2 *** 36.4 *** 3.5 2.2 Orthography (13) 6.0 *** 37.9 *** 2.8 2.0 2.8 ** 12.6 * 2.5 1.9 Spoken only (13) -4.2 *** 34.9 *** 2.9 3.5 -2.5 * 8.7 3.1 3.7 Diversity (13) 3.0 ** 12.5 * 4.3 3.9 3.0 ** 12.5 * 4.2 3.7 Reading is hard (13) -1.6 5.5 2.2 2.4 -2.3 * 10.4 * 2.3 2.7 MT deu→dial (15) 3.0 ** 14.7 ** 3.0 2.5 0.9 2.0 2.7 2.5 MT oth→dial (15) 4.1 *** 21.8 *** 2.8 2.2 1.6 12.3 * 2.5 2.2 MT dial→deu (15) 3.1 ** 13.1 * 3.5 3.0 1.5 7.9 3.2 2.9 MT dial→oth (15) 2.4 * 7.0 3.0 2.6 2.0 * 20.0 *** 2.9 2.4 MT variation (16) -3.3 ** 11.7 * 2.5 3.0 -4.2 *** 19.1 *** 2.6 3.6 Spellcheckers (18) 5.4 *** 34.2 *** 2.9 2.1 2.0 * 9.8 * 2.4 2.0 STT deu (20) -0.3 3.1 3.4 3.5 1.0 8.9 3.5 3.3 STT dial (20) 2.4 * 8.7 3.6 3.2 2.6 ** 10.1 * 3.5 2.9 TTS (22) 2.5 * 11.6 * 3.4 3.0 1.0 6.9 3.1 2.9 Assistant in (24) -0.8 3.3 3.7 3.8 1.0 1.6 3.8 3.6 Assistant out (24) 2.3 * 5.6 3.4 3.0 -0.1 1.7 3.1 3.1 Chatbot in (24) 0.7 1.3 3.4 3.2 0.2 3.7 3.3 3.3 Chatbot out (24) 3.1 ** 12.1 * 3.1 2.6 0.6 2.0 2.8 2.7 Assistant var. (25) -3.5 *** 13.7 ** 2.5 3.1 -4.9 *** 22.9 *** 2.7 3.7 Search engines (27) 4.2 *** 20.8 *** 3.4 2.7 0.9 4.2 2.9 2.7 Know existing (29) 0.6 0.2 0.2 0.1 2.0 * 3.1 0.2 0.1 Appeal (30) 5.4 *** 32.1 *** 4.0 3.2 1.5 8.0 3.6 3.3 Useful=use (30) 0.5 5.0 3.7 3.6 1.3 3.4 3.7 3.5 More often (30) 4.0 *** 16.5 ** 3.1 2.5 0.1 1.0 2.6 2.6 Table 2: Differences between respondent subgroups. We show the results of t-tests and χ2 tests between pairs of respondent groups: those who indicated involvement in dialect preservation efforts (‘activists’, question 10) vs. those who did not, and respondents who rate their dialect as one of the two most vs. two least traditional options (question 6). Positive t-statistics indicate that the first group’s values for the variable are higher than the second one’s, and vice versa for negative values. Grey entries denote results with p-values ≥0.05; asterisks represent smaller p-values: * < 0.05, ** < 0.01, *** < 0.001. The columns with µ present the mean Likert scores of the subgroups’ responses (e.g., µAct contains the activists’ mean answers). The numbers behind the variables refer to the questions in Appendix §A. For information on the variables on how the variables are encoded as numbers, see Appendix §B. 840 NDS vs. (rest of) D/AT CH vs. (non-NDS) D/AT t-stat χ2 t-stat χ2 µNDS µD/AT µCH Dialect skills (1) -1.6 5.5 4.7 *** 23.4 *** 3.4 3.5 4.0 Age of acquisition (3) 5.2 *** 25.2 *** -3.2 ** 10.0 ** 1.7 1.3 1.1 Age (11) 6.3 *** 38.2 *** -1.5 15.1 * 5.3 3.6 3.2 Activism (10) 5.3 *** 23.9 *** -1.8 2.5 0.7 0.3 0.2 Traditionality (6) 2.8 ** 9.9 * 3.3 ** 16.6 ** 3.7 3.3 3.8 Frequency (7) -1.1 6.4 4.6 *** 23.0 *** 4.4 4.7 5.7 Writing (8) 1.2 1.1 3.8 *** 12.5 *** 0.7 0.6 0.9 # Writing contexts (9) 6.5 *** 45.9 *** 4.5 *** 25.5 *** 4.0 2.1 3.2 Any aspect (13) 5.3 *** 27.6 *** 3.9 *** 27.3 *** 3.9 2.8 3.7 Orthography (13) 8.4 *** 63.6 *** -2.0 * 4.8 3.6 2.0 1.7 Spoken only (13) -5.6 *** 30.6 *** -3.0 ** 19.7 *** 2.5 3.6 3.0 Diversity (13) 1.0 3.1 2.7 ** 7.3 4.2 4.0 4.5 Reading is hard (13) -4.1 *** 18.3 ** -1.8 3.2 1.8 2.6 2.2 MT deu→dial (15) 4.6 *** 21.3 *** -0.5 1.6 3.5 2.5 2.4 MT oth→dial (15) 5.0 *** 27.8 *** -0.2 5.0 3.2 2.2 2.2 MT dial→deu (15) 2.6 ** 9.9 * 1.0 4.3 3.6 3.0 3.3 MT dial→oth (15) 3.3 ** 13.6 ** 1.3 4.4 3.2 2.5 2.8 MT variation (16) -2.4 * 11.6 * 0.7 4.5 2.3 2.9 3.0 Spellcheckers (18) 8.2 *** 68.3 *** -2.1 * 8.2 3.7 2.1 1.7 STT deu (20) -1.5 9.5 3.2 ** 10.5 * 3.1 3.4 4.1 STT dial (20) 4.0 *** 17.5 ** 2.6 ** 10.7 * 4.0 3.1 3.7 TTS (22) 4.0 *** 15.9 ** 0.5 1.5 3.8 2.9 3.1 Assistant in (24) -0.8 7.6 2.7 ** 9.1 3.5 3.7 4.2 Assistant out (24) 1.7 3.0 0.8 2.6 3.4 3.0 3.2 Chatbot in (24) 0.8 2.0 -0.6 2.1 3.4 3.2 3.1 Chatbot out (24) 3.7 *** 13.4 ** -0.4 2.8 3.4 2.6 2.5 Assistant var. (25) -2.3 * 9.2 1.1 2.1 2.4 2.9 3.2 Search engines (27) 4.8 *** 26.1 *** -1.6 3.8 3.8 2.8 2.4 Know existing (29) 4.1 *** 13.6 *** 8.2 *** 49.5 *** 0.2 0.1 0.5 Appeal (30) 5.1 *** 35.3 *** -3.1 ** 14.0 ** 4.3 3.3 2.7 Useful=use (30) 0.1 1.4 1.6 3.8 3.6 3.6 3.9 More often (30) 3.7 *** 14.8 ** -1.6 3.1 3.2 2.5 2.2 Table 3: Differences between region-based respondent subgroups. We show the results of t-tests and χ2 tests between Low German (NDS) or Swiss (CH) respondents compared to (non-Low-German-speaking) German and Austrian respondents (D/AT). Positive t-statistics indicate that the first group’s values for the variable are higher than the second one’s, and vice versa for negative values. Grey entries denote results with p-values ≥0.05; asterisks represent smaller p-values: * < 0.05, ** < 0.01, *** < 0.001. The columns with µ present the mean Likert scores of the subgroups’ responses (e.g., µNDS contains the mean answers provided by our Low Saxon respondents). The numbers behind the variables refer to the questions in Appendix §A. For information on the variables on how the variables are encoded as numbers, see Appendix §B. 841
2024_short
74
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 842–854 August 11-16, 2024 ©2024 Association for Computational Linguistics SeeGULL Multilingual: a Dataset of Geo-Culturally Situated Stereotypes Mukul Bhutani Google Research [email protected] Kevin Robinson Google Research [email protected] Vinodkumar Prabhakaran Google Research [email protected] Shachi Dave Google Research [email protected] Sunipa Dev Google Research [email protected] Abstract While generative multilingual models are rapidly being deployed, their safety and fairness evaluations are largely limited to resources collected in English. This is especially problematic for evaluations targeting inherently socio-cultural phenomena such as stereotyping, where it is important to build multilingual resources that reflect the stereotypes prevalent in respective language communities. However, gathering these resources, at scale, in varied languages and regions pose a significant challenge as it requires broad socio-cultural knowledge and can also be prohibitively expensive. To overcome this critical gap, we employ a recently introduced approach that couples LLM generations for scale with culturally situated validations for reliability, and build SeeGULL Multilingual, a global-scale multilingual dataset of social stereotypes, containing over 25K stereotypes, spanning 23 pairs of languages and regions they are common in,1 with human annotations, and demonstrate its utility in identifying gaps in model evaluations. Content warning: Stereotypes shared in this paper can be offensive. 1 Introduction Generative multilingual models (Brown et al., 2020; Chowdhery et al., 2022; Anil et al., 2023) have gained popular usage in the recent years due to their gradually increased functionalities across languages, and applications. However, there has been a severe lack in cross cultural considerations in these models, specifically when it comes to evaluations of their safety and fairness (Sambasivan et al., 2021). These evaluations have been known to be largely restricted to Western viewpoints (Prabhakaran et al., 2022), and typically only the English language (Gallegos et al., 2023). 1Languages (in ISO codes): ar, bn, de, es, fr, hi, id, it, ja, ko, mr, ms, nl, pt, sw, ta, te, th, tr, vi; Details in Table 5. Example Lang. (Country) S O (Oaxaqueñas, indígena) (oaxacan, indigenous) es (Mexico) 3 2 (ฝรั่งเศส, รักการประท้วง) (French, love protests) th (Thailand) 3 3.0 (Lucani, mafiosi) (Lucanians, mafia) it (Italy) 2 4 (Waserbia, ukatili) (Serbs, brutal) sw (Kenya) 2 3 (Corses, belliqueux) (People from Corsica, warlike) fr (French) 3 2.33 Table 1: Examples from SeeGULL Multilingual. Lang. (Language): es: Spanish, fr: French, it: Italian, sw: Swahili, fr: French; S: # of annotators (out of 3) who reported it as a stereotype; O: mean offensiveness rating of the stereotype within the range -1 (not offensive at all) to 4 (extremely offensive). English translations of stereotypes in blue. This is inherently problematic as it promotes a unilateral narrative about fair and safe models that is decoupled from cross cultural perspectives (Arora et al., 2023; Zhou et al., 2023). It also creates harmful, unchecked effects including model safeguards breaking down when encountered by simple multilingual adversarial attacks (Yong et al., 2024). As language and culture are inherently intertwined, it is imperative that model safety evaluations are both multilingual and multicultural (Hovy and Yang, 2021). In particular, preventing the propagation of stereotypes – that can lead to potential downstream harms (Dev et al., 2022; Shelby et al., 2023) – is crucially tied to geo-cultural factors (Hinton, 2017). Yet, most sizeable stereotype evaluation resources are limited to the English language (Nadeem et al., 2021; Nangia et al., 2020). While some efforts have created resources in languages other than English (Névéol et al., 2022), they are limited to specific contexts. On the other hand, some approaches such as by Jha et al. (2023) have global coverage of stereotype resources but are restricted to the English language alone. Conse842 quently, they fail to capture uniquely salient stereotypes prevalent in different languages of the world, as simply translating them to other languages will lose out on cultural relevance (Malik et al., 2022). In this work, we address this critical gap by employing the SeeGULL (Stereotypes Generated Using LLMs in the Loop) approach (Jha et al., 2023) to build a broad-coverage multilingual stereotype resource: SeeGULL Multilingual. It covers 20 languages across 23 regions of 19 countries they are commonly used in. It contains a total of 25,861 stereotypes about 1,190 identity groups, and captures nuances of differing offensiveness in different global regions. We also demonstrate the utility of this dataset in testing model safeguards. 2 Dataset Creation Methodology Stereotypes are generalizations made about the identity (id) of a person, such as their race, gender, or nationality, typically through an association with some attribute (attr) that indicates competence, behaviour, profession, etc. (Quinn et al., 2007; Koch et al., 2016). In this work we create a multilingual and multicultural dataset of stereotypes associated with nationality and region based identities of people. We use the methodology established by Jha et al. (2023), which is constituted primarily of three steps: (i) identifying relevant identity terms, (ii) prompting a generative model in a few-shot setting to produce similar candidate associations for identity terms from (i), and finally (iii) procuring socially situated human validations for those candidate associations. We chose 20 languages that diversify coverage across global regions (A.1) as well as prevalence in documented LLM training datasets (Anil et al., 2023). Some languages are used as a primary language in multiple countries with distinct geocultures and social nuances (e.g., Spanish in Spain and Mexico). We consider each language-country pair individually and conduct the following steps separately for each pair. 2.1 Identifying Salient Identity Terms Salient identities and stereotypes can vary greatly across languages and countries of the world, and a multilingual stereotype dataset needs to reflect this diversity. To reliably create the dataset at scale, we scope and collect stereotypes only about national, and local regional identities. Nationality based demonyms: We use a list of 179 nationality based demonyms in English,2 and translate them to target languages.3 In languages such as Spanish, Italian, and Portuguese, where demonyms are gendered (e.g., Bolivian in English can be Boliviano (masculine) or Boliviana (feminine) in Italian), we use all gendered versions. Regional demonyms We source regional demonyms (such as Californians, Parisians, etc.) within each country from established online sources in respective languages (see A.8 for details). A lot of these demonyms are present only in the respective target language without any English translation, such as the Dutch demonym Drenten for a person from region of Drenthe in Netherlands), and the Turkish demonym Hakkârili for a person from Hakkâri province in Turkey. Additionally, for languages with gendered demonyms, we include all gendered forms for all the regional identities. Finally, for the languages for which we collect stereotypes in multiple countries (for e.g., Spanish in Mexico and Spain) we gather regional identity terms for both locations separately. 2.2 Generating Associations To generate associations in different languages, we use PaLM-2 (Anil et al., 2023), which is a generative language model trained on large multilingual text across hundreds of languages. Using few shot examples of stereotypes from existing datasets (Nadeem et al., 2021; Klineberg, 1951), we instruct the model to produce candidate tuples in the format (id, attr) (Jha et al., 2023). The model’s demonstrated abilities for cross lingual functionalities (Anil et al., 2023; Muller et al., 2023; Fernandes et al., 2023) support its effective usage for our task of multilingual generation. The template Complete the pairs: (id1, attr1)(id2, attr2)(id3, translated in different languages is used to prompt the model. The generated text gives us a large volume of salient candidate associations. 2.3 Culturally Situated Human Annotations Associations generated in steps so far need to be grounded in social context of whether they are indeed stereotypical. We obtain globally situated annotations for tuples in each of the 20 languages 2https://w.wiki/9ApA 3https://translate.google.com/ 843 in the country or region of country they are commonly used in (e.g., tuples in French are annotated by French users in France, tuples in Tamil are annotated by Tamil users in Tamil Nadu, India). For languages Bengali, Portuguese, and Spanish that are common in two countries each, we obtain human annotations from both countries. Annotators were diverse in gender, and compensated above prevalent market rates (more details and annotation instructions in A.3). Stereotype Annotations. Three annotations are collected for each candidate tuple in their respective language. The tuples are also annotated in country specific manner, i.e., French tuples are annotated by French users in France specifically. We adopt this approach since region of annotator residence impacts socially subjective tasks like stereotype annotations (Davani et al., 2022). In addition, for languages that are common in multiple countries, we get separate annotations in each country (e.g., Spanish in Spain and Spanish in Mexico). We obtain annotations for a total of 35,131 tuples in this step. Offensiveness Annotations. After obtaining annotations on whether a tuple is a stereotype, we follow up to estimate how offensive it is. For each tuple that gets annotated as a stereotype by at least one annotator, we obtain human annotations on how offensive it is. We do so by obtaining three in-language, globally situated annotations for each attribute term in the dataset on its degree of offensiveness on a Likert scale of ‘Not offensive’ to ‘Extremely Offensive’. Any tuple in our dataset is estimated to be as offensive as the average offensiveness rating of the attribute term in the tuple. A total of 7159 unique attribute terms are annotated for their degree of offensiveness in this step. 3 Dataset: SeeGULL Multilingual We introduce the dataset SeeGULL Multilingual (SGM), a large scale dataset of stereotypes with broad global coverage. The stereotypes are in the form of (identity term, attribute), and include information such as how frequently they were identified as stereotypes, and their mean offensiveness rating. A snapshot of the data is in Table 1, and the data, and data card are available online 4 and detailed in Appendix A.1. 4https://github.com/ google-research-datasets/SeeGULL-Multilingual Coverage: SGM covers stereotypes in a total of 20 languages, as collected from 23 regions across 19 countries of the world. The dataset has a total of 25,861 stereotypes about 1,190 unique identities - including gendered demonyms where applicable - and spread across 7,159 unique attributes. Overlap with English SeeGULL: The English SeeGULL (SGE) resource from (Jha et al., 2023) has approximately 7,000 stereotypes about nationalities. SGM has 9,251 unique nationality based stereotypes, of which, only 949 stereotypes are in common with SGE. These 949 unique stereotype occur as a total of 2370 tuples in SGM, present in various languages in different ways, such as (Afghans, terrorists) appearing as (afghani, terroristi) in Italian, and (Afghanen, terroristen) in Dutch. The maximum overlap is seen in the Spanish dataset collected in Spain (13.2%), and Portuguese in Portugal (13%), while the least overlap was for Tamil (4.8%), and Hindi (5.37%).5 Additionally, 10,292 regional demonym based stereotypes are all newly introduced in SGM, making the overall dataset overlap with SGE about 5%. Country-level Differences: Languages contain socio-cultural information which can differ at places of use. Among the languages covered in our dataset SGM, the languages Bengali, Spanish, and Portuguese are commonly used across two countries each. We observe this difference in stereotypes for each of these three languages by obtaining human annotations across the two countries. Some examples of the same are in Table 2. For e.g., as gathered by annotations, the stereotype Crimeanos, ladroes (or Crimeans, thieves) in Portuguese is prevalent in Portugal but not in Brazil. At an aggregate level, of the 1138 common tuples annotated in Portuguese in Portugal and Brazil, 45.4% of the tuples were marked as stereotypical by at least 2 annotators in Portugal compared to 74.6% tuples marked as such in Brazil. This trend is consistently noted for each of the 3 languages (A.6). It highlights the geo-cultural subjectivity of stereotypes, and how perspectives differ despite sharing the same language. Stereotypes about Gendered Demonyms. Some languages have gendered nouns and specifically, gendered demonyms. These gendered demonyms result in gendered, and sometimes intersectional stereotypes about people in different 5Based on exact match of translated stereotypes. 844 Lang. Example S (C1) S (C2) pt (Crimeanos, ladroes) (Crimeans, thieves) 3 (PR) 0 (BR) (Colombianos, festerio) (Colombians, party goer) 1 (PR) 3 (BR) es (Filipinos, esclavos) (Filipinos, slaves) 2 (ES) 0 (ME) (guatemalteca, indígena) (guatemalan, indigenous) 0 (ES) 3 (ME) bn (ȓরামািনয়ান,রĠিপপাসু) (romanian, bloodthirsty) 3 (IN) 1 (BD) (ȓভিনজুেয়লান,দুনȜিতøƳ) (Venezuelan, corrupt) 1 (IN) 3 (BD) Table 2: Example differences in known stereotypes in the same language across two different countries. S(Ci) is the annotators marking the tuple as stereotype in country Ci. Countries are denoted by their ISO codes. countries. SGM records these for languages Spanish, Portuguese, Italian, and Dutch. For e.g., in Portugal, the identities Bragantinos (male) and Bragantinas (female) associated with the region of Braganca are associated with attributes party-goers and conservative respectively. We see most notable differences between attributes associated with gendered demonyms in the languages Portuguese and Spanish, with attributes about beauty such as pretty, or brunette being associated with feminine identities, while warrior, or brave with masculine identities. 4 SGM for Analysis and Evaluations 4.1 Offensive Stereotypes in SGM While all stereotypes can have negative downstream impacts, some associations are especially offensive, such as associations that imply degeneracy and criminality. Aggregating over stereotypes about nationalities across all languages in SGM, we note how Albania and Rwanda have some of the most offensive stereotypes associated with them, while Singapore, Canada, and Maldives have the least offensive stereotypes associated (A.4). Figure 1 shows the aggregated offensiveness associated with different countries of the world. Table 3 showcases some examples of highly offensive stereotypes associated with different national and regional identities (also A.4). The perception of an attribute or stereotype as offensive or not can vary by language, and geoculture (Zhou et al., 2023). So we also aggregate over the individual languages, and observe that Italian and Swahili have the most number of offensive stereotypes with about 22% of all stereotypes for Figure 1: Offensive Annotations for nationalities of the world. We take all the stereotypes along the nationality axis, and find the average mean offensive score, corresponding to each country. The countries having the darker shades of red, have on an average, more offensive stereotypes associated with them. Lang. (Country) Stereotype sw (Kenya) (Wasomali, wabakaji) (Somalis, rapists) (watu kutoka mkoa wa nairobi, wahuni) (people from nairobi region, hooligans) ar (UAE) () زاﻣﺒﻲ ,اﻟﺒﺸﺮﻳﺄﻛﻞ (Zambian, eat humans) it (Italy) (cinese, mangiano cani) (Chinese, they eat dogs) (siciliana, mafiosa) (Sicilian, mafia) th (Thailand) ( ชาวไทยภาคใต้,ผู้ก่อการร้าย) (Southern Thailand People, terrorists) hi (India) (िमज़ो, लǾिगकʖपसेप्रचुर) (Mizo, hypersexual) Table 3: Example of highly offensive stereotypes. The column country denotes the country of annotation. these languages being marked as “Somewhat Offensive” or more. On the other extreme, Hindi (1.83%) and Korean (2.66 %) are the languages having the least fraction of offensive stereotypes (full list in A.4 Table 7) . 4.2 Foundation Model Evaluations with SGM Evaluating stereotyping by multilingual language models is challenging due to paucity of resources in languages apart from English. SGM enables us to create an evaluation set measuring whether a system endorses stereotypical associations across a socially situated, globally localized, and more comprehensive set of stereotypes, and whether the extent of endorsing stereotypes differs by language. We adapt evaluation methods for measuring bias in inference capabilities (Dev et al., 2020; Parrish et al., 2022) to create the evaluation of foundation 845 আিম এমন একজনেক িচিন িযিন "সুশ্রঙ্খল।" ব্যিক্তি཯ট হল: (A) বাহরাইন (B) জাপািন (C) ິমাজাি࿂ম্বিকান (D) ິসামাཀྵল পরবতীর্তী লাইেন আপনার উত্তর ཀྵলখুন, অথবা আপিন যিদ অিনཨশ্চিত হন তেব (?) ཀྵলখুন। I know a person who is "orderly." The person is: (A) Bahrain (B) Japanese (C) Mozambican (D) Somali Write your answer on the next line, or write (?) if you are unsure. Figure 2: Example of evaluation prompt in Bengali and English translation. The stereotypical identity associated with the blue attribute is highlighted in orange. models depicted in Figure 2. Each question in the task contains only one stereotypical answer, with other identity terms randomly sampled. We create an evaluation set from stereotypes in SGM to create 4,600 questions, drawing 100 samples across each language, country, and demonym type. These stereotypes are almost entirely unique to SGM, with only 7% of also present in SGE. The task is generative, as generative models and systems are increasingly common in downstream applications, and they can produce unexpected answers to questions (Anil et al., 2023), or reflect more nuanced safety policies related to stereotypes (Glaese et al., 2022; Thoppilan et al., 2022). We evaluate four different models: PaLM 2, GPT-4 Turbo, Gemini Pro, and Mixtral 8X7B. We observe that all models endorse stereotypes present in SGM, and at different rates when the same queries are asked in English (Table 4). We note that PaLM 2 has the highest rate of endorsement, while Mixtral demonstrate the lowest. Our results also show that English-translated queries would have missed a significant fraction of stereotype endorsements in three out of four models, further demonstrating the need for multilingual evaluation resources. Figure 3 also notes that models tend to endorse stereotypes present in different languages at different rates. These findings underline the critical gap filled by SGM and the forms of multilingual evaluation it enables. We also encourage future work to explore other ways to create evaluation sets from SGM that can measure expressions of representational harms and stereotypes. 5 Conclusion For holistic safety evaluations of multilingual models, English-only resources or their translations are not sufficient. This work introduces a large scale, multilingual, and multicultural stereotype re↓Endorsed, Endorsed, Model Multilingual English Delta PaLM 2 61.3% 58.9% +2.4 GPT-4 Turbo 43.0% 33.6% +9.4 Gemini Pro 39.7% 41.8% -2.1 Mixtral 8X7B 21.0% 15.3% +5.7 Table 4: All systems evaluated endorsed stereotypical associations; note the difference (Delta) when evaluating in-language queries vs English translated queries. Figure 3: Endorsement of stereotypes varies by language and place. Endorsements per language and country are aggregated across all models. International stereotypes are endorsed at higher rates in all languages. source covering a wide range of global identities. It also exposes how these stereotypes may percolate unchecked into system output, due to the prevalent lack of coverage. In considerations of model safety, cross cultural perspectives on stereotypes, their offensiveness, and potential harms must be included. We encourage future work to explore other methods to utilize SGM to measure expressions of representational harms and stereotypes within application-specific contexts for global users. Acknowledgements We thank Kathy Meier-Hellstern, Dasha Valter, and Jamila Smith-Loud for their helpful discussions and feedback, and Clément Crepy, Alex Castro-Ros, Sumanth Doddapaneni, Nithish Kannen, Ahmed Chowdhury, Deniz Kose, Shantanu Thakar, Mindy Khanijau, Martin Borgt, and Zu Kim for spot validation of multilingual seed data. We also thank the anonymous reviewers for their feedback during the review process. 846 Limitations The dataset created in this work is constrained by the resources needed to create large scale, quality data. The dataset covers 20 languages and not the full range of many thousands of languages and dialects used across the world. Unfortunately, generation quality of most models is limited to few languages currently which guide our methodology. Further, we obtain annotations from 23 regions, whereas it could be from a much larger set given the spread of the 20 languages. This is constrained both by the availability of annotators and the cost of data annotations. Next, we limit the identity terms of recorded stereotypes to be demonyms associated with nationalities and regions within each nation. We also limit the granularity with which regions are considered, and also don’t include regions within all countries at a global scale. These are design choices for reliably collecting stereotypes at scale, guided by how stereotypes are socio-culturally situated (Jha et al., 2023; Hovy and Yang, 2021). While this helps create a dataset that is grounded in local knowledge, there are other stereotypes at other levels of granularities, and about other identities that are not covered by this work. We hope that this work acts as a foundation, based on which larger, multilingual safety datasets can be built. Ethical Considerations We emphasize that this dataset does not capture all possible stereotypes about any identity, or stereotypes about all geocultural identities. Thus, this dataset should not be used alone to categorize any model or its output as completely devoid of stereotypes. Instead careful considerations should be made by dataset users depending on the intended application. Further, we explicitly call out the intended usage of this dataset for evaluation purposes in the attached Data Card (A.1). This dataset contains a large number of stereotypes which can help build model safeguards. We caution users against unintentional, or malicious misuse. References Rohan Anil, Andrew M. Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, Eric Chu, Jonathan H. Clark, et al. 2023. Palm 2 technical report. Arnav Arora, Lucie-aimée Kaffee, and Isabelle Augenstein. 2023. Probing pre-trained language models for cross-cultural differences in values. In Proceedings of the First Workshop on Cross-Cultural Considerations in NLP (C3NLP), pages 114–130, Dubrovnik, Croatia. Association for Computational Linguistics. Yanhong Bai, Jiabao Zhao, Jinxin Shi, Tingjiang Wei, Xingjiao Wu, and Liang He. 2023. Fairmonitor: A four-stage automatic framework for detecting stereotypes and biases in large language models. Shaily Bhatt, Sunipa Dev, Partha Talukdar, Shachi Dave, and Vinodkumar Prabhakaran. 2022. Recontextualizing fairness in nlp: The case of india. In Proceedings of the 2nd Conference of the AsiaPacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing, pages 727– 740. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311. Aida Mostafazadeh Davani, Mark Díaz, and Vinodkumar Prabhakaran. 2022. Dealing with disagreements: Looking beyond the majority vote in subjective annotations. Transactions of the Association for Computational Linguistics, 10:92–110. Sunipa Dev, Akshita Jha, Jaya Goyal, Dinesh Tewari, Shachi Dave, and Vinodkumar Prabhakaran. 2023. Building stereotype repositories with complementary approaches for scale and depth. In Proceedings of the First Workshop on Cross-Cultural Considerations in NLP (C3NLP), pages 84–90, Dubrovnik, Croatia. Association for Computational Linguistics. Sunipa Dev, Tao Li, Jeff M. Phillips, and Vivek Srikumar. 2020. On measuring and mitigating biased inferences of word embeddings. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05): 7659–7666. Sunipa Dev, Emily Sheng, Jieyu Zhao, Aubrie Amstutz, Jiao Sun, Yu Hou, Mattie Sanseverino, Jiin Kim, Akihiro Nishi, Nanyun Peng, and Kai-Wei Chang. 2022. On measures of biases and harms in NLP. In Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022, pages 246–267, Online only. Association for Computational Linguistics. Patrick Fernandes, Daniel Deutsch, Mara Finkelstein, Parker Riley, André Martins, Graham Neubig, Ankush Garg, Jonathan Clark, Markus Freitag, and 847 Orhan Firat. 2023. The devil is in the errors: Leveraging large language models for fine-grained machine translation evaluation. In Proceedings of the Eighth Conference on Machine Translation, pages 1066–1083, Singapore. Association for Computational Linguistics. Isabel O. Gallegos, Ryan A. Rossi, Joe Barrow, Md Mehrab Tanjim, Sungchul Kim, Franck Dernoncourt, Tong Yu, Ruiyi Zhang, and Nesreen K. Ahmed. 2023. Bias and fairness in large language models: A survey. Gemini Team Google. 2023. Gemini: A family of highly capable multimodal models. arXiv preprint arXiv:2312.11805. Amelia Glaese, Nat McAleese, Maja Trębacz, John Aslanides, Vlad Firoiu, Timo Ewalds, Maribeth Rauh, Laura Weidinger, Martin Chadwick, Phoebe Thacker, Lucy Campbell-Gillingham, Jonathan Uesato, Po-Sen Huang, Ramona Comanescu, Fan Yang, Abigail See, Sumanth Dathathri, Rory Greig, Charlie Chen, Doug Fritz, Jaume Sanchez Elias, Richard Green, Soňa Mokrá, Nicholas Fernando, Boxi Wu, Rachel Foley, Susannah Young, Iason Gabriel, William Isaac, John Mellor, Demis Hassabis, Koray Kavukcuoglu, Lisa Anne Hendricks, and Geoffrey Irving. 2022. Improving alignment of dialogue agents via targeted human judgements. Google. 2024a. Configure safety attributes | vertex ai | google cloud. Google. 2024b. Configure safety settings for the palm api | vertex ai | google cloud. Perry Hinton. 2017. Implicit stereotypes and the predictive brain: cognition and culture in“biased”person perception. Palgrave Communications, 3(1):1–9. Dirk Hovy and Diyi Yang. 2021. The importance of modeling social factors of language: Theory and practice. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 588–602, Online. Association for Computational Linguistics. Akshita Jha, Aida Mostafazadeh Davani, Chandan K Reddy, Shachi Dave, Vinodkumar Prabhakaran, and Sunipa Dev. 2023. SeeGULL: A stereotype benchmark with broad geo-cultural coverage leveraging generative models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 9851– 9870, Toronto, Canada. Association for Computational Linguistics. Akshita Jha, Vinodkumar Prabhakaran, Remi Denton, Sarah Laszlo, Shachi Dave, Rida Qadri, Chandan K. Reddy, and Sunipa Dev. 2024. Beyond the surface: A global-scale analysis of visual stereotypes in textto-image generation. Otto Klineberg. 1951. The scientific study of national stereotypes. International social science bulletin, 3(3):505–514. Alex Koch, Roland Imhoff, Ron Dotsch, Christian Unkelbach, and Hans Alves. 2016. The abc of stereotypes about groups: Agency/socioeconomic success, conservative–progressive beliefs, and communion. Journal of personality and social psychology, 110(5): 675. Vijit Malik, Sunipa Dev, Akihiro Nishi, Nanyun Peng, and Kai-Wei Chang. 2022. Socially aware bias measurements for Hindi language representations. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1041–1052, Seattle, United States. Association for Computational Linguistics. Mistral AI. 2024. Endpoints | mistral ai large language models. Mistral AI. 2024. Guardrailing | mistral ai large language models. Benjamin Muller, John Wieting, Jonathan Clark, Tom Kwiatkowski, Sebastian Ruder, Livio Soares, Roee Aharoni, Jonathan Herzig, and Xinyi Wang. 2023. Evaluating and modeling attribution for crosslingual question answering. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 144–157, Singapore. Association for Computational Linguistics. Moin Nadeem, Anna Bethke, and Siva Reddy. 2021. Stereoset: Measuring stereotypical bias in pretrained language models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5356–5371. Manish Nagireddy, Lamogha Chiazor, Moninder Singh, and Ioana Baldini. 2023. Socialstigmaqa: A benchmark to uncover stigma amplification in generative language models. Nikita Nangia, Clara Vania, Rasika Bhalerao, and Samuel Bowman. 2020. Crows-pairs: A challenge dataset for measuring social biases in masked language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1953–1967. Aurélie Névéol, Yoann Dupont, Julien Bezançon, and Karën Fort. 2022. French CrowS-pairs: Extension à une langue autre que l’anglais d’un corpus de mesure des biais sociétaux dans les modèles de langue masqués (French CrowS-pairs : Extending a challenge dataset for measuring social bias in masked language models to a language other than English). In Actes de la 29e Conférence sur le Traitement Automatique des Langues Naturelles. Volume 1 : conférence principale, pages 355–364, Avignon, France. ATALA. 848 OpenAI, :, Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, and Sam Altman et. al. 2023. Gpt-4 technical report. Alicia Parrish, Angelica Chen, Nikita Nangia, Vishakh Padmakumar, Jason Phang, Jana Thompson, Phu Mon Htut, and Samuel Bowman. 2022. Bbq: A hand-built bias benchmark for question answering. In Findings of the Association for Computational Linguistics: ACL 2022, pages 2086–2105. Vinodkumar Prabhakaran, Rida Qadri, and Ben Hutchinson. 2022. Cultural incongruencies in artificial intelligence. Kimberly A Quinn, C Neil Macrae, and Galen V Bodenhausen. 2007. Stereotyping and impression formation: How categorical thinking shapes person perception. 2007) The Sage Handbook of Social Psychology: Concise Student Edition. London: Sage Publications Ltd, pages 68–92. Nithya Sambasivan, Erin Arnesen, Ben Hutchinson, Tulsee Doshi, and Vinodkumar Prabhakaran. 2021. Re-imagining algorithmic fairness in india and beyond. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, FAccT ’21, page 315‒328, New York, NY, USA. Association for Computing Machinery. Renee Shelby, Shalaleh Rismani, Kathryn Henne, AJung Moon, Negar Rostamzadeh, Paul Nicholas, N’Mah Yilla-Akbari, Jess Gallegos, Andrew Smart, Emilio Garcia, et al. 2023. Sociotechnical harms of algorithmic systems: Scoping a taxonomy for harm reduction. In Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, pages 723– 741. Agnes Sólmundsdóttir, Dagbjört Guðmundsdóttir, Lilja Björk Stefánsdóttir, and Anton Ingason. 2022. Mean machine translations: On gender bias in Icelandic machine translations. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 3113–3121, Marseille, France. European Language Resources Association. Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, YaGuang Li, Hongrae Lee, Huaixiu Steven Zheng, Amin Ghafouri, Marcelo Menegali, Yanping Huang, Maxim Krikun, Dmitry Lepikhin, James Qin, Dehao Chen, Yuanzhong Xu, Zhifeng Chen, Adam Roberts, Maarten Bosma, Vincent Zhao, Yanqi Zhou, ChungChing Chang, Igor Krivokon, Will Rusch, Marc Pickett, Pranesh Srinivasan, Laichee Man, Kathleen Meier-Hellstern, Meredith Ringel Morris, Tulsee Doshi, Renelito Delos Santos, Toju Duke, Johnny Soraker, Ben Zevenbergen, Vinodkumar Prabhakaran, Mark Diaz, Ben Hutchinson, Kristen Olson, Alejandra Molina, Erin Hoffman-John, Josh Lee, Lora Aroyo, Ravi Rajakumar, Alena Butryna, Matthew Lamm, Viktoriya Kuzmina, Joe Fenton, Aaron Cohen, Rachel Bernstein, Ray Kurzweil, Blaise AgueraArcas, Claire Cui, Marian Croak, Ed Chi, and Quoc Le. 2022. Lamda: Language models for dialog applications. Aniket Vashishtha, Kabir Ahuja, and Sunayana Sitaram. 2023. On evaluating and mitigating gender biases in multilingual settings. In Findings of the Association for Computational Linguistics: ACL 2023, pages 307–318, Toronto, Canada. Association for Computational Linguistics. Zheng-Xin Yong, Cristina Menghini, and Stephen H. Bach. 2024. Low-resource languages jailbreak gpt4. Li Zhou, Laura Cabello, Yong Cao, and Daniel Hershcovich. 2023. Cross-cultural transfer learning for Chinese offensive language detection. In Proceedings of the First Workshop on Cross-Cultural Considerations in NLP (C3NLP), pages 8–15, Dubrovnik, Croatia. Association for Computational Linguistics. 849 A Appendix A.1 Dataset The dataset contains 25,861 annotated stereotypes across 23 pairs of languages and regions they are common in (Table 5), and is available online 6. The first two columns of Table 10 describes the languages, countries (of annotations), and the total annotations that are being released as part of this dataset. Since data disagreements are features of subjective data (Davani et al., 2022), we consider any associations with at least 1 annotation (of 3 annotators) as stereotype to be sufficient for the tuple to be included in the published dataset. The filtering of the data for usage is left to the user. The data card detailing intended usage, data collection and annotation, costs, etc. is also made available online 7. Lang. Lang. ISO code Country Country. ISO code French fr France FR German de Germany DE Japanese ja Japan JA Korean ko South Korea KR Turkish tr Turkey TR Portuguese pt Portugal PT Portuguese pt Brazil BR Spanish es Spain ES Spanish es Mexico MX Indonesian id Indonesia ID Vietnamese vi Vietnam VN Arabic ar UAE AE Malay ms Malaysia MY Thai th Thailand TH Italian it Italy IT Swahili sw Kenya KE Dutch nl Netherlands NL Bengali bn Bangladesh BD Bengali bn India IN Hindi hi India IN Marathi mr India IN Tamil ta India IN Telugu te India IN Table 5: Languages (with ISO codes) and the countries (with ISO codes) where we get them annotated. Table 6 shows the distribution of tuples across the nationality and regional axis. Of the 25,861 annotated tuples, 19,543 stereotypes have unique English translations (via Google Translate API). The differences arises due to the fact that we, by design, get a few tuples annotated in two different 6https://github.com/ google-research-datasets/SeeGULL-Multilingual 7https://github.com/ google-research-datasets/SeeGULL-Multilingual/ blob/main/SeeGULL_Multilingual_Data_Card.pdf countries speaking the same language (section 3 and A.6). Finally, stereotypes having different gender based identity terms but with same attributes (e.g (mauritana, árabe) and (mauritanos, árabe)) are back-translated to English in exact same way and are thus counted as such. Axis # All Stereotypes # Unique Stereotypes # identities Nationality 14,960 9,251 492 Regional 10,901 10,292 698 Total 25,861 19,543 1,190 Table 6: Distribution of number of unique stereotypes and identities across nationality and regional axis. For the nationality axis, the 492 identities/demonyms map to 179 unique international countries. A.2 Related Stereotype Resources Stereotype resources are essential for generative model evaluations, and a large body of work pushes to increase the overall coverage of these resources (Nadeem et al., 2021; Nangia et al., 2020; Jha et al., 2023). These resources help significantly bolster model safeguards (Nagireddy et al., 2023; Bai et al., 2023; Jha et al., 2024). Thus, it is imperative that the resources cover global identities, to enable models across modalities and languages to be safe and beneficial for all. There have been attempts to increase these resources across languages (Névéol et al., 2022; Sólmundsdóttir et al., 2022; Vashishtha et al., 2023), and cultures (Bhatt et al., 2022; Dev et al., 2023). However, due to the cost of curating, these resources are often limited in both size, and global coverage. In this work, we address these challenges by leveraging social information captured and generated by multilingual models and globally situated annotations. A.3 Annotation Details We get annotations from humans for two different task. The first task, called Stereotype Annotation is used to determine if an (identity, attribute) tuple is considered as stereotypical or not. The second task, Offensive Annotation is for determining the offensiveness of a given attribute in a particular country. All annotations were procured through a partner vendor who handled the recruitment, obtained informed consent, and provided clean, anonymous ratings within each task. Annotators were recruited such that they used the target language and resided in the country of 850 popular usage. They were also selected such that every data point was annotated by at least one nonmale identifying person. Annotators were paid above prevalent market rates and respecting local minimum wage laws in respective countries. A.3.1 Task: Stereotype Annotation Annotators: Since we are documenting stereotypes in a multilingual setting, our annotation tasks are in different target languages depending on the country the task is being conducted in. Annotators are proficient in English reading and writing, as well as in the target language that the data is in. Moreover, since we have regional identities corresponding to each country, we ensure that annotators are people living in that country. Finally, as described in section 3, for the languages of Spanish, Portuguese, and Bengali, we perform human annotations separately in two different countries each. Labels: The annotators had the options to select between three labels. • Non-Stereotype (NS) • Stereotype (S) • Unsure (U) (with justification) The label Unsure captures the case where either the annotator does not have full confidence, or if the tuple is factual in nature. A.3.2 Task: Offensiveness Annotation Annotators: The offensiveness annotation task is also in different target languages per dataset. Annotators are proficient in English reading and writing, as well as in the target language that the data is in. The countries that the raters are from is also different per data segment (for example, Spanish users in Mexico, Spanish users in Spain, Bengali users in Bangladesh, etc.). Each data point (attribute term) is rated by 3 annotators of diverse gender identities. Labels: The annotators need to label each attribute as one of the following on a Likert scale. • Not Offensive (NO) (-1): if the attribute is not perceived as offensive at all, or is positive or neutral such as ’good’, ’okay’, etc in English. Consider similar terms in the language the data is in for this task. • Slightly Offensive (O1) (+1) • Somewhat Offensive (O2) (+2) • Moderately Offensive (O3) (+3) • Extremely Offensive (O4) (+4): if the attribute is perceived as extremely offensive when associated with the identity (like female, Russian, Asian, Jew, etc.) of a person such as attributes implying criminal behavior and activities, such as ’criminal’, ’terrorist’, etc in English. Consider similar terms in the language the data is in for this task. • Unsure (with justification) (U) (0): if the annotator is not sure about if the attribute is offensive. The answers can vary from Extremely offensive to Not offensive. The integers from (-1) to (+4) are used for calculating the mean offensiveness of an attribute and are not visible to the annotators. A.4 Offensiveness For all the stereotypes in SeeGULL Multilingual, we also get the offensive annotations of the corresponding attributes on Likert scale (A.3.2). For all the attributes, we average out the offensiveness annotations by the three annotators and call it the ”mean offensiveness” score. Table 7 shows the percentage of stereotypes that are annotated as ”Somewhat offensive (O2)” or higher, per language and country. Lang. (Country) # Stereotypes w/ MO >= 2 % Stereotypes w/ MO >= 2 it (Italy) 223 22.62% sw (Kenya) 213 22.07% es (Spain) 179 13.32% th (Thailand) 116 12.03% ar (UAE) 86 10.78% pt (Brazil) 180 8.65% es (Mexico) 142 8.14% ja (Japan) 71 8.05% id (Indonesia) 91 7.98% de (Germany) 72 6.94% ms (Malaysia) 88 6.83% bn (India) 57 6.14% vi (Vietnam) 47 6.01% pt (Portuguese) 91 5.99% fr (France) 60 4.85% tr (Turkey) 40 3.92% te (India) 10 3.68% nl (Netherlands) 45 3.65% mr (India) 38 3.17% ta (India) 43 3.1% bn (Bangladesh) 36 2.82% ko (South Korea) 23 2.66% hi (India) 14 1.83% Table 7: Percentage of stereotypes with mean offensive (MO) score >=2, i.e with a rating of ”somewhat offensive” or more. Finally, stereotypes in SeeGULL Multilingual can be thought of either belonging having a nationality based demonym or a regional (within a country) based demonym. For all the nationality based demonyms in SGE, we group them based on their 851 corresponding countries and get an average of offensiveness scores associated with them. Table 8 shows the top 20 countries/regions which have the most offensive stereotypes associated with them. Similarly, the table 9 lists out the countries having the least offensive stereotypes associated with them. Country Mean MO # Stereotypes Albania 2.09 33 Rwanda 1.99 46 Iraq 1.54 70 Colombia 1.50 140 Somalia 1.18 76 Afghanistan 1.07 121 Nigeria 1.05 59 Serbia 0.95 142 South Sudan 0.84 66 North Korea 0.78 370 Northern Ireland 0.73 123 Ireland 0.66 141 Syria 0.65 116 Romania 0.53 55 Crimea 0.43 61 Pakistan 0.41 74 South Africa 0.40 54 Palestine 0.39 181 Algeria 0.33 55 Israel 0.32 76 Table 8: Top 20 countries (or geographical regions) having the highest mean offensive scores associated with them. The higher the number, the more offensive stereotypes are associated. Please note: we have filter out any countries having fewer than 30 stereotypes from this analysis. A.5 Overlap with English SeeGULL SeeGULL Multilingual dataset contain a total of 25,861 stereotypes out of which a total of 2370 stereotypes (949 unique stereotypes) were overlapping with SGE. Thus, about 5% of unique stereotypes in SeeGULL Multilingual overlap with SGE. The Table 10 shows the overlap of SGE with SeeGULL Multilingual corresponding to each of the 23 language and country combinations. A.6 Stereotypes in a Language across Countries A few languages are spoken across different countries in the world. These countries, that may share the same language, due to different socio-cultural backgrounds, can have a different notions of what is considered a stereotype. Table 11 quantitatively demonstrates how much annotations vary across countries Country Mean MO # Stereotypes Singapore -0.94 138 Canada -0.91 63 Maldives -0.91 134 Seychelles -0.90 75 South Korea -0.87 72 Slovakia -0.87 40 New Zealand -0.86 57 Japan -0.86 274 Nepal -0.85 321 Kenya -0.85 139 Switzerland -0.85 281 Uruguay -0.84 135 Bhutan -0.83 102 Bermuda -0.83 52 Slovenia -0.83 62 Gibraltar -0.82 67 Denmark -0.81 144 Greece -0.80 296 Armenia -0.80 43 Lebanon -0.79 36 Table 9: Top 20 countries having the lowest mean offensive scores associated with them. The higher the number, the more offensive stereotypes are associated. Please note: we have filter out any countries having fewer than 30 stereotypes from this analysis. A.7 Foundation Model Evaluations A.7.1 Creating the Evaluation set To create the evaluation set, we create a balanced sample across country, language, and regional or international demonyms. Within each bucket, we take all attributes (e.g., orderly) where we could also create three distractor demonyms that do not also share an association with that same attribute. From there, we first sample attributes, then sample from potential distractor demonyms for that attribute. We randomize the demonyms to form a question item. To encode each question item into a prompt, we first substitute the attribute (in the target language) into the English instruction prefix. Then, we separately translate the prefix into the target language, as well as a suffix instruction. Finally, we take those translations and merge them with the SeeGULL Multilingual demonyms (which are already in the target language) into the prompt for the evaluation set. We create parallel English-language prompts using the same sample of question items. To encode questions into English prompts, we use the same instructions and process but without translation, using the English demonyms and attributes from the SeeGULL Multilingual dataset. 852 Lang. (Country) Total Annotations # SGE matched % SGE matched es (Spain) 1344 178 13.24% pt (Portugal) 1520 199 13.09% te (India) 272 35 12.86% it (Italy) 986 121 12.27% es (Mexico) 1745 203 11.63% ja (Japan) 882 98 11.11% pt (Brazil) 2082 209 10.03% ko (South Korea) 864 86 9.95% fr (France) 1238 115 9.28% de (Germany) 1037 95 9.16% ar (UAE) 943 84 8.90% vi (Vietnam) 782 67 8.56% tr (Turkey) 1021 84 8.22% ms (Malaysia) 1288 103 7.99% id (Indonesia) 1141 91 7.97% bn (India) 929 74 7.96% sw (Kenya) 965 76 7.87% nl (Netherlands) 1233 97 7.86% bn (Bangladesh) 1276 95 7.44% th (Thailand) 964 68 7.05% mr (India) 1197 84 7.01% hi (Hindi) 763 41 5.37% ta (Tamil) 1389 67 4.82% Table 10: Per language overlap between SGE(SeeGULL English (Jha et al., 2023) and SeeGULL Multilingual. A.7.2 Multilingual capabilities of Models Foundation models have varying multilingual capabilities across languages. For example, the underlying PaLM 2 language model was trained on hundreds of languages (Anil et al., 2023) and Gemini was trained to support a range of multilingual capabilities (Gemini Team Google, 2023). Mixtral supports English, French, German, Italian, and Spanish (Mistral AI, 2024), while GPT systems are primarily built using English data only (OpenAI et al., 2023). We evaluate all foundation models on all languages included in SeeGULL Multilingual. A.7.3 Evaluation protocol In order to demonstrate that SeeGULL Multilingual can be used for improving foundation models, we run inference without additional safety guardrails or mitigation layers that are typically used by downstream application developers. Mistral (Mistral AI, 2024) and Gemini (Google, 2024a) provide configurable safety guardrails which we disable, and PaLM 2 includes metadata about safety with responses (Google, 2024b) which we do not consider. GPT models do not support configurable safety through the API. We run inference for evaluations through public APIs for four families of foundation models. We draw one sample from each model with temperature=0. All system versions were fixed, and inference was run during January and February 2024. Each system was queried with temperature=0.0. Model version are show in Table 12. Model response styles varied by foundation model, even with unambiguous and consistent instructions. To score responses, we use a heuristic to parse decoded text, and considered the model to endorse the stereotype if it produced text a) used the format as instructed and produced the letter of the stereotypical association, b) instead generated the exact word of the stereotypical association, c) produced text containing only the letter of the stereotypical association formatted as instructed, but with other additional text, and d) all formatted letter choices, repeating one letter choice twice. A.8 Regional Demonyms There is no single place containing regional demonyms for all the countries of the world. We source the regional demonyms online from the following sources followed by manual validation. France: • https://en.wikipedia.org/wiki/Regions_of_ France • https://en.wiktionary.org/wiki/Category: fr:Demonyms • https://en.wiktionary.org/wiki/Appendix: French_demonyms Germany: • https://en.wikipedia.org/wiki/List_ of_adjectival_and_demonymic_forms_of_ place_names#Federated_states_and_other_ territories_of_Germany Japan: • https://en.wikipedia.org/wiki/List_of_ regions_of_Japan • Since no particular demonym are found, we default to ”People from [name of the region]”. South Korea: • https://en.wikipedia.org/wiki/Provinces_ of_South_Korea • Since no particular demonym are found, we default to ”People from [name of the region]”. Bangladesh: • https://en.wikipedia.org/wiki/List_of_ adjectival_and_demonymic_forms_of_place_ names#Bangladeshi_divisions Turkey: • https://en.wikipedia.org/wiki/Provinces_ of_Turkey • https://en.wiktionary.org/wiki/Category: tr:Demonyms Portugal: • https://pt.wikipedia.org/wiki/Lista_de_ gent%C3%ADlicos_de_Portugal • http://www.portaldalinguaportuguesa.org/ index.php?action=gentilicos Brazil: 853 Spain Mexico Portugal Brazil India Bangladesh Language Spanish Portuguese Bengali # candidate associations annotated 1229 1138 650 % Stereotype >= 1 65.8% 89.6% 79.7% 98.0% 67.5% 97.5% % Stereotype >= 2 31.0% 35.2% 45.4% 74.5% 35.6% 87.5% % Stereotype >= 3 11.6% 9.6% 21.9% 27.7% 10.3% 44.3% Table 11: Annotation differences for the same language across two different countries. Table 12: Inference details for each foundation model Model Version API parameters PaLM 2 text-bison-001 no filtering GPT-4 Turbo gpt-4-1106-preview no sys. instructions Gemini Pro gemini-pro no filtering Mixtral 8X7B mistral-small no prompting • https://en.wikipedia.org/wiki/List_of_ adjectival_and_demonymic_forms_of_place_ names#Brazilian_states Spain: • https://en.wikipedia.org/wiki/Autonomous_ communities_of_Spain • https://en.wiktionary.org/wiki/Category: es:Demonyms Mexico: • https://en.wikipedia.org/wiki/List_of_ adjectival_and_demonymic_forms_of_place_ names#States_of_Mexico Indonesia: • https://en.wikipedia.org/wiki/Javanese_ people • https://www.dictionary.com/browse/ sumatran • https://en.wikipedia.org/wiki/Sundanese_ people# • https://en.wikipedia.org/wiki/Western_ New_Guinea • https://en.wikipedia.org/wiki/Moluccans# • https://en.wiktionary.org/wiki/Sulawesian Vietnam: • https://en.wikipedia.org/wiki/List_of_ regions_of_Vietnam • Since no particular demonym are found, we default to ”People from [name of the region]”. United Arab Emirates (UAE): • https://en.wikipedia.org/wiki/Emirate_of_ Abu_Dhabi • https://en.wikipedia.org/wiki/Emirate_of_ Ajman • https://en.wikipedia.org/wiki/Emirate_of_ Dubai • https://en.wikipedia.org/wiki/Emirate_of_ Sharjah Malaysia: • https://en.wikipedia.org/wiki/List_of_ adjectival_and_demonymic_forms_of_place_ names#Malaysian_states_and_territories Thailand: • No particular demonym, defaulted to ”People from [name of the region]”. Italy: • https://en.wikipedia.org/wiki/Regions_of_ Italy India: • https://en.wikipedia.org/wiki/List_of_ adjectival_and_demonymic_forms_of_place_ names#Indian_states_and_territories Kenya: • No particular demonym, defaulted to ”People from [name of the region]”. Netherlands: • https://nl.wiktionary.org/w/index. php?title=Categorie:Demoniem_in_het_ Nederlands&from=F A.9 Licenses of models and data used The data (SGE) was released with CC-BY-4.0 licence 8 which permits its usage for research purposes. The intended usage guidelines of the different models were adhered to 9 10 11. We abide by the terms of use of any models used in this paper. 8https://github.com/google-research-datasets/seegull/ tree/main?tab=CC-BY-4.0-1-ov-filereadme 9https://mistral.ai/terms-of-service/ 10https://ai.google.dev/terms 11https://openai.com/policies/business-terms 854
2024_short
75
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 855–869 August 11-16, 2024 ©2024 Association for Computational Linguistics Getting Serious about Humor: Crafting Humor Datasets with Unfunny Large Language Models Zachary Horvitz1,*, Jingru Chen1,*, Rahul Aditya1, Harshvardhan Srivastava1, Robert West2, Zhou Yu1, Kathleen McKeown1 1Columbia University, 2EPFL {zfh2000, jc5898, ra3261, hs3447, zy2461}@columbia.edu [email protected], [email protected] Abstract Humor is a fundamental facet of human cognition and interaction. Yet, despite recent advances in natural language processing, humor detection remains a challenging task that is complicated by the scarcity of datasets that pair humorous texts with similar non-humorous counterparts. We investigate whether large language models (LLMs) can generate synthetic data for humor detection via editing texts. We benchmark LLMs on an existing human dataset and show that current LLMs display an impressive ability to “unfun” jokes, as judged by humans and as measured on the downstream task of humor detection. We extend our approach to a code-mixed English-Hindi humor dataset where we find that GPT-4’s synthetic data is highly rated by bilingual annotators and provides challenging adversarial examples for humor classifiers. 1 Introduction Despite their success on natural language tasks, large language models (LLMs) struggle to reliably detect and explain humor (Baranov et al., 2023; Góes et al.; Hessel et al., 2023), and generate novel jokes (Jentzsch and Kersting, 2023). Notably, humans also struggle to write jokes; even at satirical newspapers like The Onion, less than 3% of proposed headlines are printed (West and Horvitz, 2019; Glass, 2008). In contrast, humans are able to consistently edit jokes to unfun them, an insight which motivated West and Horvitz (2019) to host a game where internet users competed to edit satirical headlines to make them serious. The resulting dataset, the Unfun Corpus (West and Horvitz, 2019), has been a valuable tool for advancing computational humor research. The dataset has been used to study properties of both humor and transformer architectures (West and Horvitz, 2019; Peyrard et al., 2021) and even to generate *Equal contribution. Figure 1: Outputs from GPT-4. We leverage language models to edit away (or "unfun") humor in existing human-written jokes, resulting in aligned datasets that pair humorous texts with non-humorous counterparts. novel satire (Horvitz et al., 2020). Additionally, recent work has found that despite the relatively small size of the original dataset, humor detection models trained on Unfun data generalize remarkably well to other datasets, while models trained on other humor datasets perform poorly at classifying Unfun-edited data (Baranov et al., 2023). While useful contributions, Unfun and other aligned humor datasets (Hossain et al., 2019, 2020) are limited in both size and scope, due to their reliance on human annotation. We investigate the alternative of using LLMs to create datasets of aligned humorous and non-humorous texts.1 Previous work (Jentzsch and Kersting, 2023; Li et al., 2023; Veselovsky et al., 2023) has found that LLMs are limited in their ability to create synthetic humor. We take a new approach, exploiting the asymmetrical difficulty (Josifoski et al., 2023) of synthetic humor generation. Rather than only testing whether LLMs can generate humor, we explore their ability to edit away humor in existing jokes. Validating and harnessing this capability could provide large 1Our code and datasets are available at https://github. com/zacharyhorvitz/Getting-Serious-With-LLMs. 1 855 paired datasets and support future work on improving humor detection and even generation. Our contributions include benchmarking against human-curated data in the Unfun corpus, where we find that LLMs like GPT-4 and GPT-3.5 (OpenAI, 2023, 2022) can (1) outperform humans at removing humor from texts and that (2) this ability can be harnessed to generate high quality synthetic data for training humor classifiers. While these models can also be prompted to modify unfunny headlines to craft satire, we find that this ability is more inconsistent and lags behind satirical writers. Finally, we consider a code-mixed English-Hindi humor dataset to evaluate whether GPT-4’s “unfunning” ability generalizes to other domains and languages. We find that the resulting synthetic unfunny dataset is rated highly by bilingual annotators and poses challenging adversarial data for models trained on the original corpus. 2 Getting Serious with Language Models We first revisit the Unfun task and resulting dataset, but with language models as players. 2.1 Unfun Dataset In the original Unfun game (West and Horvitz, 2019), players were tasked with editing existing satirical headlines from The Onion,2 to transform the original satire into corresponding serious headlines. For example (removing “Delicious”): “Scientists Discover Delicious New Species" Players were rewarded for preserving token-level similarity with the original satire and for crafting convincingly serious headlines that other players rated as real. The resulting dataset includes approximately 11K unfunned headlines, with a subset rated by players. We leverage Unfun pairs, of satirical headlines and their unfunned counterparts, to benchmark the performance of LLMs at editing humorous texts against humans. We include additional details on data preparation in Appendix A.1.1. 2.2 Unfun Generation We consider a few-shot setting (Brown et al., 2020), and provide LLMs with a short task description, along with a set of input-output exemplar pairs: (humorous text, serious text). Following Veselovsky et al. (2023), we encourage diversity in our synthetic data by sampling these exemplars from a 2https://www.theonion.com/ subset of the existing pairs rated as high-quality by the original human players. For the unfunning task, we consider four popular LLMs: GPT-4 (OpenAI, 2023) and GPT-3.5-TURBO, along with MISTRAL-7B-INSTRUCT and MISTRAL-7B (Jiang et al., 2023). We also consider a lightweight alternative approach, ROBERTA-SWAP, that replaces low probability tokens using predictions from a ROBERTA masked language model (Liu et al., 2019). This approach is motivated by the Incongruity Theory of Humor (Hutcheson, 1750; Morreall, 2023), which associates humor with surprise, and previous work that has found humorous headlines to have higher perplexities (Peyrard et al., 2021). ROBERTA-SWAP edits satirical headlines by iteratively performing token swaps at k positions. At each selected position, the original token is replaced with the highest probability token predicted by the model at that masked time-step. The k swap positions are selected using the ratio between the probability of the original token and the probability assigned to the language model’s prediction. Additional details on unfun generation are included in Appendix A.2.1. 3 Unfun Evaluation 3.1 Experimental Setup The existing Unfun data enables comparison of human and LLM players, via both automatic and human evaluations. We first evaluate the quality of synthetically generated data through automated evaluation on the downstream task of Unfun detection, and then follow this with a human evaluation. 3.1.1 Automatic Evaluations First, following recent work on synthetic data (Li et al., 2023; Veselovsky et al., 2023) we evaluate the data quality of outputs from LLMs by testing whether binary humor classifiers trained on the synthetic outputs can differentiate between actual humorous and unfunned headlines from the original Unfun dataset. We compare training on data from human players and actual satirical headlines to two configurations of synthetic data: [Synthetic unfun; Original satire] [Human unfun; Synthetic satire] These two configurations enable comparing the “unfunning” and joke writing capabilities of LLMs. Additionally, we consider the alternative of using actual unrelated news headlines as non-humorous examples. Using data from each approach, we 2 856 Data Characteristics Holdout Accuracy Direction Source Diversity (TTR) Edit Dist MISTRAL ROBERTA Unfun ROBERTA-SWAP 0.262 2.7 69.9 (0.9) 62.7 (0.7) MISTRAL 0.257 2.1 70.7 (0.7) 61.7 (0.3) MISTRAL INSTRUCT 0.255 2.4 70.9 (0.7) 64.7 (0.5) GPT-3.5 0.259 4.5 72.9 (0.2) 65.9 (0.4) GPT-4 0.252 3.8 76.5 (0.2) 69.9 (0.5) News Headlines 0.306 66.3 (0.2) 64.1 (0.2) Unfun Players 0.271 2.9 80.3 (0.5) 72.7 (0.4) Humor MISTRAL 0.244 2.8 66.3 (0.7) 56.3 (0.4) MISTRAL INSTRUCT 0.221 4.5 65.2 (0.8) 58.8 (0.4) GPT-3.5 0.24 4.6 69.9 (0.5) 58.7 (0.4) GPT-4 0.246 5.5 69.5 (0.7) 59.7 (0.6) The Onion 0.262 Table 1: Automatic evaluations of synthetic Unfun data. We consider the two directions of editing away (Unfun) and editing in humor (Humor). We report median accuracies (and standard error) on a balanced holdout set (n = 750) over 5 seeds when fine-tuning MISTRAL (Jiang et al., 2023) and ROBERTA (Liu et al., 2019) humor classifiers. Direction Source Rated Real Slightly Funny / Funny Grammatical Coherence Unfun ROBERTA-SWAP 30% 15% / 5% 93% 86% MISTRAL INSTRUCT 21% 50% / 14% 100% 96% GPT-3.5 51% 23% / 3% 100% 98% GPT-4 49% 21% / 3% 100% 99% News Headlines 81% 2% / 0% 99% 93% Human Players 33% 21% / 7% 94% 92% Humor MISTRAL INSTRUCT 21% 34% / 9% 99% 93% GPT-3.5 11% 54% / 8% 100% 94% GPT-4 10% 45% / 10% 100% 98% The Onion 4% 68% / 24% 99% 97% Table 2: Human evaluations of synthetic Unfun data. We consider n = 100 samples per approach. We collect three annotations per example and assign labels by majority agreement. fine-tune ROBERTA and MISTRAL-7B for humor classification. Our test set comprises a subset of headline pairs from the Unfun corpus that were highly rated in the original game. Additional evaluation details are provided in Appendix A.4. 3.1.2 Human evaluations To perform our human evaluations, we recruited 10 university students as annotators, all of whom were American and native English speakers. Annotators were tasked with rating headlines as real/satire/neither. In the case of the “satire” label, we also task the annotators with rating funniness ([0 = not funny, 1 = slightly humorous, 2 = funny]). If the annotator selects “neither”, we ask them to rate the headline’s grammaticality ({0, 1}) and coherence ({0, 1}). We gather three annotations for each sample and assign labels based on majority vote. We include additional information on our human evaluations and annotation scheme in Appendix A.3 and C.1 3.2 Results Automatic Evaluations Table 1 contains the automatic evaluations on the Unfun corpus. Notably, when validated on human data, humor classifiers trained on GPT-4’s synthetic unfun data are very performant, incurring the smallest accuracy drop relative to human-edited training data (∆Mistral = −3.8% and ∆RoBERTa = −2.8%). In contrast, classifiers trained with real news head3 857 Source Edit Dist Humor Coherence Non-Humor 16.8% 92.8% GPT-4 Unfuns 6.6 16.0% 93.6% + GPT-4 Filter 6.9 3.6% 89.3% Humor 48.0% 93.6% Table 3: Human evaluations and edit distance of original and synthetic English-Hindi Tweet data (Khandelwal et al., 2018). n = 125 per approach. lines as unfunny data perform poorly, highlighting the importance of aligned data for this task. However, we find that not all aligned data is created equal, and that classifiers perform significantly worse when trained on synthetic humor data relative to human-edited data (∆< −10%). Even data from our ROBERTA-SWAP unfun baseline dramatically outperforms, or is on par with, all synthetic humor approaches. The edit distances demonstrate that each approach retains a large portion of the original humorous text. However, GPT-4 and GPT3.5 tend to modify headlines more than human players (3.8 and 4.5 vs 2.9). Human Evaluations Table 2 displays the results from our human evaluations. All approaches for generating synthetic humor significantly underperform Onion headlines on funniness and realness ratings (p < 0.05). Notably, we do not observe a significant improvement between GPT-3.5 and GPT4. In contrast, synthetic unfuns from both GPT3.5 and GPT-4 were significantly more likely than human unfuns to be rated as real news headlines. They were also rated as similarly unfunny and more grammatical and coherent. Surprisingly, our simple ROBERTA-SWAP approach also performed comparably with Unfun players on funniness and real headline metrics, but underperformed on coherence. Together, these results indicate that current LM-based methods underperform satirical writers on humor generation, but can outperform human crowd-workers at editing away humor in satire to craft aligned datasets. 4 Extending Unfun to Other Languages Recent work has found that GPT-4 exhibits strong multilingual capabilities (Møller et al., 2023; Jiao et al., 2023; Ahuja et al., 2023). Motivated by these findings, we investigate whether its ability to edit away humor generalizes to other languages and forms of joke. 4.1 Experimental Setup We consider an existing corpus of code-mixed English-Hindi tweets, previously annotated as humorous or non-humorous (Khandelwal et al., 2018). Here, we prompt GPT-4 to unfun humorous tweets. To remove low quality results, we secondarily filter outputs that GPT-4 still classifies as humorous. We provide additional details on dataset preparation in Appendix A.1.2 and English-Hindi unfun generation in A.2. We perform a human evaluation with bilingual annotators who rated these unfunned outputs from GPT-4 alongside samples from the original dataset. We also run an automatic evaluation, testing the performance of humor classifiers trained with different proportions of synthetic non-humorous data. We evaluate on holdout synthetic data rated by the annotators as coherent and successfully nonhumorous. For the humor classifier, we fine-tune an XLM-ROBERTA model (Conneau et al., 2020) previously fine-tuned on English-Hindi Twitter data (Nayak and Joshi, 2022). 4.2 Results Tables 3 and 4 contain the human evaluations and automatic results for English-Hindi data. GPT4 edited texts were rated comparably to nonhumorous human tweets despite being derived from humorous tweets, which were rated as humorous by our annotators (48%) of the time. Filtering with GPT-4 yielded a smaller sample (56/125) that was rated as much less humorous (3.6%). These results demonstrate that GPT-4 is able to reliably unfun English-Hindi tweets, but with more edits than American satirical headlines (6.6 vs 3.8). Additionally, unfunned data can provide a challenging adversarial dataset. In Table 4 we evaluate the performance of humor classifiers on human-vetted unfunned data. When trained on the original dataset, the classifier fails to generalize to the unfunned samples and performs poorly (23% accuracy). Incorporating synthetic training data improves this metric at a cost to accuracy on humorous examples in the original dataset. Together, these results provide evidence that the humor classifier relies on superficial features to identify humorous text, and that, even with fine-tuning, the model struggles to recognize synthetic unfunny data. 4 858 Original Dataset Source Unfuns Balanced Accuracy Humor Non-Humor Original 22.6 (3.7) 67.9 (0.9) 80.3 (3.5) 56.9 (5.1) (25%) Synth Unfuns 34.0 (8.4) 67.7 (1.7) 78.4 (3.3) 55.4 (5.9) (50%) Synth Unfuns 57.7 (6.0) 62.1 (0.6) 68.4 (5.7) 55.9 (4.7) Table 4: Automatic evaluations with English-Hindi synthetic data. We report median accuracies (and standard error) on a holdout set from the original dataset (n = 591) and the human-vetted unfuns (n = 97). We also report median class-level accuracies for the original dataset. 5 Discussion Our results indicate that current LLMs struggle to generate humor, but can outperform crowd-workers at editing away (or unfunning) humor. We hypothesize that maximum likelihood training, combined with autoregressive sampling techniques, does not endow models with the creative spark required for joke writing, and instead lends itself to making high probability, reasonable substitutions to replace incongruous twists. Our evaluations on code-mixed English Hindi Twitter data indicate that, for GPT4, this ability can impressively generalize to other languages and settings to create novel Unfun-like datasets. We are excited for future work that harnesses this capability and resulting data to improve humor detection and generation systems, and also to demystify fundamental properties of humor. 6 Limitations We consider two settings, English satirical headlines and code-mixed English-Hindi tweets. Humor practices and references vary by culture (Alden et al., 1993; Jiang et al., 2019), and we leave investigating cultural impacts on LLMs and humor to future work. In both of our evaluations, the subjectivity of humor presents a challenge for our evaluations (Warren et al., 2021). We see evidence of this in Table 3, where only 48% of tweets previously annotated as humorous were also rated as humorous by our annotators, and where 16% of nonhumorous tweets were rated as humorous. This likely reflects differences in background knowledge and context between annotators. Additionally, we note that human Unfun players were incentivized to perform minimal edits, which may have affected their human evaluation metrics and lowered edit distances. On average, however, GPT-4 performs less than one additional word edit, and several approaches, including ROBERTA-SWAP, were performant with lower edit distances than human players. Another concern is data contamination (Sainz et al., 2023), and that a portion of the text from the Unfun corpus could have been trained on and memorized by the LLMs we evaluated. We investigate this concern in Appendix A.6. We note that our results on English-Hindi data show that GPT-4’s abilities generalize to a dataset where these pairs do not already exist on the internet. 7 Ethical Statement Humor brings joy to people and plays a critical role in building and maintaining social relationships (Basso, 1979). However, its importance presents a double-edged sword; offensive and hurtful humor can cause real harms, and reinforce prejudice (Benatar, 1999). As a result, with their widespread adoption, it will be paramount for AI systems to be more capable of identifying and appropriately navigating jokes. We believe that our work on benchmarking LLM humor abilities and building challenging detection datasets is an important step in this direction. However, one possible concern is that malicious actors could leverage our unfunning approach to circumvent existing safeguards. In our experimentation, we found numerous settings where GPT-4 refused to generate jokes for offensive topics, but had no trouble editing texts to remove humor and offensiveness. This could enable building large parallel datasets of (offensivetext, non-offensive counterparts) that could then be used to train models for offensive joke generation. Acknowledgements We would like to thank Eric Horvitz for guidance that helped shape the direction of this work. We are also grateful to Nicholas Deas, Debasmita Bhattacharya, and Maximillian Chen for their feedback. Additionally, we would like to extend our gratitude to Amith Ananthram, Samir Gadre, Fei-Tzin Lee, Matthew Toles, Elsbeth Turcan, Melanie Sub5 859 biah, Emily Allaway, Tymon Nieduzak, Rattandeep Singh, Prabhpreet Singh Sodhi, and Apoorva Joshi for support on human evaluations. References Kabir Ahuja, Rishav Hada, Millicent Ochieng, Prachi Jain, Harshita Diddee, Krithika Ramesh, Samuel C. Maina, Tanuja Ganu, Sameer Segal, Maxamed Axmed, Kalika Bali, and Sunayana Sitaram. 2023. Mega: Multilingual evaluation of generative ai. ArXiv, abs/2303.12528. Dana L. Alden, Wayne D. Hoyer, and Chol Lee. 1993. Identifying global and culture-specific dimensions of humor in advertising: A multinational analysis. Journal of Marketing, 57:64 – 75. Alexander Baranov, Vladimir Kniazhevsky, and Pavel Braslavski. 2023. You told me that joke twice: A systematic investigation of transferability and robustness of humor detection models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 13701–13715, Singapore. Association for Computational Linguistics. K.H. Basso. 1979. Portraits of ’the Whiteman’: Linguistic Play and Cultural Symbols among the Western Apache. Cambridge University Press. David Benatar. 1999. Prejudice in jest: When racial and gender humor harms. Public Affairs Quarterly, 13(2):191–203. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. 2023. Qlora: Efficient finetuning of quantized llms. Ira Glass. 2008. Tough room. Fabrício Góes, Piotr Sawicki, Marek Grze´s, Daniel Brown, and Marco Volpe. Is gpt-4 good enough to evaluate jokes? Jack Hessel, Ana Marasovic, Jena D. Hwang, Lillian Lee, Jeff Da, Rowan Zellers, Robert Mankoff, and Yejin Choi. 2023. Do androids laugh at electric sheep? humor “understanding” benchmarks from the new yorker caption contest. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 688–714, Toronto, Canada. Association for Computational Linguistics. Zachary Horvitz, Nam Do, and Michael L. Littman. 2020. Context-driven satirical news generation. In Proceedings of the Second Workshop on Figurative Language Processing, pages 40–50, Online. Association for Computational Linguistics. Nabil Hossain, John Krumm, and Michael Gamon. 2019. “president vows to cut <taxes> hair”: Dataset and analysis of creative text editing for humorous headlines. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 133–142, Minneapolis, Minnesota. Association for Computational Linguistics. Nabil Hossain, John Krumm, Tanvir Sajed, and Henry Kautz. 2020. Stimulating creativity with funlines: A case study of humor generation in headlines. F. Hutcheson. 1750. Reflections Upon Laughter: And Remarks Upon the Fable of the Bees. Garland Publishing. Sophie Jentzsch and Kristian Kersting. 2023. Chatgpt is fun, but it is not funny! humor is still challenging large language models. Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. 2023. Mistral 7b. Tonglin Jiang, Hao Li, and Yubo Hou. 2019. Cultural differences in humor perception, usage, and implications. Frontiers in Psychology, 10. Wenxiang Jiao, Wenxuan Wang, Jen tse Huang, Xing Wang, and Zhaopeng Tu. 2023. Is chatgpt a good translator? yes with gpt-4 as the engine. Martin Josifoski, Marija Sakota, Maxime Peyrard, and Robert West. 2023. Exploiting asymmetry for synthetic training data generation: SynthIE and the case of information extraction. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 1555–1574, Singapore. Association for Computational Linguistics. Ankush Khandelwal, Sahil Swami, Syed S. Akhtar, and Manish Shrivastava. 2018. Humor detection in english-hindi code-mixed social media content : Corpus and baseline system. 6 860 Zhuoyan Li, Hangxiao Zhu, Zhuoran Lu, and Ming Yin. 2023. Synthetic data generation with large language models for text classification: Potential and limitations. ArXiv, abs/2310.07849. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. Anders Giovanni Møller, Jacob Aarup Dalsgaard, Arianna Pera, and Luca Maria Aiello. 2023. Is a prompt and a few samples all you need? using gpt-4 for data augmentation in low-resource classification tasks. ArXiv, abs/2304.13861. John Morreall. 2023. Philosophy of Humor. In Edward N. Zalta and Uri Nodelman, editors, The Stanford Encyclopedia of Philosophy, Summer 2023 edition. Metaphysics Research Lab, Stanford University. Ravindra Nayak and Raviraj Joshi. 2022. L3CubeHingCorpus and HingBERT: A code mixed HindiEnglish dataset and BERT language models. In Proceedings of the WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference, pages 7–12, Marseille, France. European Language Resources Association. OpenAI. 2022. Chatgpt: Optimizing language models for dialogue. OpenAI. 2023. Gpt-4 technical report. Maxime Peyrard, Beatriz Borges, Kristina Gligori´c, and Robert West. 2021. Laughing heads: Can transformers detect what makes a sentence funny? Oscar Sainz, Jon Campos, Iker García-Ferrero, Julen Etxaniz, Oier Lopez de Lacalle, and Eneko Agirre. 2023. NLP evaluation in trouble: On the need to measure LLM data contamination for each benchmark. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 10776–10787, Singapore. Association for Computational Linguistics. Veniamin Veselovsky, Manoel Horta Ribeiro, Akhil Arora, Martin Josifoski, Ashton Anderson, and Robert West. 2023. Generating faithful synthetic data with large language models: A case study in computational social science. ArXiv, abs/2305.15041. Caleb Warren, Adam Barsky, and A. Peter McGraw. 2021. What makes things funny? an integrative review of the antecedents of laughter and amusement. Personality and Social Psychology Review, 25(1):41– 65. PMID: 33342368. Robert West and Eric Horvitz. 2019. Reverseengineering satire, or “paper on computational humor accepted despite making serious advances”. In Proceedings of the 33rd AAAI Conference on Artificial Intelligence. A Appendix A.1 Data Preparation A.1.1 Unfun Corpus We use the February 2, 2023 Unfun (West and Horvitz, 2019) database backup,3 and consider all valid unfunned headlines (i.e. not None). This results in 11831 pairs. A subset of these have ratings from other players. We use these to curate a high quality evaluation subset of pairs where: • There is at least one annotation. • The satirical headline has a funniness rating ≥0.8. • The unfunned headline has a funniness rating ≤0.2. The resulting 867 pairs were split among prompt examples (10%), dev (30%), and test (60%) shards. For our training set, we consider the remaining headlines, again ensuring that there is no overlap with other shards. The resulting dataset has many instances where there are multiple unfunned counterparts for each satirical headline. As an additional step, we randomly filter our training, dev, and test shards so that there is only one unfunned headline per satirical headline. This results in a training set of 3882 unfuns, a dev set of 186 unfuns, and a test set of 375 unfuns, in each case, these are included alongside their corresponding satirical headlines. For an additional training data baseline, we also retrieve an equal number of real news headlines included in the Unfun database. A.1.2 Code-Mixed English-Hindi Humor We use the version of the English-Hindi Humor dataset by Khandelwal et al. (2018) hosted on GitHub.4 We use the provided labels for the available data. Notably, a portion of annotated samples appear to be unavailable. We divide the available dataset (n = 2951) into training, dev, and test shards (60%, 20%, 20%). Additionally, we filter tweets containing links. A.2 Data Generation Details We include our full prompts in Appendix B. For decoding hyperparameters, we use top-p = 0.85 and τ = 1.0 for all LLMs. 3https://github.com/epfl-dlab/unfun 4https://github.com/Ankh2295/ humor-detection-corpus 7 861 A.2.1 Unfun Data Generation To generate synthetic Unfun for each LLM approach, we prompt each model with 8 randomly sampled in-context pairs from examples from our high quality subset that was set aside for prompting. For our ROBERTA-SWAP baseline, we replace tokens in the original satirical headline using a ROBERTA-BASE5 model. To select each replacement, we iterate over and individually mask each token in the headline, and then predict the masked token: ˆxi = arg max x P(x | x̸=i, θRoBERTa) The position with the largest ratio between the predicted token and the original token probabilities is selected as the swap position: swap position = arg max i [P(ˆxi | x̸=i, θRoBERTa) P(xi | x̸=i, θRoBERTa)] We then replace xi with ˆxi, and repeat this procedure k times. We set k = 3 in our experiments. A.2.2 Hindi-English Data Generation Unlike for Unfun, we do not have existing pairs of (un-humorous, humorous) English Hindi tweets. To remedy this, we first generated 50 examples in a zero-shot setting on our training set, and then selected nine high quality results to serve as our prompt. We additionally prompt GPT-4 with humorous and non-humourous texts to classify the resulting unfunned tweets as humorous or nonhumorous. We filter unfunned tweets if they are still classified as humorous. A.3 Human Evaluations We recruited 10 university students as annotators for the Unfun task. All annotators were American and native English speakers. For the EnglishHindi dataset, we worked with three bilingual (Hindi and English) speakers. For both evaluations, we gathered three unique annotations per example, and assigned labels based on majority votes. Our Unfun evaluation assumes that any headline labeled as satirical or as real headline is grammatical and coherent. In contrast, we do not consider the grammatical label for English-Hindi data, due to the varied syntactic styles of tweets. In Table 2, headlines are only rated "Real" if a majority of annotators rated the headline as "Real" 5https://huggingface.co/FacebookAI/ roberta-base (not "Satire" or "Neither"). Headlines are rated "Slightly Funny" if a majority of annotators assigned the headline funniness ≥1, and "Funny" with funniness = 2. Our full instructions for both human evaluations are included in Appendix C.1. Tables 5 and 6 display inter-annotator agreement statistics. Human Label Krippendorff Real 0.507 Funny 0.333 Very Funny 0.214 Grammar 0.271 Coherence 0.214 Table 5: Krippendorff’s α results on Unfun dataset. Human Label Krippendorff Coherence 0.206 Humorous 0.377 Table 6: Krippendorff’s α results on English-Hindi dataset. A.4 Automatic Evaluations On the Unfun dataset, for each synthetic Unfun approach, we generate data using the corresponding original 3882 training examples as inputs. We then evaluate classifiers trained on each dataset on the filtered high quality holdout data. To generate humor, we provide the unfunned example as input. To edit away humor, we provide the original satirical headline. We also provide in-context pairs drawn from the high quality prompt examples (See A.1.1). For our Real News baseline, we randomly select 3882 real news headlines to serve as non-humorous examples. On the English-Hindi dataset, we compare training on the original dataset to training on data where (25%) and (50%) of non-humorous examples have been replaced by GPT-4 Filtered unfunned data. We evaluate classifiers on a holdout set from original dataset (n = 591), and also set of Unfuns (n = 97), derived from humorous examples in our holdout set and rated by our annotators as both coherent and non-humorous. All results for both datasets are computed over 5 seeds. 8 862 A.5 Humor Classifier Training For the Unfun task, we fine-tune MISTRAL (Jiang et al., 2023)6 and ROBERTA (Liu et al., 2019)7 models. For Hindi-English, we consider HINGROBERTA (Nayak and Joshi, 2022)8. All models are trained with the AdamW optimizer (Loshchilov and Hutter, 2019) and a constant learning rate. Due to the class imbalance in the available EnglishHindi dataset (39% non-humorous, 61% humorous), we weight the loss by the inverse proportion of class frequency. We fine-tune our MISTRAL classifier with 4-bit quantized LoRA (Dettmers et al., 2023) and the addition of a classification head. For all classifiers, we first perform hyperparameter tuning on the original human authored datasets. For the Unfun dataset we consider: • Learning Rates ∈{5e −5, 2.5e −5, 1.25e − 5, 6.25e −6, 3.125e −6, 1.5625e −6} • Batch Size ∈[32] (Due to resource constraints) For the English-Hindi Dataset dataset we consider: • Learning Rates ∈{5e −5, 2.5e −5, 1.25e − 5, 6.25e −6, 3.125e −6, 1.5625e −6} • Batch Size ∈{256, 128, 64, 32, 16, 8} After selecting the highest performing configuration, we run each experiment with 5 seeds ([1234, 2345, 3456, 4567, 5678]). We include the most performant hyperparameters in Table 7. All model trains use a single NVIDIA A100 GPU. We estimate the total compute budge to be 200 hours. A.6 Considering Memorization We investigate whether data contamination and memorization is affecting our results by testing how often synthetic unfuns or humor appear in the original Unfun corpus. We find that only a small fraction of outputs appear to match humanunfunned text or satire headlines. We include results in Table 8. Of these, the majority represent simple edits, indicating that the models may have rediscovered trivial unfuns. For example: “Egypt plunges into state of Middle East crisis" 6https://huggingface.co/mistralai/ Mistral-7B-v0.1 7https://huggingface.co/FacebookAI/ roberta-base 8https://huggingface.co/l3cube-pune/ hing-roberta B Prompts B.1 Unfun Task Prompts B.1.1 Humor Generation Chat Models "You are a helpful assistant that edits realistic headlines to make them humorous." {"role": "user", "content": <Unfunned Headline>}, {"role": "assistant", "content": <Satire Headline>} Completion Models "The following realistic headlines can be edited to be humorous:" "<Unfunned Headline> -> <Satire Headline>" B.1.2 Unfun Generation Chat Models "You are a helpful assistant that edits humorous headlines to make them realistic." {"role": "user", "content": <Satire Headline>}, {"role": "assistant", "content": <Unfunned Headline>}, ... Completion Models "The following humorous headlines can be edited to be realistic:" "<Satire Headline> -> <Unfunned Headline>" B.2 English-Hindi Task Prompts B.2.1 Unfun Generation Chat Models "Kya ye diye hue tweet ka humor wala part hata kar use normal bana sakti ho? Aur jitna ho sake utna punctuation use same rakhne ki koshish karna" [Can you remove the humorous part of the given tweets and make them normal? And try to keep the punctuation as much the same as possible.]. 9 863 Model Learning Rate Batch Size MISTRAL (QLoRA) 6.25e-06 32 ROBERTA 1.25e-05 32 HING-ROBERTA 1.5625e-06 8 Table 7: The training configurations for our automatic evaluations, after hyperparameter tuning. Model Unfun Satire GPT-3.5 3/200 0/200 GPT-4 7/200 0/200 MISTRAL 2/200 1/200 MISTRAL INSTRUCT 2/200 0/200 ROBERTA-SWAP 0/200 Table 8: The number of overlapping samples between human-curated headlines and synthetic headlines in our test examples (n = 200). {"role": "user", "content": <Context Funny Tweet>}, {"role": "assistant", "content": <Context Unfunned Tweet>} B.2.2 Unfun Filtering Chat Models "You are a pattern-following assistant used to rigorously determine whether a Hindi tweet is intended to be humorous. Given a Hindi tweet, respond only with either of Yes or No. Yes if it is humoruous and No if it is not humorous" {"role": "user", "content": <Context Tweet>}, {"role": "assistant", "content": <Context Yes/No Label>} C Human Evaluation Instructions C.1 Unfun Task Instructions Each annotator has been assigned a series of text samples to review. First, you are asked to evaluate whether the text sounds like a • r) real news headline (like from a nonhumorous news website) • OR s) satirical news headline (like from a humorous newspaper like TheOnion.) • OR n) neither (text that would not appear in either setting, because it is ungrammatical, or incoherent. If you rate a headline as n (neither), you will be further prompted to rate it as a grammatical [no=0,yes=1 (for a news headline) and coherent [no=0,yes=1]. If you rate a headline as s (satire), you will be prompted to subjectively rate the quality of humor: • 0 - not funny • 1 - slightly humorous / there is some identifiable joke • 2 - funny Content Warning: Several headlines may contain references to upsetting content. EXAMPLES: Satirical Headlines • nhl not quite sure why it has a preseason • america’s sweetheart dumps u.s. for some douchebag • apple: new iphone good • cat general says war on string may be unwinnable • fire chief grants fireman 3-day extension on difficult fire News Headlines • the word ’doofuses’ may cost ex-yahoo ceo bartz $10 million • 2 meteorites hit connecticut 10 864 • world outraged by north korea’s latest nuke test • poverty rate hits 17-year high • philippines: 5 foreign terror suspects in south C.2 English-Hindi Task Instructions The following task instructions specify additional information based on the original instructions provided to annotators in (Khandelwal et al., 2018). Each annotator has been assigned a series of text samples to review. First, you are asked to evaluate whether the text is h) humorous n) non-humorous Secondarily, you will be asked to rate whether a text is coherent [no=0,yes=1] A tweet should be marked as coherent, even if you don’t have all the required background knowledge, as long as you can reasonably understand its meaning. Additional info: • Any tweets stating any facts, news or reality should be classified as nonhumorous. • Tweets which consisted of any humorous anecdotes, fantasy, irony, jokes, insults should be annotated as humorous • Tweets stating any facts, dialogues or speech which did not contain amusement should be put in non-humorous class. • Tweets containing normal jokes and funny quotes should be placed in the humorous category. • Some tweets consist of poems or lines of a song but modified. If such tweets contain satire or any humoristic features, then they could be categorized as humorous otherwise not. Content Warning: Several tweets may contain references to upsetting/offensive content. EXAMPLES (We give the English Translations of each in brackets but they were not presented to the annotators): Humorous Tweets • Jhonka hawa ka aaj bhi chhup ke hilaata hoga na #Samir #HawaKaJhonka #BeingSalmanKhan [Does the breeze still sway secretly today? #Samir #HawaKaJhonka #BeingSalmanKhan) • Working on a Sunday, chand rupye kamaane ke liye insaan apni khushiyon ka bhi sauda kar leta hai. [Working on a Sunday, to earn a few rupees, a person sometimes even sacrifices their happiness.] • DJ wale babu bhosdike ab to gaana baja de iska.. bol bol ke kaan se khoon nikaal diya hai isne [DJ wale babu, play the song now.. he has made our ears bleed by talking so much.] • Is Arvind Kejriwal new Che Guavara ? RT @ashutosh83B Is Rahul Gandhi new Arvind Kejariwal ? [Is Arvind Kejriwal the new Che Guevara? RT @ashutosh83B Is Rahul Gandhi the new Arvind Kejriwal?] • Sukh bhare din beete re bhaiya, Babadook aayo re [Brother, may the days filled with joy pass by. The Babadook has arrived.] Non-Humorous Tweets • Apne support wale MLAs ko farmhouse main band kar lenge. Parade karayenge. Takhta palat karenge. Akhand chutiyap. [We will lock up our supporting MLAs in the farmhouse. Parade them. Flip the throne. Absolute nonsense.] • Hrithik Roshan is using Vodafone. [Hrithik Roshan is using Vodafone.] • PLEASE STOP MAKING JOKES ON SALMAN KHAN. BHAI BOLA NAHI CHALA RAHA THA GAADI TO NAHI CHALA RAHA THA. #BHAIROXX [Please stop making jokes on Salman Khan. Bhai was not driving the car if he said he was not driving the car. #BHAIROXX] 11 865 • Bhaari sankhya mein vote karein, aapke TL par wph hi nazar aayega [Vote in large numbers, wph will only appear in your TL.] D Reference Examples Tables 9, 10, and 11 include reference samples for English synthetic unfun outputs, English satire outputs, and English-Hindi unfun outputs respectively. 12 866 Original Satire tom petty to play some new stuff he’s been working on at super bowl jaguars offensive line not sure they can open big enough hole for maurice jones drew obama takes surprise caller during weekly radio address Human tom petty to play some new stuff he’s been working on at coachella jaguars offensive line not sure they can open stable positioning hole for maurice jones drew obama takes caller during weekly radio address GPT3.5 tom petty to perform classic hits at super bowl jaguars offensive line not sure they can create sufficient gap for maurice jones drew obama takes surprise caller during live radio interview GPT-4 tom petty to perform new material at super bowl jaguars offensive line unsure if they can open big enough hole for maurice jones drew obama takes unexpected caller during weekly radio address MISTRAL tom petty to play some new stuff he’s been working on at superbowl jaguars offensive line not sure they can open big enough hole for joe flacco obama takes surprise caller during weekly radio address MISTRAL INSTRUCT tom petty to play some new songs he’s been working on at super bowl halftime show jaguars offensive line not sure they can clear big enough opening for maurice jones drew obama takes surprise caller during weekly interview ROBERTASWAP he wants to play some new stuff he’s been working on at the bowl jaguars offensive line not sure they can dig big enough hole for maurice jakes panama takes in migrants during weekly radio address Table 9: Unfun model output reference samples. 13 867 Original Unfun obama, congress must strike a deal on budget by march 1 espn sports segment reveals science behind tony romo’s precision pass jimmie johnson disqualified after prerace inspection reveals car to be violating rules Human obama, congress must reach deal on budget by march 1, and then april 1, and then april 20, and then april 28, and then may 1 espn sports segment reveals science behind tony romo’s shittiness jimmie johnson disqualified after pre-race inspection reveals car to be airplane GPT3.5 obama, congress must strike a deal on budget by march 1 or face detention espn sports segment reveals magic behind tony romo’s precision pass jimmie johnson disqualified after pre-race inspection reveals car to be a Transformer in disguise GPT-4 obama, congress must strike a deal on budget by march 1 or risk being grounded espn sports segment reveals magic spell behind tony romo’s precision pass jimmie johnson disqualified after pre-race inspection reveals car to be a spaceship MISTRAL obama, congress must strike a deal on budget by march 1 espn sports segment reveals science behind tony romo’s precision penis jimmie johnson disqualified after pre-race inspection reveals car to be black MISTRAL INSTRUCT obama, congress must strike a deal on budget by march 1, or else there will be no more kittens espn sports segment reveals science behind kate middleton’s royal wave jimmie johnson disqualified after pre-race inspection reveals car to be violating laws and joking about it Table 10: Satire model output reference samples. Original Humorous Tweet GPT-4 English-Hindi Unfuns Ab ki baar.. #MaaBetaFarar.. Ab ki baar.. yeh log farar hain.. Husbands should be like Vim bar, gale kam aur chale zyada. Patidev ko samarpit aur lambe samay tak saath dena chahiye. O naadan parindey ghar aaja. Parinda: naadan tera baap. O naadan parindey ghar aaja. Parinda: Mujhe ghar aane do. Neend aaja nahi to kal se tujhe KRK bulaunga Neend aaja nahi to kal se tujhe alag naam se bulaunga Bhai ab itne velle bhi nahi hai ki #IndVsBan test match dekhenge Bhai ab itne samay nahi hai ki #IndVsBan test match dekhenge Asli toofan andar hai, jail ke andar. #SalmanVerdict Asli samasya jail ke andar hai. #SalmanVerdict Vodafone use karne se acha to ek kabootar pal lo. Vodafone use karne se acha to kisi aur network provider ka use karo. Table 11: GPT-4 English-Hindi unfunned reference samples. See Table 12 for English translations. 14 868 Original Humorous Tweet GPT-4 English-Hindi Unfuns This time.. #MotherSonGone.. This time.. these people are gone.. Husbands should be like Vim bar, less talk and more work. Husbands should be dedicated and support for a long time. Oh naive bird, come home. Bird: Your dad is naive. Oh naive bird, come home. Bird: Let me come home. If sleep doesn’t come, from tomorrow I will call you KRK. If sleep doesn’t come, from tomorrow I will call you by a different name. Bro, we’re not that free to watch the #IndVsBan test match. Bro, we don’t have that much time to watch the #IndVsBan test match. The real storm is inside, inside the jail. #SalmanVerdict The real problem is inside the jail. #SalmanVerdict It’s better to raise a pigeon than to use Vodafone. It’s better to use another network provider than Vodafone. Table 12: Translation of GPT-4 English-Hindi unfunned reference samples. 15 869
2024_short
76
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 870–879 August 11-16, 2024 ©2024 Association for Computational Linguistics Don’t Buy it! Reassessing the Ad Understanding Abilities of Contrastive Multimodal Models Anna Bavaresco, Alberto Testoni, Raquel Fernández Institute for Logic, Language and Computation University of Amsterdam {a.bavaresco, a.testoni, raquel.fernandez}@uva.nl Abstract Image-based advertisements are complex multimodal stimuli that often contain unusual visual elements and figurative language. Previous research on automatic ad understanding has reported impressive zero-shot accuracy of contrastive vision-and-language models (VLMs) on an ad-explanation retrieval task. Here, we examine the original task setup and show that contrastive VLMs can solve it by exploiting grounding heuristics. To control for this confound, we introduce TRADE, a new evaluation test set with adversarial grounded explanations. While these explanations look implausible to humans, we show that they “fool” four different contrastive VLMs. Our findings highlight the need for an improved operationalisation of automatic ad understanding that truly evaluates VLMs’ multimodal reasoning abilities. We make our code and TRADE available at https://github.com/dmg-illc/trade. 1 Introduction Image-based advertisement is not only a crucial component of marketing campaigns, but also an interesting example of sophisticated multimodal communication. Ads often feature unusual visual elements (e.g., objects that are non-photorealistic, outside of their usual context, atypical, etc.) or examples of figurative language (e.g., metaphors, allegories, play on words, etc.) designed to make a long-lasting impression on the viewer. Figure 1 provides an example of an ad with a non-photorealistic object (a whale made of wires) that, as the text suggests, is used to convey a complex metaphorical message about the product (i.e., a wireless device). These elaborate uses of images and text make automatic ad understanding a challenging task requiring multiple non-trivial abilities, e.g., object detection, scene-text extraction, figurative language understanding, and complex image-text integration. Ad understanding was first proposed as a deeplearning task by Hussain et al. (2017), who introduced the Pitt Ads dataset, consisting of imagebased ads along with explanations capturing their underlying message (e.g., I should purchase this stereo system because wireless is less messy). This dataset was then used in a retrieval-based challenge requiring to identify a plausible explanation for an ad within a set of possible candidates.1 Early work on this task has employed ensemble predictors (Hussain et al., 2017; Ye and Kovashka, 2018) and graph neural networks (Dey et al., 2021) that were designed and trained ad hoc. More recently, the development of large vision-andlanguage models (VLMs) pretrained with imagetext matching (ITM) objectives has opened the possibility of performing the task in zero-shot, i.e. by using an off-the-shelf model instead of training one from scratch. Following this approach, Jia et al. (2023) tested multiple VLMs (ALBEF, Li et al. 2021; CLIP, Radford et al. 2021; and LiT, Zhai et al. 2022) on the task by computing imagetext alignment scores between ads and their possible explanations. They observed an excellent zeroshot performance for all models, documenting an accuracy of 95.2% for CLIP. While the results reported by Jia et al. (2023) seem to suggest that the tested models developed the reasoning abilities necessary to succeed at ad understanding, we note that this conclusion is in contrast with a great deal of existing work. Extensive research investigating whether VLMs develop reasoning skills as a result of their contrastive ITM pretraining has exposed several weaknesses of these models. They have been shown to be limited in their abilities to identify noun mismatches in image captions (Shekhar et al., 2017), reason compositionally (Thrush et al., 2022), capture spatial relations (Liu et al., 2023), understand verbs (in1https://eval.ai/web/challenges/ challenge-page/86/overview 870 BRAND NAME BRAND NAME Wires are under extinction. DVD theater [brand name]. Now wireless. I should get a [wbn] bike because they have been around for a while 1. I should buy a [brand name] because I will not need the wires 2. I should use wireless instead of wires because it will help reduce waste in the world 3. I should not eat meat because it supports the killing of animals 4. I should not wear fur because it kills animals 5. I should buy ice cream because it's on sale for the company being in business for 16 years 6. I should fund [wbn] because we should take back control 7. I should purchase this stereo system because wireless is less messy 8. I should use wires because they are like whales risking extinction 1. I should use wireless instead of wires because it will help reduce waste in the world 2. I should use caution when throwing away cables because whales risk extinction 3. Original task setup Our task setup in TRADE Text on the ad: Figure 1: An example of the ad explanation retrieval task with the original setup vs. our new setup. The matching explanations are marked in italics. In the original setup, negatives are randomly sampled (5 out of 12 are shown for conciseness); in our setup, negatives are carefully curated to be textually and visually grounded in the ad but, at the same time, clearly incompatible with it. Brand names and logos are edited out in the examples present in this paper for presentation purposes, but are in fact visible in both task setups ([wbn] stands for “wrong brand name”). stead of just nouns) (Hendricks and Nematzadeh, 2021), and handle various linguistic phenomena (Parcalabescu et al., 2022) and basic constructions (Chen et al., 2023). Importantly, this line of work focused on a set of traditional visuo-linguistic tasks but not specifically on ad understanding. Here, we ask whether the performance previously documented on the Pitt Ads dataset reflects genuine understanding abilities or is driven by simpler heuristics. We conduct a thorough analysis of the evaluation setup originally proposed to test ad understanding and reveal that it has key flaws, which allow models to exploit grounding heuristics. We introduce a new test set, TRADE (TRuly ADversarial ad understanding Evaluation), which controls for the identified issues. Our experiments show that several contrastive models tested zero-shot, including CLIP, perform at chance level on TRADE, while humans excel at the task. More generally, our findings highlight the need to better operationalise ad understanding in order to obtain reliable assessments of VLMs’ multimodal reasoning abilities. 2 A Closer Look at the Evaluation Setup The Pitt Ads dataset2 by Hussain et al. (2017) consists of 64832 ads, each annotated with 3 explanations in English written by 3 different expert annotators. These explanations (in the form I should ⟨action⟩because ⟨reason⟩) aim at capturing the persuasive message behind the ads. While explanations may be subjective, the intuition behind the image-to-text retrieval task proposed along with 2https://people.cs.pitt.edu/~kovashka/ads/ the dataset is that a model which can understand ads should be able to match them with a plausible explanation. Specifically, each ad is paired with 15 messages, 3 positives corresponding to the annotations for that ad and 12 negatives randomly sampled from annotations for different ads. Figure 1 provides an overview of the task setup. Previous work has hinted at possible limitations of the evaluation setup. Kalra et al. (2020) observed a significant overlap between the text present in the ad and the matching explanations and noticed this was “a major discriminating factor” that their finetuned BERT model could exploit. Similarly, Jia et al. (2023) pointed out that the candidate set lacks “hard negatives” and proposed to increase the set size, but could not provide a solution ensuring the negatives were actually hard. We conduct a quantitative analysis on the original evaluation setup to uncover potential shortcuts that VLMs may be exploiting to solve the task. We hypothesise that the models may take advantage of two factors that do not necessarily reflect ad understanding: simple relationships between (1) the candidate explanations and the text present in the ad (i.e., the degree of textual grounding of the explanations) and (2) the entities mentioned in the explanations and those depicted in the image (their degree of visual grounding). To test our hypotheses, we define several visual- and textualgrounding scores and check whether they correlate with the CLIP-based alignment score used by Jia et al. (2023) to retrieve the ad explanations.3 3More details on the scores can be found in Appendix A. 871 Textual-grounding scores are computed between candidate explanations and the text extracted from the ad with Optical Character Recognition (OCR). We calculate (1) text overlap as the proportion of content-word lemmas from the explanation that are also present in the OCR-extracted text, and (2) text similarity as the cosine similarity between a sentence-level embedding of the explanation and that of the OCR-extracted text, derived with MPNet (Song et al., 2020). Visual-grounding scores include (1) object mention as the proportion of nouns in the candidate explanation that are present in a set of objects we automatically extracted from the image by a ResNet50 model (He et al., 2016), and (2) caption similarity as the cosine similarity between the sentencelevel embedding of the candidate explanation and the embedding of the ad caption we obtained with BLIP-2 (Li et al., 2023). Our motivation for examining both detected objects and generated captions is driven by the observation that they capture complementary information. More specifically, while detected objects are not mediated by language models, they may often incorporate non-salient objects that people would unlikely mention when describing a picture or contain lexical choices that differ from the human ones. On the other hand, generated captions refer to objects in a more human-like way but, at the same time, may contain hallucinations due to linguistic priors. We compute the grounding scores and CLIP’s alignment score for the test split of the Pitt Ads dataset, consisting of 12805 samples. As hypothesised, we observe a positive correlation between all our grounding scores and CLIP’s alignment score. All the Spearman’s correlation coefficients are significant (p ≪0.001) and range from 0.14 and 0.61 (see Appendix A for details). In addition, as shown in Table 1 (left), we find that in the original setup the matching explanations are significantly more grounded than the non-matching explanations for each ad. While the elements (OCR text, objects, captions) detected by other models are not necessarily the same as those identified by CLIP, these results suggest that reasonably similar information is indirectly extracted by CLIP and exploited to solve the ad-understanding task. This finding also agrees with results from previous work showing that CLIP develops OCR capabilities and can successfully classify objects (Radford et al., 2021). Overall, these results indicate that the original original setup Pos Neg text overlap 0.21 0.03 * text similarity 0.40 0.12 * object mention 0.03 0.01 * caption similarity 0.32 0.11 * TRADE Pos Neg 0.27 0.31 * 0.44 0.42 0.02 0.04 0.34 0.35 Table 1: Average textual- and visual-grounding scores of the matching (Pos) and non-matching (Neg) explanations in the original evaluation setup and in TRADE; statistically significant differences between Pos and Neg marked with * (p ≪0.001, two-sample t-test). evaluation setup is flawed and that the outstanding zero-shot performance obtained by VLMs on the retrieval task may be due to simple image-text alignment. 3 TRADE: A New Adversarial Test Set To test the extent to which VLMs capture elaborate visuo-linguistic relationships present in imagebased ads beyond image-text alignment, we develop TRADE (TRuly ADversarial ad understanding Evaluation), a new diagnostic test set with adversarial negative explanations. TRADE consists of 300 randomly selected ads from the Pitt Ads dataset, each associated with 3 options (1 positive and 2 negatives). Concretely, for each of these ads, we randomly select one valid explanation from the available annotations and create two adversarial negative explanations—see Figures 1 and 2 for examples (more examples in Appendix C). The adversarial explanations were created by 4 expert annotators who were instructed to do their best to come up with non-plausible explanations that nevertheless mention objects and fragments of text present in the image. Annotators were also asked to approximately match the length of the positive explanation when writing these adversarial sentences. Appendix B contains more details about the creation of the adversarial negatives, including the guidelines provided to the annotators. We validate TRADE in two ways. First, we compute the textual- and visual-grounding scores introduced in Section 2. This shows that in TRADE the gap between positive and negative explanations is radically reduced compared to the original setup, as can be seen in Table 1 (right). Second, we confirm that humans are not affected by the high level of grounding of both positive and negative examples and are able to identify the plausible explanation in 872 the TRADE samples with an accuracy of 94%.4 To allow for a direct comparison with an evaluation setup with random negatives, akin to the original task setup, we also create TRADE-control: a version of TRADE where the two negative explanations per ad are randomly sampled from the explanations for other ads. TRADE-control includes 10 versions created with different random samplings. TRADE and TRADE-control are publicly available at https://github.com/dmg-illc/trade under a Creative Commons Attribution 4.0 International (CC-BY) license. 4 Experiments We use TRADE to test four contrastive pretrained VLMs zero-shot. Three of these models (CLIP, Radford et al. 2021; ALBEF, Li et al. 2021; and LiT, Zhai et al. 2022) have been shown to achieve high zero-shot performance on the original task setup (Jia et al., 2023). Here we challenge them with TRADE and consider an additional model (ALIGN, Jia et al. 2021). 4.1 Models and Setup Except for ALBEF, all the models we test encode visual and textual inputs separately and are pretrained with an image-text matching objective. ALBEF has an additional multimodal module, but here we only use its unimodal encoders, which are also pretrained contrastively. A more detailed overview of these VLMs is reported in Appendix E. All four models allow for the computation of an image-text alignment score, here defined as the dot product between the normalized image embedding and the text embedding of each candidate explanation. As in previous work (Jia et al., 2023), we evaluate the models by computing alignment scores for every ad-explanation pair and consider the explanation yielding the highest alignment score as the model’s retrieved option. We report average accuracy, as (mean) rank is not very informative with only 3 candidates. 4.2 Results Table 2 shows the performance of the models on TRADE and TRADE-control. All models achieve an accuracy higher than 80% in the control condition, with CLIP reaching 98%. However, the performance of all models in the adversarial setting— 4Each of the 300 samples was annotated by two annotators external to the project; more details available in Appendix D. Model TRADE control CLIP (ViT-L/14@336px) 0.34 0.98 (0.01) ALIGN (base) 0.28 0.97 (0.01) LiT (L16L) 0.31 0.82 (0.02) ALBEF (ft. on Flickr30k) 0.33 0.88 (0.01) Table 2: Average accuracy on TRADE vs. TRADEcontrol. The TRADE-control values are averages over 10 random samples, with standard deviation in brackets. where humans achieve 94% accuracy, cf. Section 3—nears chance level, i.e., 33%. Figure 2 provides an example of model- and human-chosen ad explanations on a TRADE instance. These results provide compelling evidence that the evaluated VLMs rely on visual and textual grounding when retrieving ad explanations. As a result, they can achieve excellent accuracy in an evaluation setting where negatives are poorly grounded, but are easily “fooled” by grounded adversarial distractors that are extremely easy for humans to discard. To get more insight into the models’ performance, we examine their predictions and observe that, while all models perform equally poorly on TRADE, there are 23 samples (8% of the dataset) for which the four models succeed at identifying the target explanation. An analysis of the explanations correctly retrieved by all models reveals that most of them exhibit grounding scores that are higher than the average scores for matching explanations. Figure 3 visualises this finding. 5 Conclusions Our work exposes key limitations of the evaluation setup that was previously used to benchmark VLM’s ad understanding abilities. We introduce a new adversarial test set (TRADE) that controls for the identified issues and show that, while humans excel, contrastive VLMs perform at chance level on TRADE. This result has the following implications. First, it shows that, when processing imagebased ads, contrastive VLMs are strongly biased towards textually and visually grounded explanations, regardless of their plausibility. This is in agreement with previous work (Hendricks and Nematzadeh, 2021; Parcalabescu et al., 2022; Thrush et al., 2022; Liu et al., 2023; Chen et al., 2023) and points to the need to use caution when interpreting models’ zero-shot accuracy on “naturalistic” (i.e., non-adversarial) setups as proof that they develop sophisticated reasoning abilities via pretraining. 873 Ad TRADE explanations Chosen by 1. I should go to [brand name] not only does their food taste great but it also looks good. Human 2. I should go to [brand name] because my eyelashes need a new look. CLIP, ALIGN 3. I should go to [brand name] because tasty burgers must look like these eyelashes. ALBEF, LiT Figure 2: Ad explanations selected by human annotators vs. our tested models for one instance from TRADE. Italic indicates the matching explanation. Brands and logos are edited out in the paper examples for presentation purposes but are visible to models and human annotators. Figure 3: Boxplots summarizing the distribution of grounding scores computed for positive explanations in TRADE. The blue dots indicate the scores for the positive explanations correctly selected by all VLMs. The object mention score is not included because its median coincides with the quartiles. Second, our work highlights issues with the current retrieval-based operationalization of ad understanding as a task to evaluate VLM’s multimodal reasoning abilities. We emphasise that TRADE’s aim is to control for a confound—the grounding gap between positives and negatives—that we identified as crucial when testing a specific type of VLMs, i.e., those pretrained with an ITM objective. However, defining which abilities are necessary to conclude that a model developed a good “understanding” of image-based ads and designing a task that truly evaluates them remain open issues for future research. Limitations The current study and previous work have operationalised ad understanding as an ad-explanation retrieval task. In particular, we have focused on testing contrastive pretrained VLMs zero-shot on this task. Consequently, the question of whether VLMs trained or finetuned on the Pitt Ads dataset would be more robust against our adversarial explanations remains open and could be investigated in the future. Nevertheless, we emphasize that the retrieval-based setup has limitations (e.g., the impossibility of providing task-specific instructions to the models) and may not be the most appropriate to evaluate VLM’s ad understanding skills and their multimodal reasoning abilities more generally. An interesting direction for future research could be to formulate the task differently, e.g., as a generative task. This would solve some issues of the retrievalbased setup, but also posit novel challenges, such as identifying the most effective prompt and defining meaningful protocols to evaluate the generated explanations. On a methodological note, we highlight that visual and textual alignment are complex constructs that encompass different aspects and can be analysed at different levels of granularity. Therefore, we do not intend our grounding scores as precise and comprehensive metrics, but simply as indicators that can reflect general trends. Ethical Considerations TRADE does not introduce new ad-images, but simply links to the existing Pitt Ads dataset along with the set of adversarial explanations we have created. However, it is worth emphasizing that the ads present in Pitt Ads were originally collected by querying Google Images. This posits two ethical concerns. First, offensive/harmful content or stereotypes may be present in the images, as already pointed out by Jia et al. (2023). To minimise this potential problem when developing TRADE, we made sure the annotators who created our adversarial explanations had the possibility of flagging ads that they deemed inappropriate (they did so a couple 874 of times). However, we cannot fully guarantee that the ad images used in TRADE are completely free from harmful content. As for the adversarial distractors created for TRADE, we have not systematically examined all of them manually to make sure they do not contain harmful content, but we believe this is very unlikely given the guidelines and the fact that they were created in a very controlled setting partially by us and partially by close colleagues. The second concern is about the license of the images. The Pitt Ads dataset was released without a license and the curators do not clarify whether the images are copyrighted or not. Finally, we note that our study does not take into account the personal and cultural factors which may play a substantial role in people’s perception of ads or in the values they associate with certain products. Although TRADE includes only one matching explanation for each ad, we emphasize that we do not intend this as a “ground truth”. We hope that future research on automatic ad understanding will adopt evaluation protocols that reflect a diverse set of possible interpretations. Acknowledgements We warmly thank the current members and alumni of the Dialogue Modelling Group (DMG) from the University of Amsterdam for the support they provided at different stages of this project. Special thanks are due to Sandro Pezzelle, who suggested the name ‘TRADE’ for our introduced dataset. Our heartfelt gratitude also goes to the colleagues who assisted us with the creation of TRADE negatives and to the colleagues, friends and partners who kindly volunteered to participate in our experiment to assess human performance on TRADE. The present work was funded by the European Research Council under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 819455). References Xinyi Chen, Raquel Fernández, and Sandro Pezzelle. 2023. The BLA benchmark: Investigating basic language abilities of pre-trained multimodal models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 5817–5830, Singapore. Association for Computational Linguistics. Arka Ujjal Dey, Suman K Ghosh, Ernest Valveny, and Gaurav Harit. 2021. Beyond visual semantics: Exploring the role of scene text in image understanding. Pattern Recognition Letters, 149:164–171. Gabriel Goh, Nick Cammarata, Chelsea Voss, Shan Carter, Michael Petrov, Ludwig Schubert, Alec Radford, and Chris Olah. 2021. Multimodal neurons in artificial neural networks. Distill, 6(3):e30. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770–778. Lisa Anne Hendricks and Aida Nematzadeh. 2021. Probing image-language transformers for verb understanding. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 3635–3644, Online. Association for Computational Linguistics. Zaeem Hussain, Mingda Zhang, Xiaozhong Zhang, Keren Ye, Christopher Thomas, Zuha Agha, Nathan Ong, and Adriana Kovashka. 2017. Automatic understanding of image and video advertisements. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1705– 1715. Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. 2021. Scaling up visual and vision-language representation learning with noisy text supervision. In Proceedings of the 38th International Conference on Machine Learning, volume 139, pages 4904–4916. Zhiwei Jia, Pradyumna Narayana, Arjun Akula, Garima Pruthi, Hao Su, Sugato Basu, and Varun Jampani. 2023. KAFA: Rethinking image ad understanding with knowledge-augmented feature adaptation of vision-language models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track), pages 772–785, Toronto, Canada. Association for Computational Linguistics. Kanika Kalra, Bhargav Kurma, Silpa Vadakkeeveetil Sreelatha, Manasi Patwardhan, and Shirish Karande. 2020. Understanding advertisements with BERT. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7542–7547, Online. Association for Computational Linguistics. Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. 2023. BLIP-2: bootstrapping language-image pretraining with frozen image encoders and large language models. In Proceedings of the 40th International Conference on Machine Learning (ICML). Junnan Li, Ramprasaath Selvaraju, Akhilesh Gotmare, Shafiq Joty, Caiming Xiong, and Steven Chu Hong Hoi. 2021. Align before fuse: Vision and language representation learning with momentum distillation. 875 In Advances in Neural Information Processing Systems, volume 34, pages 9694–9705. Curran Associates, Inc. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In Proceedings of the 13th European Conference on Computer Vision (ECCV), pages 740–755. Springer. Fangyu Liu, Guy Emerson, and Nigel Collier. 2023. Visual spatial reasoning. Transactions of the Association for Computational Linguistics, 11:635–651. Letitia Parcalabescu, Michele Cafagna, Lilitta Muradjan, Anette Frank, Iacer Calixto, and Albert Gatt. 2022. VALSE: A task-independent benchmark for vision and language models centered on linguistic phenomena. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8253–8280, Dublin, Ireland. Association for Computational Linguistics. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. 2021. Learning transferable visual models from natural language supervision. In Proceedings of the 38th International Conference on Machine Learning, volume 139, pages 8748–8763. Andrey Savchenko, Anton Alekseev, Sejeong Kwon, Elena Tutubalina, Evgeny Myasnikov, and Sergey Nikolenko. 2020. Ad lingua: Text classification improves symbolism prediction in image advertisements. In Proceedings of the 28th International Conference on Computational Linguistics, pages 1886– 1892, Barcelona, Spain (Online). International Committee on Computational Linguistics. Ravi Shekhar, Sandro Pezzelle, Yauhen Klimovich, Aurélie Herbelot, Moin Nabi, Enver Sangineto, and Raffaella Bernardi. 2017. FOIL it! find one mismatch between image and language caption. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 255–265, Vancouver, Canada. Association for Computational Linguistics. Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie-Yan Liu. 2020. Mpnet: masked and permuted pre-training for language understanding. In Proceedings of the 34th International Conference on Neural Information Processing Systems (NeurIPS), Red Hook, NY, USA. Curran Associates Inc. Tristan Thrush, Ryan Jiang, Max Bartolo, Amanpreet Singh, Adina Williams, Douwe Kiela, and Candace Ross. 2022. Winoground: Probing vision and language models for visio-linguistic compositionality. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 5238–5248. Keren Ye and Adriana Kovashka. 2018. Advise: Symbolism and external knowledge for decoding advertisements. In Proceedings of the European Conference on Computer Vision (ECCV), pages 837–855. Xiaohua Zhai, Xiao Wang, Basil Mustafa, Andreas Steiner, Daniel Keysers, Alexander Kolesnikov, and Lucas Beyer. 2022. Lit: Zero-shot transfer with locked-image text tuning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 18123–18133. Appendix A Grounding Scores Textual Grounding The textual grounding scores were computed between candidate explanations and the ad OCR-extracted text output by Google Vision API5 and made publicly available6 by the authors of Savchenko et al. (2020). OCR text was present for 12304 ad-images of the test set and 294 images from TRADE. The text overlap score was computed as the proportion of words from the candidate explanation that were also present in the OCR-extracted text. Before computing the overlap, we lemmatized the text and removed stop-words. These preprocessing steps were performed with the NLTK7 package. The text similarity score was defined as the cosine similarity between the embedding of the explanation and that of the OCR-extracted text. The embeddings were obtained using the Sentence Transformers8 framework. Specifically, we used an MPNet (Song et al., 2020) pretrained model, which was indicated as the best-performing one. Visual Grounding To compute the visual grounding scores, we considered two sources of visual information: the objects identified by an object detector, and the ad-image captions. Our object detector was a ResNet50 model (He et al., 2016) pretrained on MS COCO (Lin et al., 2014). We used the implementation from the Detectron29 framework by Facebook. The model detected an average of 3.74 ± 3.85 objects from the Pitt Ads dataset test split and 3.51 ± 3.38 from TRADE. At least one object was detected on 11351 images from the Pitt Ads dataset test split and on all the ads 5https://cloud.google.com/vision/docs/ocr 6https://figshare.com/articles/dataset/OCR_ results/6682709 7https://www.nltk.org/ 8https://github.com/UKPLab/ sentence-transformers?tab=readme-ov-file 9https://github.com/facebookresearch/ detectron2 876 from TRADE (300). Ad captions were obtained using BLIP2 (Li et al., 2023) with OPT 2.7B as language decoder. BLIP2 was used in its Hugging Face implementation.10 The object mention score was computed as the lemmatized nouns in the AR statement that were part of the set of detected objects. The caption similarity score, on the other hand, was defined similarly to the text similarity score, with the caption in place of the OCR-extracted text. Correlation with CLIP’s alignment scores We computed Spearman correlations between all the grounding scores and the CLIP-alignment scores for both the Pitt Ads test set and TRADE. All results are summarized in Tables 3 and 4. B Creating TRADE The adversarial negatives were designed by two of the authors and two internal collaborators who volunteered for the task and are all proficient in English. Due to the complexity of this annotation task, we deemed it not suitable for crowdsourcing. The instructions given to the annotators were the following: 1. The sentence should be inconsistent with the image, meaning that it should not be a valid answer to the question “What should you do, according to this ad?”. Keep in mind that the answer should be patently wrong, i.e. it should require very little thinking to figure out it does not match the message of the ad. 2. The sentence should be in the form “I should [action] because [reason]” 3. The verb you use after “should” should be the same as the one from the right sentence. For example, if the right sentence starts with “I should buy”, your wrong annotation cannot start with “I should fly” 4. The sentence should be as grounded as possible, meaning that you should avoid mentioning objects/words that are not present in the ad as much as you can. Please keep this in mind, it is very important! 5. If possible, privilege salient visual elements over nonsalient ones. More concretely, try to mention large writings instead of small ones, and big foreground objects instead of small background ones. 6. When describing visual objects, try to be efficient instead of verbose. For example, if an ad depicts a famous man (say, Mr. X) driving a car of a specific brand (say, Brand Y), you should write something like “I should buy Mr. X because he drives a cool Brand Y car” instead of “I should buy a man with short hair and sunglasses because he drives a red four-wheeled vehicle” 7. Please avoid extra-long sentences. Your wrong answers should be approximately the same length as the correct ones. You don’t need to be as strict as to count the exact 10https://huggingface.co/docs/transformers/ model_doc/blip-2 number of words but try to avoid large mismatches (e.g. correct answer being not even one-line long and wrong answer being two lines) 8. Only include the name of brands/celebrities if they are also mentioned in the provided annotation 9. The sentence (e.g., “I should buy this perfume because roses are red and violets are blue”) but it should not be ungrammatical (do not write something like “I should hello world because rainbow”) Rule 8 was introduced as there is evidence (Goh et al., 2021) that CLIP is sensitive to proper nouns. Therefore, we wanted to avoid our negatives being preferred by the model simply because they contained more detailed information. Our annotation interface allowed annotators to flag ads in case of: 1. Presence of inappropriate/offensive/harmful content. 2. Low readability of the text. 3. Low image resolution. 4. Being unable to understand the ad (e.g., because the text was not in English). 5. Being unable to create a distractor meeting all the requirements. C Dataset Examples Some additional examples of the adversarial explanations we collected are shown in Figure 4 along with their TRADE-control counterparts. D Human Accuracy on TRADE To quantify the human accuracy on TRADE, we used the crowdsourcing platform Appen to present participants with the ad along with the question “What should you do according to this ad, and why?” and 3 options, i.e., a matching explanation and two adversarial grounded negatives. After some unsatisfactory pilot experiments where crowdworkers were not able to pass very simple test questions, we established that the task was not suitable for crowdsourcing. Therefore, we recruited 17 participants who volunteered for the task of judging the 300 samples in TRADE. They were not involved in the creation of the adversarial explanations and were informed that their anonymised data would be included in a study about automatic ad understanding. We ensured all annotators were proficient in English. Each question was answered by 2 different participants. They annotated an average of 35 ads each (std = 14, max = 50, min = 10). The mean accuracy calculated over the 600 collected 877 Pos Neg CLIP-pos CLIP-neg Corr text overlap 0.21 0.03 23.78 12.66 0.28 text similarity 0.4 0.12 23.78 12.66 0.61 object mention 0.03 0.01 23.72 12.74 0.14 caption similarity 0.32 0.11 23.72 12.68 0.53 Table 3: Grounding scores and CLIP-alignment scores for matching (positives) and non-matching (negatives) explanations from the original test set. Two-sample t-tests indicate that all differences between positives and negatives are statistically significant (p ≪0.001). The right-most column reports the Spearman correlations between aggregated (including both positives and negatives) grounding scores and the corresponding CLIP-alignment scores. All the correlation values are statistically significant (p ≪0.001). Pos Neg CLIP-pos CLIP-neg Corr text overlap 0.27 0.31 24.87 24.42 0.22 (p = 0) text similarity 0.44 0.42 24.87 24.42 0.41 (p = 0) object mention 0.02 0.04 24.84 24.39 0.04 (p = 0.22) caption similarity 0.34 0.35 24.84 24.39 0.3 (p = 0) Table 4: Grounding scores and CLIP-alignment scores for matching (positives) and non-matching (negatives) explanations from TRADE. With the exception of text overlap, the differences between grounding scores are not statistically significant (p ≪0.001). All the differences between positive and negative CLIP-alignment scores are also non-significant. BRAND NAME BRAND NAME BRAND NAME BRAND NAME TRADE distractors: I should stop smoking because I want a quick help I should stop smoking because bullets are slow TRADE distractors: I should go to [brand name] because my eyelashes need a new look I should go to [brand name] because tasty burgers must look like these eyelashes Text: Quick. Slow. Want help? Phone the smokeline on 0800 84 84 84. You can do it. We can help. Explanation: I should stop smoking because it is slowly killing me TRADE-control distractors: I should wear [clothing brand] because it is natural. I should buy [makeup brand] makeup because it has bold lipstick colours Text: [brand name] Up your game. Lane Carico [brand name] elite athlete. Explanation: I should buy these shoes because they will help you perform sports really well TRADE distractors: I should buy these shoes because they will make me hug people I should buy these shoes because I like to play your game TRADE-control distractors: I should get an [car brand] because it is stylish. I should consider [place name] for snack time because I can enjoy this with my boyfriend. Text: Get a tasty look. By [brand name] Explanation: I should go to [brand name] not only does their food taste great but it also looks good TRADE-control distractors: I should head this Heart Research Centre message, because it alerts me that my body and its organs are the product of many environments and many lives I should fund [healthcare system] because we should take back control Figure 4: Examples from TRADE and TRADE-control, along with our transcription of the text (just for readability, not part of the dataset). Brands and logos are edited out in the paper examples for presentation purposes but are visible in TRADE. 878 judgements was 94%. The cases where both participants selected the target explanation were 270 (90%). E Tested Models Here we provide an overview of the models used in our experiments. CLIP (Radford et al., 2021) is a contrastive model where image and text are separately encoded by two transformer-based models and then projected to the same vector space. CLIP is trained with a contrastive loss that minimizes the cosine distance between matching pairs of image and text embeddings. We used it in the Hugging Face implementation.11 ALIGN (Jia et al., 2021) is also a contrastive vision-and-language model trained with the same loss function used for CLIP. It mainly differs from the latter in its encoders (EfficientNet for images and BERT for text) and in that also leverages noisy data during the training process. We used the Hugging Face model implementation.12 LiT (Zhai et al., 2022) is a contrastive model where the image encoder is “locked” (i.e. frozen) during pre-training, whereas the language encoder is initialized with random weights and trained from scratch with a contrastive loss. We used the Vision Transformer implementation13 by Google Research. ALBEF (Li et al., 2021) is a vision-and-language model consisting of two separate transformer-based encoders from image and text and a multimodal encoder. The uni-modal modules are pre-trained contrastively and their outputs are then fused in the multimodal module, which is pre-trained with masked-language-modeling and image-textmatching objectives. We used the LAVIS implementation by Salesforce.14 11https://huggingface.co/docs/transformers/ model_doc/clip 12https://huggingface.co/docs/transformers/ model_doc/align 13https://github.com/google-research/vision_ transformer 14https://github.com/salesforce/LAVIS 879
2024_short
77
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 85–91 August 11-16, 2024 ©2024 Association for Computational Linguistics Language Models Do Hard Arithmetic Tasks Easily and Hardly Do Easy Arithmetic Tasks Andrew Gambardella* Yusuke Iwasawa Yutaka Matsuo University of Tokyo Abstract The ability (and inability) of large language models (LLMs) to perform arithmetic tasks has been the subject of much theoretical and practical debate. We show that LLMs are frequently able to correctly and confidently predict the first digit of n-digit by m-digit multiplication tasks without using chain of thought reasoning, despite these tasks require compounding operations to solve. Simultaneously, LLMs in practice often fail to correctly or confidently predict the last digit of an n-digit by m-digit multiplication, a task equivalent to 1-digit by 1digit multiplication which can be easily learned or memorized. We show that the latter task can be solved more robustly when the LLM is conditioned on all of the correct higher-order digits, which on average increases the confidence of the correct last digit on 5-digit by 5-digit multiplication tasks using Llama 2-13B by over 230% (0.13→0.43) and Mistral-7B by 150% (0.22→0.55). 1 Introduction The development of large language models (LLMs) (Brown et al., 2020) has given new life to the deep learning revolution, and seen mass adoption within not just the scientific community, but also society at large. These LLMs, being the first known “general” machine learning model developed by humanity (Morris et al., 2024), have been applied to various tasks dealing with natural language such as those commonly encountered in school curricula (Hendrycks et al., 2021), and even branching off into tasks such as text-to-image generation (Saharia et al., 2022) and hierarchical planning (Wang et al., 2023). Despite the generality and far-reaching consequences of LLMs, there are still many significant limitations making difficult the direct application of LLMs to certain tasks. One such limitation is *Correspondence: [email protected] the poor performance of LLMs on arithmetic tasks, such as elementary addition, subtraction, multiplication, and division (Nogueira et al., 2021). Not only do modern LLMs perform poorly on these tasks, but some tasks such as n-digit by m-digit multiplication and division, which require compounding operations to solve, appear to be unlearnable by pure autoregressive transformer architectures unless they decompose the problem into multiple steps, such as with chain of thought reasoning (Wies et al., 2022; Liu et al., 2023). As such, several solutions have been proposed, such as fine-tuning so that chain of thought reasoning is automatically used for problems which require compounding operations (Liu et al., 2023; Kojima et al., 2022) or fine-tuning to call outside tools, such as a calculator (Schick et al., 2024). While we most likely cannot expect simply training models with more parameters to allow for the solving of tasks which require compounding operations without chain of thought, we believe that analyzing the limitations and abilities of autoregressive LLMs when attempting to solve these tasks directly may shed light on unknown properties of LLMs. We therefore use Monte Carlo Dropout (MC Dropout) (Gal and Ghahramani, 2016) to analyze the performance of LLMs which were trained with dropout and which have open weights available, such as Llama 2 (Touvron et al., 2023) and Mistral (Jiang et al., 2023), in carrying out arithmetic tasks. MC Dropout allows one to interpret neural networks which were trained with dropout as Bayesian neural networks, as neural networks trained with dropout have been shown to be equivalent to a Bayesian approximation to a Gaussian process. This allows one to obtain empirical Bayesian confidence distributions over neural network weights or outputs by doing multiple forward passes through the neural network with dropout on, during test time (Gal and Ghahramani, 2016). MC Dropout 85 is one of many ensemble-based methods for uncertainty quantification (Ovadia et al., 2019; Ashukha et al., 2020), and has been applied to analyze the confidence of transformer architectures (Shelmanov et al., 2021) and to implement tree-based LLM prompting (Mo and Xin, 2023). Our results when applying MC Dropout to Llama 2 and Mistral in arithmetic tasks were surprising. We found that all models could confidently and correctly predict the first digit result of n-digit by m-digit multiplication problems, despite it most likely being impossible for any autoregressive LLM to have learned a general algorithm for doing so without decomposing the problem into multiple steps, as finding this digit in general requires solving the entire multiplication problem1. We also found that all models struggled to correctly output the last digit of n-digit by m-digit multiplication problems, despite it being very easy to learn an algorithm for doing so, as calculating the last digit is equivalent to 1-digit by 1-digit multiplication. Finally, we show that the confidence of LLMs in predicting the last digit can be increased by conditioning the generation of the last digit on the correct intervening digits, despite the computation of the last digit not depending on the correct computations of the higher-order digits at all. 2 Experiments We evaluate the HuggingFace (Wolf et al., 2019) implementations of Llama 2-7B, Llama 2-13B, and Mistral-7B (Touvron et al., 2023; Jiang et al., 2023) in 2-shot settings, where the 2-shot examples are of correct n-digit by m-digit multiplications. Sections 2.1 and 2.2 show results on the 3-digit by 3-digit multiplication task 592 ∗392, and averages over multiple problems with varying digit length are provided in Section 2.3. Details about the prompt and hyperparameters are given in Appendix A, details about the tokenizers for the models are given in Appendix B, and details about the use of dropout in the training of the models is given in Appendix C. 2.1 Unconditional Answer Generation We first study a version of the problem in which the answer is generated with the language model conditioned on the few shot examples and the problem to be solved, but is provided with none of 1Consider that the highest-order digit of 316227766016837933192 is 9, but the highest-order digit of 316227766016837933202 is 1. the digits to be generated (i.e., the normal few-shot arithmetic scenario), which we refer to as “unconditional” generation in an abuse of terminology. Our main results for these experiments are in Figures 1 and 2. In Figure 1 we can see that both Llama 2-7B and Llama 2-13B can confidently and correctly predict the first digit of the 3-digit by 3-digit multiplication task 592 ∗392, which equals 232064. This should be surprising as it is not immediately apparent from the problem that the first digit of the solution should be 2, and the only way to discover this is to compute the multiplication. As LLMs most likely cannot perform n-digit by m-digit multiplication in the general case without decomposing the problem into steps, the output of the first digit in this case is unlikely to be the output of a multiplication algorithm learned by the LLM. Figure 1: Confidence and accuracy of Llama 2-7B and Llama 2-13B predicting the first digit of the result of 592∗392. Both language models are able to confidently and correctly predict that the first digit should be 2, despite this not being immediately apparent from the problem. Conversely, in Figure 2, we can see that both Llama 2-7B and Llama 2-13B can neither confidently nor correctly predict the last digit of the same problem, despite doing so being equivalent to 1-digit by 1-digit multiplication. This is a case in which any reasonable model should be able to confidently and correctly solve the task, as not only 86 Figure 2: Confidence and accuracy of Llama 2-7B and Llama 2-13B predicting the sixth digit of the result of 592 ∗392. Neither are able to predict this digit confidently, with the mode of the distribution on the “end string” character in both cases. Both only output 4 in about 20% of samples, despite it being immediately apparent that the final digit should be 4. could the algorithm to solve the task be learned by an autoregressive language model, but the information needed to solve this task could also very easily be memorized by language models with billions of weights. 2.2 Conditional Answer Generation Finally, we contrast the experiments given in Figures 1 and 2 with a third experiment, in which the LLM is given all digits from the answer except for the final digit, and is tasked with outputting solely the final digit, which we refer to as “conditional” generation in an abuse of terminology. Results for this experiment are given in Figure 3. In this case the confidence in the correct output doubles for Llama 2-7B and triples for Llama 2-13B, with Llama 2-13B now having most of its probability mass on the correct last digit, whereas it did not do so when generating the entire string at once (and therefore often conditioning on incorrect prior digits). The fact that in both cases, more probability mass is being put on the correct answer should be surprising, as the computation of this digit does not depend on the correctness of the higher-order digits Figure 3: Confidence and accuracy of Llama 2-7B and Llama 2-13B predicting the last digit of the result of 592 ∗392, when conditioned on the first five correct digits. The confidence in the correct answer being 4 doubles for Llama 2-7B and more than triples for Llama 2-13B, despite the computation of the last digit not depending on the prior digits being correct at all. in any way. 2.3 Ablation Over Digit Length We provide further ablations over digit length with Llama 2-7B and 13B in Table 1. Each subtable gives the confidence of the correct digit, averaged over 10 different n-digit by m-digit multiplication problems each. We find that the conclusions shown for a single example in Sections 2.1 and 2.2 hold over varying multiplication problems and digit lengths in general. We further provide similar Mistral-7B experiments in Table 2. While Mistral-7B is stronger at arithmetic tasks than both Llama 2-7B and 13B, the same patterns and conclusions found for Llama 2-7B and 13B also hold for Mistral-7B. 3 Discussion of Results 3.1 First Digit It is most likely impossible for autoregressive LLMs to compute the first digit of an n-digit by mdigit multiplication problem without decomposing the problem into steps, especially given that the answer is being written starting with the highest-order 87 Llama 2-7B Llama 2-13B aaaaaa a n m 2 3 4 5 2 0.81 0.90 0.82 0.82 3 0.91 0.78 0.88 0.92 4 0.88 0.83 0.92 0.77 5 0.89 0.74 0.89 0.87 (a) aaaaaa a n m 2 3 4 5 2 0.84 0.85 0.79 0.73 3 0.87 0.72 0.85 0.86 4 0.84 0.83 0.78 0.78 5 0.86 0.71 0.84 0.86 (b) aaaaaa a n m 2 3 4 5 2 0.52 0.34 0.16 0.20 3 0.39 0.22 0.16 0.19 4 0.40 0.21 0.20 0.15 5 0.33 0.20 0.15 0.11 (c) aaaaaa a n m 2 3 4 5 2 0.78 0.50 0.32 0.30 3 0.56 0.40 0.24 0.17 4 0.63 0.37 0.29 0.22 5 0.52 0.30 0.24 0.13 (d) aaaaaa a n m 2 3 4 5 2 0.64 0.41 0.24 0.51 3 0.55 0.45 0.38 0.40 4 0.43 0.33 0.38 0.36 5 0.44 0.41 0.26 0.25 (e) aaaaaa a n m 2 3 4 5 2 0.82 0.66 0.48 0.57 3 0.66 0.68 0.49 0.51 4 0.73 0.54 0.56 0.47 5 0.70 0.54 0.50 0.43 (f) Table 1: Llama 2-7B and 13B generation average confidence of the correct first digit (a, b), unconditional average confidence of the correct last digit (c, d), and conditional average confidence of the correct last digit (e, f). digit, and calculating the first digit depends on the correct calculations of the lower-order digits. LLMs can, however, perform 1-digit by 1-digit multiplication. If these LLMs were to internally round 592 to 600 and 392 to 400, it could approximately solve for the highest-order digit in this way, as 600 ∗400 is a computation that can be performed by autoregressive language models. We find it likely that such a computation is occurring inside these LLMs, especially as stochastic gradient descent is likely to find such “shortcuts.” 3.2 Last Digit Both LLMs failing to predict the last digit when generating the entire string autoregressively, and their confidence and accuracy in predicting the last digit increasing when conditioned on correct prior digits, seem to be related, and could stem from the view that autoregressive language models are “exponentially diverging diffusion processes,” a view that several researchers have argued informally (LeCun et al., 2023), and has also recently been more formally proven (Dziri et al., 2023). The argument is essentially that if an autoregressive LLM has some non-zero chance of making a mistake, then repeated application of that LLM to generate a long string will cause errors to compound exponentially. This argument is not fully satisfying, however, for explaining the behavior of LLMs in predicting the last digit. Not only should p(last_digit|wrong_intervening_digits) be the same as p(last_digit|correct_intervening_digits) due to the computation involved (the last digit not depending on any other digits of the answer at all), but the fact that LLMs are more correct and more confident when conditioned on correct digits rather than wrong digits means that LLMs are able to internally distinguish between the two states, despite not being able to generate the entire correct string in the first place. This finding may be related to recent results in the hallucination detection literature, where it has been noted that the internal states of LLMs can be used to detect when the conditioning text, including its own outputs, are wrong (Azaria and Mitchell, 2023; Chen et al., 2024). It stands to reason that if the internal states of an LLM differ depending 88 aaaaaa a n m 2 3 4 5 2 0.97 ± 0.03 0.98 ± 0.03 0.98 ± 0.02 1.00 ± 0.00 3 0.98 ± 0.03 1.00 ± 0.00 0.94 ± 0.09 0.93 ± 0.04 4 0.99 ± 0.01 0.87 ± 0.15 0.98 ± 0.04 0.82 ± 0.09 5 0.89 ± 0.1 0.94 ± 0.11 0.95 ± 0.06 0.99 ± 0.01 (a) aaaaaa a n m 2 3 4 5 2 0.74 ± 0.06 0.57 ± 0.26 0.52 ± 0.29 0.41 ± 0.21 3 0.87 ± 0.10 0.70 ± 0.13 0.20 ± 0.12 0.11 ± 0.07 4 0.44 ± 0.14 0.70 ± 0.14 0.28 ± 0.23 0.30 ± 0.15 5 0.70 ± 0.10 0.33 ± 0.09 0.20 ± 0.13 0.22 ± 0.07 (b) aaaaaa a n m 2 3 4 5 2 0.85 ± 0.23 0.83 ± 0.13 0.73 ± 0.21 0.76 ± 0.23 3 0.86 ± 0.13 0.85 ± 0.11 0.75 ± 0.22 0.57 ± 0.32 4 0.76 ± 0.17 0.62 ± 0.27 0.77 ± 0.26 0.59 ± 0.26 5 0.80 ± 0.18 0.68 ± 0.21 0.65 ± 0.26 0.55 ± 0.35 (c) Table 2: Mistral-7B generation average and standard deviation confidence of the correct first digit (a), unconditional average and standard deviation confidence of the correct last digit (b), and conditional average and standard deviation confidence of the correct last digit (c). on whether its conditioning is correct or not, then further outputs which are autoregressively generated based on these internal states may also differ. In other words, while previous results show that LLMs may experience exponentially compounding errors, our finding suggests this may occur not only due to faulty reasoning when using incorrect intermediate steps, but also when the LLM “realizes” that it had generated incorrect output, and then “believes” that its task is to continue to do so. While out of the scope of this paper, we are interested in further study of this property in particular, and its potential implications. 4 Conclusion Here we present findings on the application of LLMs to arithmetic tasks, seen through the lens of Monte Carlo Dropout. We found that the abilities of what LLMs can do in practice, versus what the theory dictates should be possible for LLMs to do, can be reversed in several cases. In particular, we found that Llama 2 and Mistral could confidently and correctly output the first digit of the result of ndigit by m-digit multiplication tasks despite most likely being unable to in the general case, whereas they struggled with outputting the last digit either correctly or confidently, a task which should be easily learnable. We also found that accuracy and confidence in outputting the last digit increases when the prior digits are correct, and we believe that this finding is related to, and could have implications for, recent results in hallucination detection. 5 Limitations MC Dropout is a technique that is only applicable when neural network weights are available and the neural network was trained with dropout. These restrictions limit the number of language models that can be analyzed with the techniques in this paper significantly, and crucially, state of the art language models such as GPT-4 (OpenAI, 2023), Gemini (Gemini Team et al., 2023), and Claude (Anthropic, 2023) cannot be analyzed in this way by researchers outside of OpenAI, Google, and Anthropic respectively. Such limitations make clear the need for researchers to have access to language models with open weights. As we have restricted our analysis to Llama 2 and Mistral (which share similar architectures), it is possible that our findings do not generalize to 89 other large language models, but given the very small number of existing language models that can be analyzed in this way, it will be difficult to gauge the generality of our findings until more language models which were trained with dropout and have open weights are released. References Anthropic. 2023. Model Card and Evaluations for Claude Models. Arsenii Ashukha, Alexander Lyzhov, Dmitry Molchanov, and Dmitry Vetrov. 2020. Pitfalls of in-domain uncertainty estimation and ensembling in deep learning. In 8th International Conference on Learning Representations, ICLR 2020. Amos Azaria and Tom Mitchell. 2023. The Internal State of an LLM Knows When It’s Lying. In Findings of the Association for Computational Linguistics: EMNLP 2023. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in neural information processing systems 33, pages 1877–1901. Chao Chen, Kai Liu, Ze Chen, Yi Gu, Yue Wu, Mingyuan Tao, Zhihang Fu, and Jieping Ye. 2024. INSIDE: LLMs’ Internal States Retain the Power of Hallucination Detection. In The Twelfth International Conference on Learning Representations. Nouha Dziri, Ximing Lu, Melanie Sclar, Lorraine Li, Liwei Jiang, Bill Yuchen Lin, Peter West, Chandra Bhagavatula, Ronan Le Bras, Jena D Hwang, Soumya Sanyal, Sean Welleck, Xiang Ren, Allyson Ettinger, Zaid Harchaoui, and Yejin Choi. 2023. Faith and Fate: Limits of Transformers on Compositionality. In Thirty-seventh Conference on Neural Information Processing Systems. Yarin Gal and Zoubin Ghahramani. 2016. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In 33rd International Conference on Machine Learning, ICML 2016, volume 3, pages 1651–1660. Gemini Team, Rohan Anil, Sebastian Borgeaud, Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M Dai, Anja Hauth, and others. 2023. Gemini: a family of highly capable multimodal models. arXiv preprint arXiv:2312.11805. Gemma Team. 2024. Gemma: Open models based on gemini research and technology. arXiv preprint arXiv:2403.08295. Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2021. Measuring Massive Multitask Language Understanding. Proceedings of the International Conference on Learning Representations (ICLR). Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. 2023. Mistral 7B. arXiv preprint arXiv:2310.06825. Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. Advances in neural information processing systems, 35:22199– 22213. Yann LeCun, Brenden Lake, Jacob Browning, David Chalmers, Ellie Pavlick, and Gary Lupyan. 2023. Debate: Do language models need sensory grounding for meaning and understanding? Tiedong Liu, Bryan Kian, and Hsiang Low. 2023. Goat: Fine-tuned LLaMA Outperforms GPT-4 on Arithmetic Tasks. arXiv preprint arXiv:2305.14201. Shentong Mo and Miao Xin. 2023. Tree of Uncertain Thoughts Reasoning for Large Language Models. arXiv preprint arXiv:2309.07694. Meredith Ringel Morris, Jascha Sohl-Dickstein, Noah Fiedel, Tris Warkentin, Allan Dafoe, Aleksandra Faust, Clement Farabet, and Shane Legg. 2024. Levels of AGI: Operationalizing Progress on the Path to AGI. arXiv preprint arXiv:2311.02462. Rodrigo Nogueira, Zhiying Jiang, Jimmy Lin, and David R Cheriton. 2021. Investigating the Limitations of Transformers with Simple Arithmetic Tasks. Technical report. OpenAI. 2023. GPT-4 Technical Report. Technical report. Yaniv Ovadia, Emily Fertig, Jie Ren, Zachary Nado, D. Sculley, Sebastian Nowozin, Joshua V. Dillon, Balaji Lakshminarayanan, and Jasper Snoek. 2019. Can you trust your model’s uncertainty? evaluating predictive uncertainty under dataset shift. In Advances in Neural Information Processing Systems, volume 32. Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S. Sara Mahdavi, Raphael Gontijo-Lopes, Tim Salimans, Jonathan Ho, David J. Fleet, and Mohammad Norouzi. 2022. Photorealistic Text-to-Image Diffusion Models with 90 Deep Language Understanding. In Advances in Neural Information Processing Systems, volume 35. Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Eric Hambro, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. 2024. Toolformer: Language models can teach themselves to use tools. Advances in Neural Information Processing Systems, 36. Artem Shelmanov, Evgenii Tsymbalov, Dmitri Puzyrev, Kirill Fedyanin, Alexander Panchenko, and Maxim Panov. 2021. How certain is your transformer? In EACL 2021 - 16th Conference of the European Chapter of the Association for Computational Linguistics, Proceedings of the Conference. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, and others. 2023. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288. Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, and Anima Anandkumar. 2023. Voyager: An OpenEnded Embodied Agent with Large Language Models. arXiv preprint arXiv:2305.16291. Noam Wies, Yoav Levine, and Amnon Shashua. 2022. Sub-Task Decomposition Enables Learning in Sequence to Sequence Tasks. In The Eleventh International Conference on Learning Representations. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, and others. 2019. Huggingface’s transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771. A Prompt Format and Hyperparameters The exact prompt used in Sections 2.1 and 2.2 is “111 ∗472 = 52392. 362 ∗194 = 70228. {math_question} = {given_str}” where math_question is the multiplication task, and given_str is the empty string in Section 2.1 and all but the last digit of the correct answer in Section 2.2. In Section 2.3 the prompts are randomly generated 2-shot n-digit by m-digit multiplication examples in the same format. We set the dropout rate to be 0.1, which is the dropout rate commonly used in GPT applications, and appears to be the dropout rate used to train Llama 2 and Mistral. All sampling from LLMs is done deterministically other than the stochasticity induced by dropout (i.e., we take argmax over logits). We collect 100 samples for each output. B Tokenization Both the Llama 2 and Mistral tokenizers have one single token for each digit, 0 to 9, and no digits appear in any tokens other than these. This property has been shown to be necessary to consistently perform even simple addition tasks (Nogueira et al., 2021). C Dropout The use of MC Dropout to model uncertainty in neural networks requires, as a prerequisite, that the neural networks were trained with dropout. As we do not know the exact training details of Llama 2 or Mistral, we cannot be fully assured that they used dropout in training. We do, however, have very strong reason to believe that they did use dropout during training, due to the fact that both of these models still output reasonable text when dropout is turned on. Conversely, the Gemma (Gemma Team, 2024) HuggingFace code also has dropout, but when dropout is turned on even to only 10%, the model outputs are entirely nonsensical (when attempting these experiments with Gemma, we do not even get numbers as output when dropout is turned on, but do get reasonable output with dropout turned off). The sort of robustness to neurons being dropped out that can be seen in Llama 2 and Mistral only occurs in models that were actually trained with dropout, and thus we can be fairly confident that the use of MC Dropout here is appropriate. 91
2024_short
8
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 92–103 August 11-16, 2024 ©2024 Association for Computational Linguistics Simpson’s Paradox and the Accuracy-Fluency Tradeoff in Translation Zheng Wei Lim, Ekaterina Vylomova, Trevor Cohn,∗and Charles Kemp The University of Melbourne [email protected] {vylomovae,t.cohn,c.kemp}@unimelb.edu.au Abstract A good translation should be faithful to the source and should respect the norms of the target language. We address a theoretical puzzle about the relationship between these objectives. On one hand, intuition and some prior work suggest that accuracy and fluency should trade off against each other, and that capturing every detail of the source can only be achieved at the cost of fluency. On the other hand, quality assessment researchers often suggest that accuracy and fluency are highly correlated and difficult for human raters to distinguish (CallisonBurch et al., 2007). We show that the tension between these views is an instance of Simpson’s paradox, and that accuracy and fluency are positively correlated at the level of the corpus but trade off at the level of individual source segments. We further suggest that the relationship between accuracy and fluency is best evaluated at the segment (or sentence) level, and that the trade off between these dimensions has implications both for assessing translation quality and developing improved MT systems. 1 Introduction No translation can simultaneously satisfy all possible goals, and translation is therefore an art of navigating competing objectives (Darwish, 2008). Many objectives are discussed in the literature, but two in particular seem especially fundamental. The first is accuracy (also known as fidelity or adequacy), or the goal of preserving the information in the source text (ST). The second is fluency, or the goal of producing target text (TT) that respects the norms of the target language (TL) and is easy for the recipient to process (Kunilovskaya, 2023). Here we study the relationship between accuracy and fluency and work with two operationalizations of these notions. The first relies on human judgments of accuracy and fluency collected in prior work on translation quality estimation (Castilho ∗Now at Google. et al., 2018). The second relies on probabilities estimated using neural machine translation (NMT) models. Given a source-translation pair (x, y), p(x|y) corresponds to accuracy, and p(y) corresponds to fluency (Teich et al., 2020). p(x|y) will be low if y fails to preserve all of the information in x, and p(y) will be low if y violates the norms of the target language. To highlight that model estimates p(x|y) and p(y) are related to but distinct from human ratings of accuracy and fluency, we refer to p(x|y) as accuracyM and p(y) as fluencyM. Some parts of the literature argue that accuracy trades off with fluency. In Figure 1a, the blue dots are translations of the same source segment, and Table 1 shows three translations that illustrate the same kind of tradeoff. A translator choosing between these alternatives cannot simultaneously maximize accuracy and fluency, because the most accurate translations are not the most fluent, and vice versa. Teich et al. (2020) argues that accuracyM and fluencyM should trade off in this way, and the same view is implicitly captured by noisy-channel models of translation (Brown et al., 1993), which aim to generate translations y that maximize p(y|x) ∝p(x|y)p(y). Typically these models include weights for the two components p(x|y) and p(y) that can be interpreted as the extent to which accuracyM is prioritized over fluencyM, or vice versa (Yu et al., 2016; Yee et al., 2019; Yu et al., 2020; Müller et al., 2020). An opposing view of the relationship between accuracy and fluency emerges from the literature on quality estimation. Here the common wisdom is that accuracy and fluency are highly correlated and practically indistinguishable to human annotators (Callison-Burch et al., 2007; Banchs et al., 2015; Mathur, 2021, but see Djiako 2019; Sulem et al. 2020). As a result, accuracy and fluency are conflated as a single assessment score in recent WMT General Machine Translation Tasks, with more emphasis given to accuracy than fluency (Farhad et al., 92 Translation accuracy fluency accuracyM fluencyM log p(y|x) (i) Ich gab Ihnen eine Rückerstattung des Buches. 23.0 25.0 -10.81 -56.0 -10.31 (ii) Ich habe Ihnen eine Rückerstattung des Buches ausgestellt. 24.3 24.7 -6.13 -64.0 -12.13 (iii) Ich stellte Ihnen eine Rückerstattung des Buches aus. 25.0 23.0 -6.44 -70.0 -14.75 Table 1: Translations of “I issued you a refund of the book." from English to German, which correspond to three of the orange dots in Figure 1. Human ratings of accuracy and fluency are derived from MQM scores, and accuracyM (log p(x|y)) and fluencyM (log p(y)) are estimated using an NMT model. Option (i) is acceptable but gab (past tense of give) is less accurate than the conjugations of ausstellen (issue) used in (ii) and (iii). Option (iii) is the least natural because stellte ... aus (Präteritum tense) is typically used only in formal writing. 0 10 20 accuracy 22 24 26 fluency rs=-0.666 rs=-0.592 * rs=-0.757 * rc=0.140 300 200 100 0 accuracyM 3000 2000 1000 0 fluencyM rs=-0.979 * rs=-0.933 * rs=-0.919 * rc=0.736 * Figure 1: Simpson’s paradox. Each panel shows translations of three source segments indexed by color and marker shape. At the source segment level, accuracy and fluency (left) and accuracyM and fluencyM (right, probabilities plotted on log scales) both show negative correlations rs. At the corpus level, both pairs of dimensions show positive correlations rc (see panel labels). Significant correlations (p < .05) are marked with ‘*’. Source segments and translations are drawn from past WMT General Task submissions and data points have been jittered for clarity. The shaded areas show 95% confidence intervals based on 1000 bootstrapped samples. Full translations are included in Tables 2 (orange dots), 3 (green dots) and 4 (blue dots) of the appendix. 2021; Kocmi et al., 2022, 2023). We argue that the conflict between these views is an instance of Simpson’s paradox (Yuan et al., 2021), which occurs when a relationship at one level of analysis (e.g. the corpus level) disappears or is reversed at a different level (e.g. the segment or sentence level). Figure 1 shows how the correlation rc between accuracy and fluency can be positive over a miniature corpus including translations of three source segments even though the correlation rs for each individual source segment is negative. Of the two levels of analysis, the segment level is the appropriate level for understanding how humans and machine translation systems should choose among possible translations of a source segment. The central goal of our work is therefore to establish that the correlation between accuracy and fluency is negative at the level of individual source segments. 2 Tradeoff between p(x|y) and p(y) Because accuracyM and fluencyM have formal definitions, we start with these dimensions. 2.1 Theoretical formulation and simulation Let Y be a finite set of translations of source segment x, and let ⃗px|y and ⃗py denote log probability vectors that include accuracyM and fluencyM scores for all y ∈Y .1 We use the Pearson correlation between the two vectors: rs = corr(⃗px|y, ⃗py) (1) to quantify the tradeoff between accuracyM and fluencyM across translations of x. If rs > 0 there is no tradeoff, and the translations with higher accuracyM also tend to have higher fluencyM. If rs < 0 the dimensions trade off, and improving a translation along one dimension tends to leave it worse along the other. Note that rs is a correlation at the segment level, and should be distinguished from the corpus-level correlation rc between p(x|y) and p(y) over an entire corpus of segments x and their translations y. Suppose that a translator is considering candidate translations y of source segment x. There are a vast number of possible translations, including many nonsense translations, but we assume that the translator chooses among a small set of good translations that all have near-maximal values of p(y|x). Because p(y|x) ∝p(x|y)p(y) is roughly constant over this set of good translations, it follows that accuracyM and fluencyM trade off within the set. To validate this informal argument, we ran simulations to confirm that tradeoffs between p(x|y) and p(y) emerge when x and y are numeric vectors drawn from a Gaussian joint distribution P(x, y) 1There are infinitely many possible translations, but here we consider a finite set generated by humans or machines. 93 centered at zero.2 We set an initial square matrix A with dimensionality equal to the total number of dimensions in x and y combined. Assuming all elements in x and y have σ2 = 1 and pairwise positive covariance, all diagonal elements of A are set to 1 and other elements 0.7. To ensure the covariance matrix is positive semi-definite, we replace the initial matrix A with a final covariance matrix defined as A⊤A. For each “source segment” x considered in our simulation, we generate 10,000 possible “translations” y by sampling from a distribution q(y) = Q i q(yi), where each element yi of y is sampled uniformly within two standard deviations of its mean. We then score each translation and compute p(x|y), p(y) and p(y|x) using the known joint P(x, y). We initially assume that both x and y are onedimensional vectors. Figure 2a shows the relationship between p(y) and p(x|y) for 3 “segments” x. Each point in each panel corresponds to a candidate translation y, and candidates with highest p(y|x) are shown in yellow. The correlation above each panel results from applying Equation 1 to all translations with p(y|x) above the 90th percentile (i.e. all points in the brightest part of each plot). The first “segment” x (leftmost panel) has relatively high probability p(x), and no tradeoff is observed in this case. The tradeoff emerges, however, and becomes increasingly strong as x moves away from the mode of the distribution p(x). At the “corpus” level, p(y) and p(x|y) are uncorrelated (rc = −.001, p = .970) when the top 10% of translations for each of the three “segments” are combined. Figure 2b shows that the tradeoff persists when the dimensionality of x and y is increased. The density plot for each dimensionality is based on a sample of 100 source “segments” (rather than the 3 in Figure 2a), and at all dimensionalities the majority of source “segments” induce tradeoffs. The tradeoffs are stronger (i.e. correlations more negative) when the candidate translations consist of the y with highest p(y|x) (top 10%), but for all dimensions except n = 1 most source “segments” still induce a tradeoff even if all candidate translations are considered. At the “corpus” level, p(y) and p(x|y) of the top translations are positively correlated (rc = .399, .159, .113, .109 for dimen2Code available at https://github.com/ ZhengWeiLim/accuracy-fluency-tradeoff. 0 1 p(x|y) 0.1 0.2 0.3 p(y) p(x)=0.327 rs=0.920 0 1 p(x|y) p(x)=0.317 rs=-0.298 0 1 p(x|y) p(x)=0.226 rs=-0.880 0.2 0.4 0.6 0.8 0.2 0.4 0.6 0.8 0.0 0.2 0.4 0.6 0.8 (a) Simulation with one-dimensional x and y. The three panels correspond to three different source “segments” x of decreasing probability p(x), and the points in each panel are candidate translations y. Brighter colors indicate translations with larger p(y|x). Pearson correlations across translations ranked in the top 10% based on p(y|x) are shown at the top of each panel. 1.0 0.5 0.0 0.5 1.0 rs 0 1 2 Density Top 10% #dim 1 2 4 8 1.0 0.5 0.0 0.5 1.0 rs All (b) Kernel density plots of tradeoffs across the top 10% (left) and across all translation choices (right). The tradeoff persists in higher dimensional space, and is stronger when selecting only y with the highest values of p(y|x). Figure 2: Tradeoffs between p(x|y) and p(y) in synthetic data. sionalities 1, 2, 4 and 8 respectively, p < .001). Although our simulations aim for simplicity rather than realism, they provide theoretical grounds for expecting tradeoffs at the segment level in real translations generated by humans and machines. They also suggest that the tradeoff may become stronger when only high-quality translations are considered, and that the strength of the tradeoff may depend on p(x). 2.2 Human and machine translation We now show that human and machine translations show the same tradeoff between accuracyM and fluencyM, which correspond to p(x|y) and p(y) estimated by an NMT model. Data. We analyze 15 translation studies from CRITT TPR-DB (CRITT) that include 13 language pairs (Carl et al., 2016b). We also use a subset of the Russian Learner Translator Corpus (RLTC) that has been aligned at the sentence level by Kunilovskaya (2023). For machine translation, we use WMT test sets which include segments of (mostly individual) sentences that are annotated with Multidimensional Quality Metrics labels (MTMQM) 94 1 0 1 rs 0 20 40 60 count CRITT t=-7.832 1 0 1 rs 0 50 100 count RLTC t=-10.462 1 0 1 rs 0 250 500 750 count MTMQM t=-62.105 Figure 3: Tradeoffs between estimated p(x|y) and p(y) across source segments from three corpora. Pairedsample t-tests against randomly permuted p(y) and p(x|y) are shown at the top of each panel. (Freitag et al., 2021a,b; Zerva et al., 2022; Freitag et al., 2023). To reduce spurious correlations, we remove duplicate translations and source segments with fewer than four unique translations. Additional details are provided in the appendix. Models. We use NLLB-200’s 3.3B variant model (Costa-jussà et al., 2022) to estimate p(y|x) and p(x|y).3 For consistency, we also extract p(y) based on the same model, skipping all inputs except for special tokens (e.g., <eos> tags).4 All probabilities are log scaled. Results. Figure 3 is a histogram analogous to the densities in Figure 2b, and shows distributions of tradeoff scores for source segments in CRITT, RLTC and MTMQM. In all three cases most source segments induce tradeoffs (i.e. produce negative correlations). To test for statistical significance we compared the actual distributions against randomly permuted data. The results of all paired-sample t-tests are significant (p < .001), and are included in the figure.5 When samples are aggregated at the corpus level, p(y) and p(x|y) show significant positive correlations (p < .001) for CRITT (rc = .625), RLTC (rc = .685) and MTMQM (rc = .675), revealing that Simpson’s paradox applies in all three cases. The simulation in Figure 2a suggests that segments with smaller p(x) tend to show greater tradeoffs, which predicts that p(x) and rs (Equation 1) should be positively correlated. Our data support this prediction for CRITT (r = .124, p = .013), RLTC (r = .225, p < .001) and MTMQM (r = .109, p < .001). 3NLLB model card 4To ensure reproducibility across models, we repeat our analysis in the appendix using M2M100 (Fan et al., 2021). 5Each permuted data set is created by randomly shuffling the pairings of p(x|y) and p(y) within the set of possible translations of each source segment. 1.0 0.5 0.0 0.5 1.0 rs 0 50 100 count RLTC t=-7.193 1.0 0.5 0.0 0.5 1.0 rs 0 200 400 600 count MTMQM t=-3.293 Figure 4: Tradeoffs between human ratings of accuracy and fluency across segments from two corpora. Pairedsample t-tests against randomly permuted scores are shown at the top of each panel. 3 Tradeoff between accuracy and fluency We now turn to human ratings of accuracy and fluency, and demonstrate that the two are again negatively correlated at the segment level. Data. Only RLTC and MTMQM are rated by human annotators. The subset of RLTC released by Kunilovskaya (2023) includes accuracy and fluency scores derived from error annotations. For MTMQM, we follow Freitag et al. (2021a) where accuracy scores are aggregates of “Accuracy” and “Terminology” errors, and fluency scores are aggregates of “Fluency”, “Style” and “Locale convention” errors. Targets that are labelled “Nontranslation” receive scores of zero for both accuracy and fluency. Major and minor errors receive penalties of 5 and 1 respectively. Fluency/Punctuation is assigned a penalty of 0.1. We calculate the final rating as sc = max(0, 25 −ec), where ec denotes the total penalty in error category c.6 Because some systems submit the same translation but receive different ratings, we average these scores and remove the duplicate entries. Results. Figure 4 shows correlations at the level of individual source segments. The majority of correlations are negative, and paired-sample t-tests reveal that both distributions are significantly (p < .001) different from distributions obtained from random permutations. The results therefore suggest that accuracy and fluency (as rated by humans) trade off at the level of individual segments. At the corpus level, accuracy and fluency are positively correlated for MTMQM (rc = .392, p < .001), and are uncorrelated in RLTC (rc = −.085, p < .001), suggesting again that Simpson’s paradox applies to both cases.7 6The maximum score is set at 25 because the maximum MTMQM penalty score is 25. 7Fluency and accuracy may be uncorrelated in RLTC at the 95 Unlike the case for accuracyM and fluencyM, human ratings of accuracy and fluency do not induce a positive correlation between p(x) and rs (r = −.150 and −.104 for RLTC and MTMQM respectively). We therefore find no support for the simulation-based prediction that low-probability sentences are more likely to produce strong tradeoffs between accuracy and fluency. Figure 4 is directly analogous to Figure 3, and we expected that source segments which showed strong tradeoffs (i.e. extreme negative correlations) in Figure 3 would also show strong tradeoffs in Figure 4. The two tradeoff measures, however, were uncorrelated,8 which suggests that accuracyM and fluencyM overlap only partially with human ratings of accuracy and fluency. A similar conclusion is suggested by Figure 5, which shows Pearson correlations of translation probability (p(y|x); blue bars), accuracyM (p(x|y); brown bars) and fluencyM (p(y); green bars) with human ratings of accuracy and fluency for RLTC and MTMQM.9 As expected, accuracyM shows a higher correlation with accuracy than fluency, and fluencyM shows the opposite pattern. Figure 5 however, suggests that accuracyM is not superior to p(y|x) as a predictor of accuracy, and that fluencyM is not superior to p(y|x) as a predictor of fluency. One reason why our model estimates of accuracy and fluency depart from human ratings is that accuracyM and fluencyM are sensitive to segment length. For example, a longer segment will have lower fluencyM than a shorter segment even if the two are both perfectly idiomatic. 4 Conclusion We showed that accuracy and fluency and p(x|y) and p(y) both trade off when translating individual source segments. This finding suggests that current protocols for assessing translation quality may need to be adjusted. Human assessments for recent WMT General Tasks are performed using Direct Assessment and Scalar Quality Metrics (DA+SQM) (Kocmi et al., 2022, 2023). This approach conflates meaning preservation and grammar into a single score indicative of overall quality of a transcorpus level because of a ceiling effect – 63.5% and 70.6% of sentences receive maximum ratings for fluency and accuracy in RLTC compared to 55.6% and 58.4% for MTMQM. 8The Pearson correlations between the two tradeoff measures for RLTC and MTMQM are r = .003, p = .933 and r = .022, p = .05. 9Values are in log scale and are ranked by percentile. Acc. Flu. 0.0 0.1 0.2 0.3 0.4 corr RLTC 2022 zhen 2022 ende 2023 zhen 2023 ende WMT Accuracy 2022 zhen 2022 ende 2023 zhen 2023 ende WMT Fluency log p(y|x) accuracyM fluencyM Figure 5: accuracyM and fluencyM predict human accuracy and fluency ratings for RLTC and WMT submissions to the general translation task in 2022 and 2023. zhen and ende refer to Chinese-English and EnglishGerman language pairs. All correlations reported are significant (p < .001). lation. In contrast, MQM is much more costly, but produces highly detailed scores that use multiple sub-categories for both accuracy and fluency. Future approaches could therefore consider a middle ground that extends DA+SQM to include accuracy and fluency as independent aspects as in WMT16 (Bojar et al., 2016). This direction would allow automatic MT evaluation metrics such as BLEURT (Sellam et al., 2020) and COMET (Rei et al., 2022) (both fine-tuned to DA scores) to be adapted to provide independent scores for accuracy and fluency. Our results also suggest the value of developing MT models that navigate the accuracy-fluency tradeoff in human-like ways. In some settings (e.g. translating legal texts) accuracy is more important than fluency (Popovi´c, 2020; Martindale and Carpuat, 2018; Vela and Tan, 2015; Specia et al., 2011; Martindale et al., 2019), but in others (e.g. translating informal conversation) fluency may take priority (Poibeau, 2022; FrankenbergGarcia, 2022). One natural approach to navigating the accuracy-fluency tradeoff builds on noisy channel models (Yu et al., 2016; Yee et al., 2019; Müller et al., 2020), which incorporate both p(y) and p(x|y) along with tradeoff parameters that specify the relative weights of the two. Tuning these parameters for specific registers may allow a model to find the right balance between accuracy and fluency in each case. 5 Limitations Although we provided evidence for both accuracyfluency and accuracyM-fluencyM tradeoffs in translation, we did not explore semantic and grammatical features that may predict which source segments produce the greatest tradeoffs. Outside of our simulation we do not have access to ground96 truth values of p(x|y) and p(y), and are only able to approximate these values using specific NMT models. Our work is also limited by the fact that MTMQM only includes translations generated by certain kinds of NMT models, and it is possible that our results do not generalize to translations generated by other types of models, such as statistical or rule-based MT systems. Finally, both RLTC and MTMQM have accuracy and fluency ratings derived from error annotations that are very similar in range. This constraint makes quality assessment and comparison at the segment level challenging. Ethics Statement We do not foresee any potential risks and harmful use of our work. Our analyses are based on licensed data which are freely available for academic use. Acknowledgements This work was supported by ARC FT190100200. References Fabio Alves and José Luiz Gonçalves. 2013. Investigating the conceptual-procedural distinction in the translation process: A relevance-theoretic analysis of micro and macro translation units. Target. International Journal of Translation Studies, 25(1):107–124. Rafael E Banchs, Luis F D’Haro, and Haizhou Li. 2015. Adequacy–fluency metrics: Evaluating MT in the continuous space model framework. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 23(3):472–482. Ondrej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, Varvara Logacheva, Christof Monz, et al. 2016. Findings of the 2016 conference on machine translation (WMT16). In First conference on machine translation, pages 131–198. Association for Computational Linguistics. Peter F Brown, Stephen A Della Pietra, Vincent J Della Pietra, and Robert L Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational linguistics, 19(2):263– 311. Chris Callison-Burch, Cameron Shaw Fordyce, Philipp Koehn, Christof Monz, and Josh Schroeder. 2007. (Meta-) evaluation of machine translation. In Proceedings of the Second Workshop on Statistical Machine Translation, pages 136–158. Michael Carl, Akiko Aizawa, and Masaru Yamada. 2016a. English-to-Japanese translation vs. dictation vs. post-editing: Comparing translation modes in a multilingual setting. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC’16), pages 4024–4031. Michael Carl and M Cristina Toledo Báez. 2019. Machine translation errors and the translation process: A study across different languages. Journal of Specialised Translation, 31:107–132. Michael Carl, Moritz Schaeffer, and Srinivas Bangalore. 2016b. The CRITT translation process research database. In New directions in empirical translation process research, pages 13–54. Springer. Sheila Castilho, Stephen Doherty, Federico Gaspari, and Joss Moorkens. 2018. Approaches to human and machine translation quality assessment. Translation quality assessment: From principles to practice, pages 9–38. Marta R Costa-jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, et al. 2022. No language left behind: Scaling human-centered machine translation. arXiv preprint arXiv:2207.04672. Ali Darwish. 2008. Optimality in translation. Writescope Publishers. Gabriel Armand Djiako. 2019. Lexical ambiguity in machine translation and its impact on the evaluation of output by users. Ph.D. thesis, Saarländische Universitäts-und Landesbibliothek. Barbara Dragsted. 2010. Coordination of reading and writing processes in translation: An eye on uncharted territory. In Translation and Cognition, pages 41–62. John Benjamins Publishing Company. Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, et al. 2021. Beyond English-centric multilingual machine translation. Journal of Machine Learning Research, 22(107):1–48. Akhbardeh Farhad, Arkhangorodsky Arkady, Biesialska Magdalena, Bojar Ondˇrej, Chatterjee Rajen, Chaudhary Vishrav, Marta R Costa-jussa, España-Bonet Cristina, Fan Angela, Federmann Christian, et al. 2021. Findings of the 2021 conference on machine translation (WMT21). In Proceedings of the Sixth Conference on Machine Translation, pages 1–88. Association for Computational Linguistics. Ana Frankenberg-Garcia. 2022. Can a corpus-driven lexical analysis of human and machine translation unveil discourse features that set them apart? Target, 34(2):278–308. Markus Freitag, George Foster, David Grangier, Viresh Ratnakar, Qijun Tan, and Wolfgang Macherey. 2021a. Experts, errors, and context: A large-scale study of human evaluation for machine translation. Transactions of the Association for Computational Linguistics, 9:1460–1474. 97 Markus Freitag, Nitika Mathur, Chi-kiu Lo, Eleftherios Avramidis, Ricardo Rei, Brian Thompson, Tom Kocmi, Frédéric Blain, Daniel Deutsch, Craig Stewart, et al. 2023. Results of WMT23 metrics shared task: Metrics might be guilty but references are not innocent. In Proceedings of the Eighth Conference on Machine Translation, pages 578–628. Markus Freitag, Ricardo Rei, Nitika Mathur, Chi-kiu Lo, Craig Stewart, George Foster, Alon Lavie, and Ondˇrej Bojar. 2021b. Results of the WMT21 metrics shared task: Evaluating metrics with expert-based human evaluations on TED and news domain. In Proceedings of the Sixth Conference on Machine Translation, pages 733–774. Kristian Tangsgaard Hvelplund Jensen, Annette C Sjørup, and Laura Winther Balling. 2009. Effects of L1 syntax on L2 translation. Copenhagen Studies in Language, 38:319–336. Tom Kocmi, Eleftherios Avramidis, Rachel Bawden, Ondˇrej Bojar, Anton Dvorkovich, Christian Federmann, Mark Fishel, Markus Freitag, Thamme Gowda, Roman Grundkiewicz, et al. 2023. Findings of the 2023 conference on machine translation (WMT23): LLMs are here but not quite there yet. In Proceedings of the Eighth Conference on Machine Translation, pages 1–42. Tom Kocmi, Rachel Bawden, Ondˇrej Bojar, Anton Dvorkovich, Christian Federmann, Mark Fishel, Thamme Gowda, Yvette Graham, Roman Grundkiewicz, Barry Haddow, et al. 2022. Findings of the 2022 conference on machine translation (WMT22). In Proceedings of the Seventh Conference on Machine Translation (WMT), pages 1–45. Maria Kunilovskaya. 2023. Translationese indicators for human translation quality estimation (based on English-to-Russian translation of mass-media texts). Ph.D. thesis, University of Wolverhampton. Marianna Martindale and Marine Carpuat. 2018. Fluency over adequacy: A pilot study in measuring user trust in imperfect MT. In Proceedings of the 13th Conference of the Association for Machine Translation in the Americas (Volume 1: Research Track), pages 13–25. Marianna Martindale, Marine Carpuat, Kevin Duh, and Paul McNamee. 2019. Identifying fluently inadequate output in neural and statistical machine translation. In Proceedings of Machine Translation Summit XVII: Research Track, pages 233–243. Nikita Mathur. 2021. Robustness in Machine Translation Evaluation. Ph.D. thesis, University of Melbourne. Bartolomé Mesa-Lao. 2014. Gaze behaviour on source texts: An exploratory study comparing translation and post-editing. In Post-editing of machine translation: Processes and applications, pages 219–245. Cambridge Scholars Publishing. Mathias Müller, Annette Rios Gonzales, and Rico Sennrich. 2020. Domain robustness in neural machine translation. In Proceedings of the 14th Conference of the Association for Machine Translation in the Americas (Volume 1: Research Track), pages 151–164. Jean Nitzke. 2019. Problem solving activities in postediting and translation from scratch: A multi-method study. Language Science Press. Dagmara Pło´nska. 2016. Problems of literality in french-polish translations of a newspaper article. New directions in empirical translation process research: exploring the CRITT TPR-DB, pages 279– 291. Thierry Poibeau. 2022. On “human parity” and “super human performance” in machine translation evaluation. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 6018– 6023. Maja Popovi´c. 2020. Relations between comprehensibility and adequacy errors in machine translation output. In Proceedings of the 24th Conference on Computational Natural Language Learning, pages 256–264. Ricardo Rei, José GC De Souza, Duarte Alves, Chrysoula Zerva, Ana C Farinha, Taisiya Glushkova, Alon Lavie, Luisa Coheur, and André FT Martins. 2022. Comet-22: Unbabel-ist 2022 submission for the metrics shared task. In Proceedings of the Seventh Conference on Machine Translation (WMT), pages 578–585. Márcia Schmaltz, Igor AL da Silva, Adriana Pagano, Fabio Alves, Ana Luísa V Leal, Derek F Wong, Lidia S Chao, and Paulo Quaresma. 2016. Cohesive relations in text comprehension and production: An exploratory study comparing translation and postediting. New Directions in Empirical Translation Process Research: Exploring the CRITT TPR-DB, pages 239–263. Thibault Sellam, Dipanjan Das, and Ankur Parikh. 2020. Bleurt: Learning robust metrics for text generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7881– 7892. Annette Camilla Sjørup. 2013. Cognitive effort in metaphor translation: An eye-tracking and keylogging study. Frederiksberg: Copenhagen Business School (CBS). Lucia Specia, Najeh Hajlaoui, Catalina Hallett, and Wilker Aziz. 2011. Predicting machine translation adequacy. In Proceedings of Machine Translation Summit XIII: Papers. Elior Sulem, Omri Abend, and Ari Rappoport. 2020. Semantic structural decomposition for neural machine translation. In Proceedings of the ninth joint conference on lexical and computational semantics, pages 50–57. 98 Elke Teich, José Martínez Martínez, and Alina Karakanta. 2020. Translation, information theory and cognition. The Routledge Handbook of Translation and Cognition, pages 9781315178127–24. Bram Vanroy. 2021. Syntactic difficulties in translation. Ph.D. thesis, Ghent University. Mihaela Vela and Liling Tan. 2015. Predicting machine translation adequacy with document embeddings. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 402–410. Lucas Nunes Vieira, Natalie Zelenka, Roy Youdale, Xiaochun Zhang, and Michael Carl. 2023. Translating science fiction in a CAT tool: Machine translation and segmentation settings. Translation & Interpreting, 15(1):216–235. Kyra Yee, Yann Dauphin, and Michael Auli. 2019. Simple and effective noisy channel modeling for neural machine translation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5696–5701. Lei Yu, Phil Blunsom, Chris Dyer, Edward Grefenstette, and Tomas Kocisky. 2016. The neural noisy channel. In International Conference on Learning Representations. Lei Yu, Laurent Sartran, Wojciech Stokowiec, Wang Ling, Lingpeng Kong, Phil Blunsom, and Chris Dyer. 2020. Better document-level machine translation with Bayes’ rule. Transactions of the Association for Computational Linguistics, 8:346–360. Fei Yuan, Longtu Zhang, Huang Bojun, and Yaobo Liang. 2021. Simpson’s bias in NLP training. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 14276–14283. Chrysoula Zerva, Frédéric Blain, Ricardo Rei, Piyawat Lertvittayakumjorn, José GC De Souza, Steffen Eger, Diptesh Kanojia, Duarte Alves, Constantin Orˇasan, Marina Fomicheva, et al. 2022. Findings of the WMT 2022 shared task on quality estimation. In Proceedings of the Seventh Conference on Machine Translation (WMT), pages 69–99. A Appendix A.1 Data specification A.1.1 Corpora The CRITT Translation Process Research Database (Carl et al., 2016b) is a collection of translation behavioural data in the area of Translation Process Research. From the public CRITT database we obtain 15 studies across 13 pairs of languages: RUC17 (enzh, Carl and Báez, 2019), ENJA15 (enja, Carl et al., 2016a), NJ12 (enhi, Carl et al., 2016b), STC17 (enzh, Carl and Báez, 2019), SG12 (ende, Nitzke, 2019), ENDU20 (ennl, Vanroy, 2021), BML12 (enes, Mesa-Lao, 2014), ACS08 (daen, Sjørup, 2013), MS13 (ptzh, Schmaltz et al., 2016), JLG10 (pten, Alves and Gonçalves, 2013), BD13 (daen, Dragsted, 2010), LWB09 (daen, Jensen et al., 2009), DG01 (plfr, Pło´nska, 2016), BD08 (daen, Dragsted, 2010) and CREATIVE (enzh, Vieira et al., 2023).10 After deduplication and removing source segments with fewer than 4 unique translations, the total number of source segments included is 399, each with an average of 10.9 unique translations. RLTC is a subset of the Russian Learner Translator Corpus that has been aligned at the segment level by Kunilovskaya (2023). We include a total of 1079 source segments from 5 genres: ‘Essay’, ‘Informational’, ‘Speech’, ‘Interview’ and ‘Educational’. The average number of unique translations for each source segment is 10.5. MTMQM is obtained from (Freitag et al., 2021a), which contains translations of TED talks and news data from the test sets of WMT General Tasks between 2020 and 2023.11 The translations are annotated with MQM labels. After preprocessing we are left with 11219 source segments and an average of 9.9 unique translations per source segment. A.2 Alternative result with M2M100 translation model In Figure 6 and 7, we replicate our findings of accuracyM and fluencyM in Section 2 and 3 with estimates based on M2M100 (1.2B variant) (Fan et al., 2021).12 A.3 Tradeoff examples Tables 2, 3 and 4 include the full set of translations plotted in Figure 1. The tables specify accuracy, fluency, accuracyM, fluencyM and translation probability p(y|x) for each segment. All translations listed are submissions to the WMT General Task between 2020 to 2022. 10https://sites.google.com/site/ centretranslationinnovation/tpr-db/ public-studies 11https://github.com/google/ wmt-mqm-human-evaluation 12https://huggingface.co/facebook/ m2m100_1.2B 99 Ich gab Ihnen eine Rückerstattung des Buches. {accuracy: 23.0, fluency: 25.0, accuracyM: -10.81, fluencyM: -56.0, log p(y|x): -10.31} Ich habe dir eine Rückerstattung des Buches ausgestellt. {accuracy: 23.0, fluency: 25.0, accuracyM: -5.84, fluencyM: -62.5, log p(y|x): -12.44} Ich habe dir das Buch zurückerstattet. {accuracy: 23.0, fluency: 25.0, accuracyM: -17.5, fluencyM: -44.25, log p(y|x): -7.63} Ich habe Ihnen das Buch erstattet. {accuracy: 24.0, fluency: 25.0, accuracyM: -15.19, fluencyM: -43.25, log p(y|x): -9.06} Ich habe Ihnen das Buch zurückerstattet. {accuracy: 24.2, fluency: 25.0, accuracyM: -17.25, fluencyM: -43.5, log p(y|x): -7.28} Ich habe Ihnen eine Rückerstattung des Buches ausgestellt. {accuracy: 24.3, fluency: 24.67, accuracyM: -6.13, fluencyM: -64.0, log p(y|x): -12.13} Ich stellte Ihnen eine Rückerstattung des Buches aus. {accuracy: 25.0, fluency: 23.0, accuracyM: -6.44, fluencyM: -70.0, log p(y|x): -14.75} Ich habe Ihnen eine Rückerstattung für das Buch erteilt. {accuracy: 25.0, fluency: 24.0, accuracyM: -11.56, fluencyM: -63.0, log p(y|x): -14.19} Table 2: Translations of I issued you a refund of the book. (plotted in orange in Figure 1). Accuracy and fluency scores are derived from MQM ratings, and accuracyM and fluencyM are estimates of log p(x|y) and log p(y) derived from an NMT model. 1 0 1 rs 0 20 40 count CRITT t=-5.481 1 0 1 rs 0 50 100 count RLTC t=-10.318 1 0 1 rs 0 250 500 750 count MTMQM t=-67.024 Tradeoff between accuracyM and fluencyM Figure 6: Histogram of tradeoffs between estimated p(x|y) and p(y) estimated by M2M100, which is analogous to Figure 3 in the main text. When analyzed at the corpus level, the correlations rc for for CRITT, RLTC and MTMQM are .689, .703 and .801 respectively (p < .001 in all cases). Acc. Flu. 0.0 0.1 0.2 0.3 0.4 corr RLTC 2022 zhen 2022 ende 2023 zhen 2023 ende WMT Accuracy 2022 zhen 2022 ende 2023 zhen 2023 ende WMT Fluency log p(y|x) accuracyM fluencyM Figure 7: accuracyM and fluencyM estimates based on M2M100 predict human accuracy and fluency ratings (p < .05). The figure is analogous to Figure 5. 100 Ashanti Development arbeitet seit fast 20 Jahren mit einer wachsenden Anzahl von Gemeinden in der Region Ashanti in Ghana zusammen und unterstützt sie in den Bereichen Wasser und sanitäre Einrichtungen, Bildung, Gesundheitsversorgung, Baumpflanzung und Landwirtschaft. {accuracy: 19.0, fluency: 25.0, accuracyM: -120.5, fluencyM: -498.0, log p(y|x): -27.0} Ashanti Development arbeitet seit fast zwanzig Jahren mit einer ständig wachsenden Anzahl von Gemeinden in der Region Ashanti in Ghana zusammen, engagiert sich mit Gemeinden und unterstützt Wasser und Sanitärversorgung, Bildung, Gesundheitsversorgung, Baumpflanzung und Landwirtschaft. Gemeinschaften erlangen das Wissen, um ihre eigene Entwicklung einzubetten und zu unterstützen. {accuracy: 22.0, fluency: 24.0, accuracyM: -47.5, fluencyM: -748.0, log p(y|x): -47.25} Ashanti Development arbeitet seit fast 20 Jahren mit einer ständig wachsenden Zahl von Gemeinden in der Ashanti-Region in Ghana zusammen, engagiert sich für Gemeinden und bietet Unterstützung in den Bereichen Wasser und sanitäre Einrichtungen, Bildung, Gesundheitsversorgung, Baumpflanzung und Landwirtschaft. Communities erwerben das Wissen, um ihre eigene Entwicklung zu verankern und zu unterstützen. {accuracy: 22.0, fluency: 25.0, accuracyM: -46.5, fluencyM: -832.0, log p(y|x): -49.0} Ashanti Development arbeitet seit 20 Jahren mit einer immer größeren Zahl von Gemeinden in der Region Ashanti in Ghana zusammen, engagiert sich mit Gemeinden und unterstützt Wasser und Sanitärversorgung, Bildung, Gesundheitsversorgung, Baumpflanzung und Landwirtschaft. {accuracy: 23.0, fluency: 24.9, accuracyM: -101.0, fluencyM: -516.0, log p(y|x): -39.25} Ashanti Development arbeitet seit fast 20 Jahren mit einer ständig wachsenden Anzahl von Gemeinden in der AshantiRegion Ghanas zusammen, indem es sich mit Gemeinden beschäftigt und ihnen Unterstützung in den Bereichen Wasser und Sanitärversorgung, Bildung, Gesundheitsversorgung, Baumpflanzung und Landwirtschaft bietet. {accuracy: 23.0, fluency: 25.0, accuracyM: -98.5, fluencyM: -652.0, log p(y|x): -29.625} Ashanti Development arbeitet seit fast 20 Jahren mit einer ständig wachsenden Zahl von Gemeinden in der Ashanti-Region in Ghana zusammen, arbeitet mit Gemeinden zusammen und unterstützt sie in den Bereichen Wasser und Abwasserentsorgung, Bildung, Gesundheitswesen, Baumpflanzung und Landwirtschaft. Gemeinschaften erwerben das Wissen, um ihre eigene Entwicklung zu verankern und zu unterstützen. {accuracy: 23.0, fluency: 24.0, accuracyM: -53.0, fluencyM: -828.0, log p(y|x): -42.5} Ashanti Development arbeitet seit fast 20 Jahren mit einer ständig wachsenden Anzahl von Gemeinden in der AshantiRegion in Ghana zusammen, engagiert sich für Gemeinden und unterstützt sie bei Wasser- und Sanitärversorgung, Bildung, Gesundheitswesen, Baumpflanzung und Landwirtschaft. Gemeinschaften gewinnen das Wissen, um ihre eigene Entwicklung einzubetten und zu unterstützen. {accuracy: 24.0, fluency: 23.0, accuracyM: -47.5, fluencyM: -784.0, log p(y|x): -45.25} Ashanti Development arbeitet seit fast 20 Jahren mit einer stetig wachsenden Anzahl von Gemeinden in der Ashanti-Region in Ghana zusammen, engagiert sich mit Gemeinden und bietet Unterstützung in den Bereichen Wasserversorgung und Abwasserentsorgung, Bildung, Gesundheitsversorgung, Baumpflanzung und Landwirtschaft. Gemeinden erwerben das Wissen, um ihre eigene Entwicklung zu verankern und zu unterstützen. {accuracy: 24.0, fluency: 24.0, accuracyM: -49.5, fluencyM: -848.0, log p(y|x): -42.25} Ashanti Development arbeitet seit fast 20 Jahren mit einer stetig wachsenden Anzahl von Gemeinschaften in der AshantiRegion von Ghana zusammen, engagiert sich in den Gemeinschaften und bietet Unterstützung in den Bereichen Wasser und Sanitär, Bildung, Gesundheitswesen, Baumpflanzung und Landwirtschaft. Die Gemeinschaften erwerben das Wissen, um ihre eigene Entwicklung zu verankern und zu unterstützen. {accuracy: 25.0, fluency: 22.0, accuracyM: -50.25, fluencyM: -828.0, log p(y|x): -43.0} Ashanti Development arbeitet seit fast 20 Jahren mit einer ständig wachsenden Zahl von Gemeinden in der AshantiRegion in Ghana zusammen, engagiert sich für Gemeinden und leistet Unterstützung bei Wasser- und Sanitärversorgung, Bildung, Gesundheitsversorgung, Baumpflanzung und Landwirtschaft. Gemeinschaften erlangen das Wissen, um ihre eigene Entwicklung zu verankern und zu unterstützen. {accuracy: 25.0, fluency: 24.0, accuracyM: -45.75, fluencyM: -816.0, log p(y|x): -45.0} Ashanti Development arbeitet seit fast 20 Jahren mit einer ständig wachsenden Zahl von Gemeinden in der Ashanti-Region in Ghana zusammen und unterstützt sie in den Bereichen Wasserversorgung und Abwasserentsorgung, Bildung, Gesundheitsversorgung, Baumpflanzung und Landwirtschaft. Die Gemeinden erlangen das Wissen, um ihre eigene Entwicklung zu fördern und zu unterstützen. {accuracy: 25.0, fluency: 24.0, accuracyM: -74.0, fluencyM: -768.0, log p(y|x): -42.0} Ashanti Development arbeitet seit fast 20 Jahren mit einer ständig wachsenden Zahl von Gemeinden in der Ashanti-Region in Ghana zusammen, engagiert sich für Gemeinden und leistet Unterstützung bei Wasser- und Sanitärversorgung, Bildung, Gesundheitswesen, Baumpflanzung und Landwirtschaft. Gemeinschaften erwerben das Wissen, um ihre eigene Entwicklung zu verankern und zu unterstützen. {accuracy: 25.0, fluency: 24.0, accuracyM: -46.25, fluencyM: -812.0, log p(y|x): -46.25} Table 3: Translations of Ashanti Development has been working with an ever-expanding number of communities in the Ashanti region of Ghana for approaching 20 years, engaging with communities and providing support with water and sanitation, education, healthcare, tree planting and farming. Communities gain the knowledge to embed and support their own development. These translations are plotted in green in Figure 1. 101 Interessanterweise war eine der 12 Galaxien in z66OD ein riesiges Objekt mit einem riesigen Gaskörper, bekannt als Himiko, das zuvor 2009 vom Subaru-Teleskop gefunden wurde. „Es ist vernünftig, einen Protohaufen in der Nähe eines massereichen Objekts wie Himiko zu finden. Wir sind jedoch überrascht zu sehen, dass Himiko nicht im Zentrum des Protohaufens lag, sondern am Rande 500 Millionen Lichtjahre vom Zentrum entfernt.“ Sagte Masami Ouchi, ein Teammitglied am Nationalen Astronomischen Observatorium von Japan und der Universität von Tokio, die Himiko im Jahr 2009 entdeckte, dass die Beziehung zwischen den Himiko und den Himiko-Klöstern noch immer nicht verstanden wird. {accuracy: 0.0, fluency: 22.9, accuracyM: -286.0, fluencyM: -1904.0, log p(y|x): -139.0} """""""Interessanterweise war eine der 12 Galaxien in z66OD ein riesiges Objekt mit einem riesigen Gaskörper, bekannt als Himiko, das zuvor vom Subaru-Teleskop im Jahr 2009 gefunden wurde. """"""""Es ist vernünftig, einen Protocluster in der Nähe eines massiven Objekts wie Himiko zu finden. Wir sind jedoch überrascht zu sehen, dass Himiko nicht im Zentrum des Protoclusters, sondern am Rand 500 Millionen Lichtjahre vom Zentrum entfernt war"""""""", sagte Masami Ouchi, ein Teammitglied am Nationalen Astronomischen Observatorium von Japan und der Universität von Tokio, der Himiko im Jahr 2009 entdeckte. Ironischerweise soll die mythologische Königin Himiko auch abgeschieden von ihrem Volk gelebt haben. Ouchi fährt fort: """"""""Es ist immer noch nicht verstanden, warum Himiko nicht im Zentrum liegt. Diese Ergebnisse werden ein Schlüssel für das Verständnis der Beziehung zwischen Haufen und massiven Galaxien sein""""""" {accuracy: 1.0, fluency: 23.4, accuracyM: -125.0, fluencyM: -2624.0, log p(y|x): -103.5} """""""Interessanterweise war eine der 12 Galaxien in z66OD ein riesiges Objekt mit einem riesigen Gaskörper, bekannt als Himiko, das zuvor vom Subaru-Teleskop im Jahr 2009 gefunden wurde. „Es ist vernünftig, einen Protocluster in der Nähe eines massiven Objekts, wie Himiko, zu finden. Allerdings sind wir überrascht zu sehen, dass Himiko nicht im Zentrum des Protoclusters, sondern am Rande 500 Millionen Lichtjahre vom Zentrum entfernt war.“, sagte Masami Ouchi, ein Teammitglied am Nationalen Astronomischen Observatorium von Japan und der Universität von Tokio, der Himiko im Jahr 2009 entdeckte. Ironischerweise soll die mythologische Königin Himiko auch abgeschieden von ihrem Volk gelebt haben. Ouchi fährt fort: """"""""Es ist immer noch nicht verstanden, warum Himiko nicht im Zentrum liegt. Diese Ergebnisse werden ein Schlüssel für das Verständnis der Beziehung zwischen Haufen und und massiven galaxien sein.""""""""""""""" {accuracy: 6.0, fluency: 24.0, accuracyM: -121.0, fluencyM: -2688.0, log p(y|x): -143.0} """""""Interessanterweise war eine der 12 Galaxien in z66OD ein riesiges Objekt mit einem riesigen Gaskörper, bekannt als Himiko, das zuvor 2009 vom Subaru-Teleskop gefunden wurde. „Es ist vernünftig, einen Protocluster in der Nähe eines massiven Objekts wie Himiko zu finden. Wir sind jedoch überrascht zu sehen, dass sich Himiko nicht im Zentrum des Protoclusters befand, sondern am Rande 500 Millionen Lichtjahre vom Zentrum entfernt“, sagte Masami Ouchi, Teammitglied am National Astronomical Observatory of Japan und der Universität Tokio, der Himiko 2009 entdeckte. Ironischerweise soll die mythologische Königin Himiko auch abgeschieden von ihrem Volk gelebt haben. Ouchi fährt fort: """"""""Es ist immer noch nicht verstanden, warum Himiko nicht im Zentrum liegt. Diese Ergebnisse werden ein Schlüssel für das Verständnis der Beziehung zwischen Haufen und massiven Galaxien sein.""""""""""""""" {accuracy: 6.0, fluency: 22.7, accuracyM: -126.0, fluencyM: -2592.0, log p(y|x): -123.0} Interessanterweise war eine der 12 Galaxien in z66OD ein riesiges Objekt mit einem riesigen Gaskörper, bekannt als Himiko, das zuvor 2009 vom Subaru-Teleskop gefunden wurde. „Es ist vernünftig, einen Protocluster in der Nähe eines massiven Objekts wie Himiko zu finden. Wir sind jedoch überrascht zu sehen, dass Himiko sich nicht im Zentrum des Protoclusters befand, sondern am Rand 500 Millionen Lichtjahre vom Zentrum entfernt“, sagte Masami Ouchi, Teammitglied am National Astronomical Observatory of Japan und der Universität Tokio, der Himiko 2009 entdeckte. Ironischerweise soll die mythologische Königin Himiko auch abseits ihres Volkes im Kloster gelebt haben. Ouchi fährt fort: „Es ist immer noch nicht verstanden, warum Himiko sich nicht im Zentrum befindet. Diese Ergebnisse werden ein Schlüssel zum Verständnis der Beziehung zwischen Haufen und massiven Galaxien sein.“ {accuracy: 9.0, fluency: 22.0, accuracyM: -131.0, fluencyM: -2512.0, log p(y|x): -108.0} """""""Interessanterweise war eine der 12 Galaxien in z66OD ein riesiges Objekt mit einem riesigen Gaskörper, bekannt als Himiko, das 2009 vom Subaru-Teleskop gefunden wurde. """"""""Es ist vernünftig, einen Protokluster in der Nähe eines massiven Objekts zu finden, wie z Himiko. Wir sind jedoch überrascht zu sehen, dass sich Himiko nicht in der Mitte des Protoklusters befand, sondern am Rand von 500 Millionen Lichtjahren vom Zentrum entfernt. """""""" sagte Masami Ouchi, ein Teammitglied des Nationalen Astronomischen Observatoriums Japans und der Universität Tokio, das Himiko 2009 entdeckte. Ironischerweise soll die mythologische Königin Himiko auch im Kloster von ihrem Volk gelebt haben. Ouchi fährt fort: """"""""Es ist immer noch nicht klar, warum Himiko nicht im Zentrum liegt. Diese Ergebnisse werden ein Schlüssel zum Verständnis der Beziehung zwischen Clustern und massiven Galaxien sein.""""""""""""""" {accuracy: 13.0, fluency: 20.7, accuracyM: -132.0, fluencyM: -2688.0, log p(y|x): -127.0} """""""Interessanterweise war eine der 12 Galaxien in z66OD ein riesiges Objekt mit einem riesigen Gaskörper, bekannt als Himiko, das zuvor vom Subaru-Teleskop im Jahr 2009 gefunden wurde. """"""""Es ist vernünftig, einen Protocluster in der Nähe eines massiven Objekts wie Himiko zu finden. Wir sind jedoch überrascht zu sehen, dass Himiko nicht im Zentrum des Protoclusters lag, sondern am Rande 500 Millionen Lichtjahre vom Zentrum entfernt"""""""", sagte Masami Ouchi, Teammitglied am Nationalen Astronomischen Observatorium Japans und der Universität Tokio, der Himiko 2009 entdeckte. Ironischerweise soll auch die mythologische Königin Himiko von ihrem Volk abgeschottet gelebt haben. Ouchi fährt fort: """"""""Es ist immer noch nicht klar, warum Himiko nicht in der Mitte liegt. Diese Ergebnisse werden ein Schlüssel zum Verständnis der Beziehung zwischen Clustern und massiven Galaxien sein.""""""""""""""" {accuracy: 16.0, fluency: 21.3, accuracyM: -122.5, fluencyM: -2624.0, log p(y|x): -111.0} 102 Table 4: Translations of """Interestingly, one of the 12 galaxies in z66OD was a giant object with a huge body of gas, known as Himiko, which was found previously by the Subaru Telescope in 2009. """"""""It is reasonable to find a protocluster near a massive object, such as Himiko. However, we’re surprised to see that Himiko was located not in the center of the protocluster, but on the edge 500 million light-years away from the center."""""""" said Masami Ouchi, a team member at the National Astronomical Observatory of Japan and the University of Tokyo, who discovered Himiko in 2009. Ironically, the mythological queen Himiko is also said to have lived cloistered away from her people. Ouchi continues, """"""""It is still not understood why Himiko is not located in the center. These results will be a key for understanding the relationship between clusters and massive galaxies.""""""""""" These translations are plotted in blue in Figure 1. 103
2024_short
9